| issue_owner_repo (list, lengths 2-2) | issue_body (string, lengths 0-261k, ⌀) | issue_title (string, lengths 1-925) | issue_comments_url (string, lengths 56-81) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, length 20) | issue_updated_at (string, length 20) | issue_html_url (string, lengths 37-62) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
The current documentation doesn't cover the Metal support of Llama-cpp, which needs different parameters to build.
Also, the default parameters used to initialize the LLM will crash with this error:
```
GGML_ASSERT: .../vendor/llama.cpp/ggml-metal.m:706: false && "not implemented"
```
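For reference, a minimal sketch of the kind of Metal-specific setup the documentation could add (the build flag and parameter values below are assumptions drawn from common llama-cpp-python usage, not a verified recipe):
```python
# Build llama-cpp-python with Metal enabled first, e.g.:
#   CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="/path/to/ggml-model.bin",  # hypothetical model path
    n_gpu_layers=1,   # offload layers to the Apple Silicon GPU
    n_batch=512,
    f16_kv=True,      # reportedly needed to avoid the GGML_ASSERT crash shown above
)
```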
### Idea or request for content:
I would like to contribute a pull request to polish the document with better Metal (Apple Silicon Chip) support of Llama-cpp. | DOC: enhancement with Llama-cpp document | https://api.github.com/repos/langchain-ai/langchain/issues/7091/comments | 1 | 2023-07-03T18:03:36Z | 2023-07-03T23:57:09Z | https://github.com/langchain-ai/langchain/issues/7091 | 1,786,607,479 | 7,091 |
[
"hwchase17",
"langchain"
]
| ### System Info
When trying to do an aggregation on a table (I've tried average, min, max), two duplicate queries are being generated, resulting in an error
```
syntax error line 2 at position 0 unexpected 'SELECT'.
[SQL: SELECT AVG(YEAR) FROM my_table
SELECT AVG(YEAR) FROM my_table]
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. db_chain = SQLDatabaseChain.from_llm(...)
2. db_chain.run("what is the average year?")
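For context, a minimal version of the setup above might look like this (the connection string and LLM are placeholders; the original report does not say which database or model was used):
```python
from langchain.chains import SQLDatabaseChain
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///my_table.db")  # hypothetical URI
llm = OpenAI(temperature=0)

db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
db_chain.run("what is the average year?")
```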
### Expected behavior
should return the average year | SQLDatabaseChain Double Query | https://api.github.com/repos/langchain-ai/langchain/issues/7082/comments | 5 | 2023-07-03T13:22:53Z | 2023-10-12T16:07:47Z | https://github.com/langchain-ai/langchain/issues/7082 | 1,786,178,650 | 7,082 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain: 0.0.221
Python: 3.9.17.
OS: MacOS Ventura 13.3.1 (a)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Below is the code to reproduce the error. I attempt to set up a simple few-shot prompting logic with an LLM for named entity extraction from documents. At the moment, I supply one single document with the expected entities to be extracted, and the expected answer in key-value pairs. Then I want to format the few-shot prompt object before feeding it as an input to one of the LLM chains, and this is the step where I encounter the error:
```
KeyError Traceback (most recent call last)
Cell In[1], line 56
53 new_doc = "Kate McConnell is 56 and is soon retired according to the current laws of the state of Alaska in which she resides."
54 new_entities_to_extract = ['name', 'organization']
---> 56 prompt = few_shot_prompt_template.format(
57 input_doc=new_doc,
58 entities_to_extract=new_entities_to_extract,
59 answer={}
60 )
File ~/Desktop/automarkup/.venv/lib/python3.9/site-packages/langchain/prompts/few_shot.py:123, in FewShotPromptTemplate.format(self, **kwargs)
120 template = self.example_separator.join([piece for piece in pieces if piece])
122 # Format the template with the input variables.
--> 123 return DEFAULT_FORMATTER_MAPPING[self.template_format](template, **kwargs)
File /opt/homebrew/Cellar/python@3.9/3.9.17/Frameworks/Python.framework/Versions/3.9/lib/python3.9/string.py:161, in Formatter.format(self, format_string, *args, **kwargs)
    160 def format(self, format_string, /, *args, **kwargs):
--> 161 return self.vformat(format_string, args, kwargs)
File ~/Desktop/automarkup/.venv/lib/python3.9/site-packages/langchain/formatting.py:29, in StrictFormatter.vformat(self, format_string, args, kwargs)
24 if len(args) > 0:
25 raise ValueError(
26 "No arguments should be provided, "
27 "everything should be passed as keyword arguments."
28 )
---> 29 return super().vformat(format_string, args, kwargs)
File /opt/homebrew/Cellar/python@3.9/3.9.17/Frameworks/Python.framework/Versions/3.9/lib/python3.9/string.py:165, in Formatter.vformat(self, format_string, args, kwargs)
163 def vformat(self, format_string, args, kwargs):
164 used_args = set()
--> 165 result, _ = self._vformat(format_string, args, kwargs, used_args, 2)
166 self.check_unused_args(used_args, args, kwargs)
167 return result
File /opt/homebrew/Cellar/python@3.9/3.9.17/Frameworks/Python.framework/Versions/3.9/lib/python3.9/string.py:205, in Formatter._vformat(self, format_string, args, kwargs, used_args, recursion_depth, auto_arg_index)
201 auto_arg_index = False
203 # given the field_name, find the object it references
204 # and the argument it came from
--> 205 obj, arg_used = self.get_field(field_name, args, kwargs)
206 used_args.add(arg_used)
208 # do any conversion on the resulting object
File /opt/homebrew/Cellar/python@3.9/3.9.17/Frameworks/Python.framework/Versions/3.9/lib/python3.9/string.py:270, in Formatter.get_field(self, field_name, args, kwargs)
267 def get_field(self, field_name, args, kwargs):
268 first, rest = _string.formatter_field_name_split(field_name)
--> 270 obj = self.get_value(first, args, kwargs)
272 # loop through the rest of the field_name, doing
273 # getattr or getitem as needed
274 for is_attr, i in rest:
File /opt/homebrew/Cellar/python@3.9/3.9.17/Frameworks/Python.framework/Versions/3.9/lib/python3.9/string.py:227, in Formatter.get_value(self, key, args, kwargs)
225 return args[key]
226 else:
--> 227 return kwargs[key]
KeyError: "'name'"
```
And the code is:
```
from langchain import FewShotPromptTemplate, PromptTemplate
example = {
"input_doc": "John Doe recently celebrated his birthday, on 2023-06-30, turning 30. He has been working with us at ABC Corp for a very long time already.",
"entities_to_extract": ["name", "date", "organization"],
"answer": {
"name": "John Doe",
"date": "2023-06-30",
"organization": "ABC Corp"
}
}
example_template = """
User: Retrieve the following entities from the document:
Document: {input_doc}
Entities: {entities_to_extract}
AI: The extracted entities are: {answer}
"""
example_prompt = PromptTemplate(
input_variables=["input_doc", "entities_to_extract", "answer"],
template=example_template
)
prefix = """You are a capable large language model. Your task is to extract entities from a given document.
Please pay attention to all the words in the following example and retrieve the requested entities.
"""
suffix = """
User: Retrieve the requested entities from the document.
Document: {input_doc}
Entities: {entities_to_extract}
AI:
"""
few_shot_prompt_template = FewShotPromptTemplate(
examples=[example],
example_prompt=example_prompt,
prefix=prefix,
suffix=suffix,
input_variables=["input_doc", "entities_to_extract"],
example_separator="\n\n"
)
new_doc = "Kate McConnell is 56 and is soon retired according to the current laws of the state of Alaska in which she resides."
new_entities_to_extract = ['name', 'organization']
prompt = few_shot_prompt_template.format(
input_doc=new_doc,
entities_to_extract=new_entities_to_extract,
answer={}
)
```
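For what it's worth, a possible workaround, under the assumption that the failure comes from the literal braces of the rendered `answer` dict being re-interpreted by `str.format` (not a confirmed fix): serialize the answer and escape its braces before building the example.
```python
import json

safe_answer = json.dumps(example["answer"]).replace("{", "{{").replace("}", "}}")
example_with_escaped_answer = {**example, "answer": safe_answer}

# Then pass [example_with_escaped_answer] to FewShotPromptTemplate instead of [example],
# and drop the unused answer={} argument from format().
```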
### Expected behavior
The expected behavior is to return the formatted template in string format, instead of the error showing up. | KeyError when formatting FewShotPromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/7079/comments | 7 | 2023-07-03T12:23:47Z | 2023-07-04T10:01:18Z | https://github.com/langchain-ai/langchain/issues/7079 | 1,786,073,569 | 7,079 |
[
"hwchase17",
"langchain"
]
| ### System Info
- Langchain Version 0.0.219
## Where does the problem arise:
When saving the OpenLLM model, there is a key called "llm_kwargs" which cannot be parsed when reloading the model. Instead, the contents of "llm_kwargs" should sit directly inside the "llm" object, without the "llm_kwargs" key.
### Example
```json
"llm": {
"server_url": null,
"server_type": "http",
"embedded": true,
"llm_kwargs": {
"temperature": 0.2,
"format_outputs": false,
"generation_config": {
"max_new_tokens": 1024,
"min_length": 0,
"early_stopping": false,
"num_beams": 1,
"num_beam_groups": 1,
"use_cache": true,
"temperature": 0.2,
"top_k": 15,
"top_p": 1.0,
"typical_p": 1.0,
"epsilon_cutoff": 0.0,
"eta_cutoff": 0.0,
"diversity_penalty": 0.0,
"repetition_penalty": 1.0,
"encoder_repetition_penalty": 1.0,
"length_penalty": 1.0,
"no_repeat_ngram_size": 0,
"renormalize_logits": false,
"remove_invalid_values": false,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_scores": false,
"encoder_no_repeat_ngram_size": 0
}
},
"model_name": "opt",
"model_id": "facebook/opt-1.3b",
"_type": "openllm"
}
```
In contrast the serialization for a OpenAI LLM looks the following:
```json
"llm": {
"model_name": "text-davinci-003",
"temperature": 0.7,
"max_tokens": 256,
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0,
"n": 1,
"request_timeout": null,
"logit_bias": {},
"_type": "openai"
}
```
### Who can help?
@hwchase17 @eyurtsev @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce (a minimal code sketch follows the list):
1. Create a OpenLLM object
2. Call `save()` on that object
3. Try to reload the previously generated OpenLLM object with `langchain.chains.loading.load_chain_from_config()`
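Expressed as a minimal sketch (the prompt, file name, and exact OpenLLM kwargs are placeholders/assumptions):
```python
from langchain.chains import LLMChain
from langchain.chains.loading import load_chain
from langchain.llms import OpenLLM
from langchain.prompts import PromptTemplate

llm = OpenLLM(model_name="opt", model_id="facebook/opt-1.3b", temperature=0.2)
chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template("{question}"))

chain.save("openllm_chain.json")             # writes the nested "llm_kwargs" block shown above
reloaded = load_chain("openllm_chain.json")  # fails: "llm_kwargs" cannot be parsed back
```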
### Expected behavior
The serialized OpenLLM model should be reloadable. | Serialization of OpenLLM Local Inference Models does not work. | https://api.github.com/repos/langchain-ai/langchain/issues/7078/comments | 4 | 2023-07-03T11:56:01Z | 2023-10-09T16:05:35Z | https://github.com/langchain-ai/langchain/issues/7078 | 1,786,027,453 | 7,078 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am trying to develop a chatbot that can give answers from multiple namespaces. That is why I am trying to use the MultiRetrievalQA chain, but the langchain version seems to be causing some issues for me. Any suggestions?
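For reference, a minimal sketch of how MultiRetrievalQAChain is typically wired up on a recent 0.0.2xx release (the names, descriptions, and the `retriever_a`/`retriever_b` objects are placeholders assumed to exist):
```python
from langchain.chains.router import MultiRetrievalQAChain
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)
retriever_infos = [
    {"name": "namespace_a", "description": "Answers questions about namespace A", "retriever": retriever_a},
    {"name": "namespace_b", "description": "Answers questions about namespace B", "retriever": retriever_b},
]
chain = MultiRetrievalQAChain.from_retrievers(llm, retriever_infos, verbose=True)
chain.run("your question here")
```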
### Suggestion:
_No response_ | I am trying to use multi retreival QA chain but not sure what version of langchain will be best? | https://api.github.com/repos/langchain-ai/langchain/issues/7075/comments | 6 | 2023-07-03T11:12:00Z | 2023-08-25T20:15:00Z | https://github.com/langchain-ai/langchain/issues/7075 | 1,785,956,452 | 7,075 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain-0.0.221
python-3.10.11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.schema import Document
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings

list_of_documents = [
    Document(page_content="foo", metadata=dict(page=1)),
    Document(page_content="bar", metadata=dict(page=1)),
    Document(page_content="foo", metadata=dict(page=2)),
    Document(page_content="barbar", metadata=dict(page=2)),
    Document(page_content="foo", metadata=dict(page=3)),
    Document(page_content="bar burr", metadata=dict(page=3)),
    Document(page_content="foo", metadata=dict(page=4)),
    Document(page_content="bar bruh", metadata=dict(page=4)),
]
embeddings = OpenAIEmbeddings()  # added so the snippet is self-contained
db = FAISS.from_documents(list_of_documents, embeddings)
```
### Expected behavior
I'm following the FAISS similarity search with filtering code example on the official website (https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/faiss), and an error occurs:
```
IndexError Traceback (most recent call last)
Cell In[56], line 13
2 from langchain.vectorstores import FAISS
3 list_of_documents = [
4 Document(page_content="foo", metadata=dict(page=1)),
5 Document(page_content="bar", metadata=dict(page=1)),
(...)
11 Document(page_content="bar bruh", metadata=dict(page=4)),
12 ]
---> 13 db = FAISS.from_documents(list_of_documents, embeddings)
File d:\Python310\lib\site-packages\langchain\vectorstores\base.py:332, in VectorStore.from_documents(cls, documents, embedding, **kwargs)
330 texts = [d.page_content for d in documents]
331 metadatas = [d.metadata for d in documents]
--> 332 return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File d:\Python310\lib\site-packages\langchain\vectorstores\faiss.py:551, in FAISS.from_texts(cls, texts, embedding, metadatas, ids, **kwargs)
525 @classmethod
526 def from_texts(
527 cls,
(...)
532 **kwargs: Any,
533 ) -> FAISS:
534 """Construct FAISS wrapper from raw documents.
...
--> 314 results[indices[i]].append(batched_embeddings[i])
315 num_tokens_in_batch[indices[i]].append(len(tokens[i]))
317 for i in range(len(texts)):
IndexError: list index out of range
```
How's that happened and how can i fix it? | Issue:IndexError: list index out of range | https://api.github.com/repos/langchain-ai/langchain/issues/7067/comments | 8 | 2023-07-03T07:33:27Z | 2024-02-27T16:08:40Z | https://github.com/langchain-ai/langchain/issues/7067 | 1,785,526,295 | 7,067 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using ConversationBufferMemory to maintain chat history. Here is my code; I don't know why my chat history is not working properly. If any update is required in my code, please let me know. I am using this code in my API.
```python
class Chatbot:
    def __init__(self):
        self.model = "gpt-3.5-turbo"

    async def generate_response(self, user, question):
        index = faiss.read_index("faiss_index/faisss_audien_new.index")
        print(index)
        with open("faiss_index/faisss_audien_new.pkl", "rb") as f:
            store = pickle.load(f)
        store.index = index
        retriever = store.as_retriever(search_kwargs=dict(k=1))
        memory = ConversationBufferMemory(retriever=retriever)
        llm = OpenAI(temperature=0, max_tokens=1500, model_name='gpt-3.5-turbo')  # Can be any valid LLM
        _DEFAULT_TEMPLATE = """
You are an AI girlfriend having a conversation with a human. Respond to all inputs in 50 words or less. Do not say you are an AI language model.
Relevant pieces of previous conversation:
{history}
(You do not need to use these pieces of information if not relevant)
Current conversation:
User: {input}
Kira AI:
"""
        PROMPT = PromptTemplate(
            input_variables=["history", "input"], template=_DEFAULT_TEMPLATE
        )
        conversation_with_summary = ConversationChain(
            llm=llm,
            prompt=PROMPT,
            memory=memory,
            verbose=True
        )
        result = conversation_with_summary.predict(input=question)
        gpt_response = result
        print(gpt_response)
        return gpt_response
```
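Two things worth checking (observations about the snippet above, not a confirmed diagnosis): `generate_response` builds a brand-new `ConversationBufferMemory` on every call, so nothing can persist between requests, and `retriever=...` does not appear to be a `ConversationBufferMemory` argument (retriever-backed memories such as `VectorStoreRetrieverMemory` take one). A sketch of keeping one memory per chatbot instance:
```python
class Chatbot:
    def __init__(self):
        self.model = "gpt-3.5-turbo"
        # Created once, so the conversation history survives across calls.
        self.memory = ConversationBufferMemory()

    async def generate_response(self, user, question):
        ...  # build retriever / llm / PROMPT as before
        conversation_with_summary = ConversationChain(
            llm=llm,
            prompt=PROMPT,
            memory=self.memory,  # reuse the long-lived memory instead of a new one
            verbose=True,
        )
        ...
```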
### Suggestion:
_No response_ | Conversationalbuffermemory to maintain chathistory | https://api.github.com/repos/langchain-ai/langchain/issues/7066/comments | 1 | 2023-07-03T06:05:30Z | 2023-10-09T16:05:40Z | https://github.com/langchain-ai/langchain/issues/7066 | 1,785,402,763 | 7,066 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I find it quite useful to use JSON-format prompts.
Unfortunately, I cannot pass a JSON-format prompt to langchain.
### Motivation
Here is a prompt example, which gives detailed instructions on how the GPT should respond to the student.
Could you please support JSON-format prompts in a future release!
prompt:
```json
{
"ai_tutor": {
"Author": "JushBJJ",
"name": "Mr. Ranedeer",
"version": "2.5",
"features": {
"personalization": {
"depth": {
"description": "This is the level of depth of the content the student wants to learn. The lowest depth level is 1, and the highest is 10.",
"depth_levels": {
"1/10": "Elementary (Grade 1-6)",
"2/10": "Middle School (Grade 7-9)",
"3/10": "High School (Grade 10-12)",
"4/10": "College Prep",
"5/10": "Undergraduate",
"6/10": "Graduate",
"7/10": "Master's",
"8/10": "Doctoral Candidate",
"9/10": "Postdoc",
"10/10": "Ph.D"
}
},
"learning_styles": [
"Sensing",
"Visual *REQUIRES PLUGINS*",
"Inductive",
"Active",
"Sequential",
"Intuitive",
"Verbal",
"Deductive",
"Reflective",
"Global"
],
"communication_styles": [
"stochastic",
"Formal",
"Textbook",
"Layman",
"Story Telling",
"Socratic",
"Humorous"
],
"tone_styles": [
"Debate",
"Encouraging",
"Neutral",
"Informative",
"Friendly"
],
"reasoning_frameworks": [
"Deductive",
"Inductive",
"Abductive",
"Analogical",
"Causal"
]
}
},
"commands": {
"prefix": "/",
"commands": {
"test": "Test the student.",
"config": "Prompt the user through the configuration process, incl. asking for the preferred language.",
"plan": "Create a lesson plan based on the student's preferences.",
"search": "Search based on what the student specifies. *REQUIRES PLUGINS*",
"start": "Start the lesson plan.",
"continue": "Continue where you left off.",
"self-eval": "Execute format <self-evaluation>",
"language": "Change the language yourself. Usage: /language [lang]. E.g: /language Chinese",
"visualize": "Use plugins to visualize the content. *REQUIRES PLUGINS*"
}
},
"rules": [
"1. Follow the student's specified learning style, communication style, tone style, reasoning framework, and depth.",
"2. Be able to create a lesson plan based on the student's preferences.",
"3. Be decisive, take the lead on the student's learning, and never be unsure of where to continue.",
"4. Always take into account the configuration as it represents the student's preferences.",
"5. Allowed to adjust the configuration to emphasize particular elements for a particular lesson, and inform the student about the changes.",
"6. Allowed to teach content outside of the configuration if requested or deemed necessary.",
"7. Be engaging and use emojis if the use_emojis configuration is set to true.",
"8. Obey the student's commands.",
"9. Double-check your knowledge or answer step-by-step if the student requests it.",
"10. Mention to the student to say /continue to continue or /test to test at the end of your response.",
"11. You are allowed to change your language to any language that is configured by the student.",
"12. In lessons, you must provide solved problem examples for the student to analyze, this is so the student can learn from example.",
"13. In lessons, if there are existing plugins, you can activate plugins to visualize or search for content. Else, continue."
],
"student preferences": {
"Description": "This is the student's configuration/preferences for AI Tutor (YOU).",
"depth": 0,
"learning_style": [],
"communication_style": [],
"tone_style": [],
"reasoning_framework": [],
"use_emojis": true,
"language": "Chinese (Default)"
},
"formats": {
"Description": "These are strictly the specific formats you should follow in order. Ignore Desc as they are contextual information.",
"configuration": [
"Your current preferences are:",
"**🎯Depth: <> else None**",
"**🧠Learning Style: <> else None**",
"**🗣️Communication Style: <> else None**",
"**🌟Tone Style: <> else None**",
"**🔎Reasoning Framework <> else None:**",
"**😀Emojis: <✅ or ❌>**",
"**🌐Language: <> else English**"
],
"configuration_reminder": [
"Desc: This is the format to remind yourself the student's configuration. Do not execute <configuration> in this format.",
"Self-Reminder: [I will teach you in a <> depth, <> learning style, <> communication style, <> tone, <> reasoning framework, <with/without> emojis <✅/❌>, in <language>]"
],
"self-evaluation": [
"Desc: This is the format for your evaluation of your previous response.",
"<please strictly execute configuration_reminder>",
"Response Rating (0-100): <rating>",
"Self-Feedback: <feedback>",
"Improved Response: <response>"
],
"Planning": [
"Desc: This is the format you should respond when planning. Remember, the highest depth levels should be the most specific and highly advanced content. And vice versa.",
"<please strictly execute configuration_reminder>",
"Assumptions: Since you are depth level <depth name>, I assume you know: <list of things you expect a <depth level name> student already knows.>",
"Emoji Usage: <list of emojis you plan to use next> else \"None\"",
"A <depth name> student lesson plan: <lesson_plan in a list starting from 1>",
"Please say \"/start\" to start the lesson plan."
],
"Lesson": [
"Desc: This is the format you respond for every lesson, you shall teach step-by-step so the student can learn. It is necessary to provide examples and exercises for the student to practice.",
"Emoji Usage: <list of emojis you plan to use next> else \"None\"",
"<please strictly execute configuration_reminder>",
"<lesson, and please strictly execute rule 12 and 13>",
"<execute rule 10>"
],
"test": [
"Desc: This is the format you respond for every test, you shall test the student's knowledge, understanding, and problem solving.",
"Example Problem: <create and solve the problem step-by-step so the student can understand the next questions>",
"Now solve the following problems: <problems>"
]
}
},
"init": "As an AI tutor, greet + 👋 + version + author + execute format <configuration> + ask for student's preferences + mention /language"
}
```
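In the meantime, one way this can already be approximated (an assumption about the use case, not an official feature): keep the JSON as a plain string and avoid `str.format` placeholder clashes by using the jinja2 template format, so the JSON's literal braces are left untouched.
```python
from langchain.prompts import PromptTemplate

# Hypothetical: the JSON above saved to a file and read back as a string.
with open("ai_tutor.json") as f:
    json_prompt = f.read()

# jinja2 uses {{ }} for variables, so the single braces in the JSON stay literal.
# (Requires the jinja2 package to be installed.)
template = PromptTemplate(
    input_variables=["question"],
    template=json_prompt + "\n\nStudent: {{ question }}",
    template_format="jinja2",
)
print(template.format(question="Explain derivatives at depth 3"))
```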
### Your contribution
I am not sure | Use JSON format prompt | https://api.github.com/repos/langchain-ai/langchain/issues/7065/comments | 5 | 2023-07-03T04:47:47Z | 2023-10-10T01:06:44Z | https://github.com/langchain-ai/langchain/issues/7065 | 1,785,310,309 | 7,065 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
The example no longer works (key not found in dict) due to API changes: https://python.langchain.com/docs/modules/callbacks/how_to/multiple_callbacks
### Idea or request for content:
Corrected Serialization in several places:
```python
from typing import Dict, Union, Any, List

from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import AgentAction
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import tracing_enabled
from langchain.llms import OpenAI


# First, define custom callback handler implementations
class MyCustomHandlerOne(BaseCallbackHandler):
    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        print(f"on_llm_start {serialized['id'][-1]}")

    def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:
        print(f"on_new_token {token}")

    def on_llm_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when LLM errors."""

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
    ) -> Any:
        print(f"on_chain_start {serialized['id'][-1]}")

    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
    ) -> Any:
        print(f"on_tool_start {serialized['name']}")

    def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
        print(f"on_agent_action {action}")


class MyCustomHandlerTwo(BaseCallbackHandler):
    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        print(f"on_llm_start (I'm the second handler!!) {serialized['id'][-1]}")


# Instantiate the handlers
handler1 = MyCustomHandlerOne()
handler2 = MyCustomHandlerTwo()

# Setup the agent. Only the `llm` will issue callbacks for handler2
llm = OpenAI(temperature=0, streaming=True, callbacks=[handler2])
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)

# Callbacks for handler1 will be issued by every object involved in the
# Agent execution (llm, llmchain, tool, agent executor)
agent.run("What is 2 raised to the 0.235 power?", callbacks=[handler1])
``` | DOC: Multiple callback handlers | https://api.github.com/repos/langchain-ai/langchain/issues/7064/comments | 1 | 2023-07-03T04:21:12Z | 2023-10-09T16:05:51Z | https://github.com/langchain-ai/langchain/issues/7064 | 1,785,284,378 | 7,064 |
[
"hwchase17",
"langchain"
]
| ### System Info
v0.0.221
### Who can help?
@vowelparrot
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
[MultiQueryRetriever](https://python.langchain.com/docs/modules/data_connection/retrievers/how_to/MultiQueryRetriever) is broken due to [5962](https://github.com/hwchase17/langchain/pull/5962) introducing a new method `_get_relevant_documents`.
Caught this b/c I no longer see expected logging when running example in [docs](https://python.langchain.com/docs/modules/data_connection/retrievers/how_to/MultiQueryRetriever).
```
unique_docs = retriever_from_llm.get_relevant_documents(query="What does the course say about regression?")
len(unique_docs)
```
For MultiQueryRetriever, get_relevant_documents does a few things (PR [here](https://github.com/hwchase17/langchain/pull/6833/files#diff-73c5e56610f556ddc4710db1b9da4c6277b8ffaef6d535e3cf1e1ceb4b22b186)).
E.g., it will run `queries = self.generate_queries(query, run_manager)` and log the queries.
It appears the method name has been changed to `_get_relevant_documents`?
And it now requires some additional args (e.g., `run_manager`)?
AFAICT, `get_relevant_documents` currently will just do retrieval w/o any of the `MultiQueryRetriever` logic.
If this is expected, then documentation will need to be updated for MultiQueryRetriever [here](https://python.langchain.com/docs/modules/data_connection/retrievers/how_to/MultiQueryRetriever) to use _get_relevant_documents and supply run_manager and explain what run_manager is.
Also, this exposes the fact that we need a test for MultiQueryRetriever.
### Expected behavior
We should actually run the logic in `MultiQueryRetriever` as noted in the [PR](https://github.com/hwchase17/langchain/pull/6833/files#diff-73c5e56610f556ddc4710db1b9da4c6277b8ffaef6d535e3cf1e1ceb4b22b186).
```
INFO:root:Generated queries: ["1. What is the course's perspective on regression?", '2. How does the course discuss regression?', '3. What information does the course provide about regression?']
``` | MultiQueryRetriever broken on master and needs tests | https://api.github.com/repos/langchain-ai/langchain/issues/7063/comments | 0 | 2023-07-03T04:00:32Z | 2023-07-04T05:09:35Z | https://github.com/langchain-ai/langchain/issues/7063 | 1,785,264,664 | 7,063 |
[
"hwchase17",
"langchain"
]
| ### System Info
```python
index = VectorstoreIndexCreator(vectorstore_cls=DocArrayInMemorySearch, embedding=embeddings)
index = index.from_loaders([PyMuPDFLoader('孙子兵法.pdf')])
query = '孙子兵法的战略'
index.query(llm=OpenAIChat(streaming=True), question=query, chain_type="stuff")
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
index = VectorstoreIndexCreator(vectorstore_cls=DocArrayInMemorySearch, embedding=embeddings)
index = index.from_loaders([PyMuPDFLoader('孙子兵法.pdf')])
query = '孙子兵法的战略'
index.query(llm=OpenAIChat(streaming=True), question=query, chain_type="stuff")
```
streaming ...
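One thing worth ruling out (an assumption, not a confirmed diagnosis): with no callback handler attached, streamed tokens have nowhere to go, so the call looks the same as a non-streaming one. Something along these lines, using the newer `ChatOpenAI` class in place of the deprecated `OpenAIChat`, usually makes streaming visible:
```python
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = ChatOpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],  # prints tokens as they arrive
    temperature=0,
)
index.query(llm=llm, question=query, chain_type="stuff")
```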
### Expected behavior
```
index = VectorstoreIndexCreator(vectorstore_cls=DocArrayInMemorySearch, embedding=embeddings)
index = index.from_loaders([PyMuPDFLoader('孙子兵法.pdf')])
query = '孙子兵法的战略'
index.query(llm=OpenAIChat(streaming=True), question=query, chain_type="stuff")
```
streaming ... | OpenAIChat(streaming=True) donot work | https://api.github.com/repos/langchain-ai/langchain/issues/7062/comments | 2 | 2023-07-03T04:00:09Z | 2023-11-16T16:07:11Z | https://github.com/langchain-ai/langchain/issues/7062 | 1,785,264,380 | 7,062 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain = 0.0.218
python = 3.11.4
### Who can help?
@hwchase17 , @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
To reproduce:
1. Follow [Add Memory Open Ai Functions](https://python.langchain.com/docs/modules/agents/how_to/add_memory_openai_functions)
2. Import Streamlit
3. Create Chat Element using st.chat_input
4. Try running agent through Streamlit
### Expected behavior
The agent memory works correctly when the agent is called via ``` agent_chain.run("Insert prompt") ```, but when this is done via a Streamlit chat element like the snippet below, it no longer works:
```python
if prompt := st.chat_input():
    st.session_state.messages.append({"role": "user", "content": prompt})
    st.chat_message("user").write(prompt)
    with st.spinner('Thinking...'):
        st_callback = StreamlitCallbackHandler(st.container())
        response = agent_chain.run(prompt, callbacks=[st_callback])
    st.chat_message("assistant").write(response)
    st.session_state.messages.append({"role": "assistant", "content": response})
```
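A likely explanation (an assumption based on how Streamlit works, not a confirmed diagnosis): Streamlit re-runs the whole script on every interaction, so the agent and its memory are rebuilt from scratch each turn unless they are kept in `st.session_state`. A sketch (the `tools`, `llm`, `memory`, and `agent_kwargs` objects are the ones from the linked docs example):
```python
# Build the agent (and its memory) once per session, then reuse it on every rerun.
if "agent_chain" not in st.session_state:
    st.session_state.agent_chain = initialize_agent(
        tools,
        llm,
        agent=AgentType.OPENAI_FUNCTIONS,
        memory=memory,
        agent_kwargs=agent_kwargs,
        verbose=True,
    )
agent_chain = st.session_state.agent_chain
```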
In that setup, the conversation memory no longer persists between turns. | OPENAI_FUNCTIONS Agent Memory won't work inside of Streamlit st.chat_input element | https://api.github.com/repos/langchain-ai/langchain/issues/7061/comments | 10 | 2023-07-03T03:11:16Z | 2023-12-20T16:07:28Z | https://github.com/langchain-ai/langchain/issues/7061 | 1,785,201,120 | 7,061 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchina: 0.0.221,
OS: WSL
Python 3.11
### Who can help?
The memory does not record the full input message; it only records the input parameter values.
Below is a simple piece of code using the default ConversationBufferMemory:
```python
llm = conn.connecting.get_model_azureopenai()
mem = langchain.memory.ConversationBufferMemory()
msg1 = langchain.prompts.HumanMessagePromptTemplate.from_template("Hi, I am {name}. How do you pronounce my name?")
prompt1 = langchain.prompts.ChatPromptTemplate.from_messages([msg1])
print("Input message=", prompt1.format(name="Cao"))
chain1 = langchain.chains.LLMChain(
llm=llm,
prompt=prompt1,
memory=mem,
)
resp1 = chain1.run({"name": "Cao"})
print(resp1)
print("+"*100)
print(mem)
```
But the memory actually recorded only the following:
chat_memory=ChatMessageHistory(messages=[HumanMessage(content='Cao', additional_kwargs={}, example=False), AIMessage(content='Hello Cao! Your name is pronounced as "Ts-ow" with the "Ts" sound similar to the beginning of "tsunami" and "ow" rhyming with "cow."', additional_kwargs={}, example=False)]) output_key=None input_key=None return_messages=False human_prefix='Human' ai_prefix='AI' memory_key='history'
It should record the full input, not only the parameter itself:
Input message= Human: Hi, I am Cao. How do you pronounce my name?
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Below is a simple piece of code using the default ConversationBufferMemory:
```python
llm = conn.connecting.get_model_azureopenai()
mem = langchain.memory.ConversationBufferMemory()
msg1 = langchain.prompts.HumanMessagePromptTemplate.from_template("Hi, I am {name}. How do you pronounce my name?")
prompt1 = langchain.prompts.ChatPromptTemplate.from_messages([msg1])
print("Input message=", prompt1.format(name="Cao"))
chain1 = langchain.chains.LLMChain(
llm=llm,
prompt=prompt1,
memory=mem,
)
resp1 = chain1.run({"name": "Cao"})
print(resp1)
print("+"*100)
print(mem)
```
But the memory actually recorded only the following:
chat_memory=ChatMessageHistory(messages=[HumanMessage(content='Cao', additional_kwargs={}, example=False), AIMessage(content='Hello Cao! Your name is pronounced as "Ts-ow" with the "Ts" sound similar to the beginning of "tsunami" and "ow" rhyming with "cow."', additional_kwargs={}, example=False)]) output_key=None input_key=None return_messages=False human_prefix='Human' ai_prefix='AI' memory_key='history'
It should record the full input, not only the parameter itself:
Input message= Human: Hi, I am Cao. How do you pronounce my name?
### Expected behavior
It should record the full input, not only the parameter itself:
Input message= Human: Hi, I am Cao. How do you pronounce my name? | The memory did not record the full input message, it only record the input parameter values | https://api.github.com/repos/langchain-ai/langchain/issues/7060/comments | 1 | 2023-07-03T03:10:39Z | 2023-10-09T16:05:55Z | https://github.com/langchain-ai/langchain/issues/7060 | 1,785,200,414 | 7,060 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.221
WSL
Python 3.11
If you design a very simple chain which does not need any input parameters and then run it, it will throw an error. I think running without any input params may be a useful case to support.
```python
llm = conn.connecting.get_model_azureopenai()
mem = langchain.memory.ConversationBufferMemory()
msg1 = langchain.prompts.HumanMessagePromptTemplate.from_template("Hi there!")
prompt1 = langchain.prompts.ChatPromptTemplate.from_messages([msg1])
print(prompt1.input_variables)
chain1 = langchain.chains.LLMChain(
    llm=llm,
    prompt=prompt1,
    memory=mem,
)
resp1 = chain1.run()
print(resp1)
```
==================
Error:
```
/usr/local/miniconda3/bin/python /mnt/c/TechBuilder/AIML/LangChain/Dev20230701/chains/runchain1.py
Traceback (most recent call last):
File "/mnt/c/TechBuilder/AIML/LangChain/Dev20230701/chains/runchain1.py", line 87, in <module>
run2()
File "/mnt/c/TechBuilder/AIML/LangChain/Dev20230701/chains/runchain1.py", line 81, in run2
resp1 = chain1.run()
^^^^^^^^^^^^
File "/usr/local/miniconda3/lib/python3.11/site-packages/langchain/chains/base.py", line 296, in run
raise ValueError(
ValueError: `run` supported with either positional arguments or keyword arguments, but none were provided.
[]
Process finished with exit code 1
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Step1, develop a simple chain
/usr/local/miniconda3/bin/python /mnt/c/TechBuilder/AIML/LangChain/Dev20230701/chains/runchain1.py
Traceback (most recent call last):
File "/mnt/c/TechBuilder/AIML/LangChain/Dev20230701/chains/runchain1.py", line 87, in <module>
run2()
File "/mnt/c/TechBuilder/AIML/LangChain/Dev20230701/chains/runchain1.py", line 81, in run2
resp1 = chain1.run()
^^^^^^^^^^^^
File "/usr/local/miniconda3/lib/python3.11/site-packages/langchain/chains/base.py", line 296, in run
raise ValueError(
ValueError: `run` supported with either positional arguments or keyword arguments, but none were provided.
[]
Process finished with exit code 1
Step 2, run it without input params, because it does not need any input params.
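For what it's worth, passing an explicit empty dict sidesteps the specific `ValueError` above (treat the memory caveat below as an assumption about how the buffer memory resolves its input key, not a verified statement):
```python
resp1 = chain1({})      # or: chain1.run({})

# Caveat (assumption): with a ConversationBufferMemory attached, the memory step still
# expects exactly one non-memory input key, so a truly zero-input chain may work better
# without memory, or with a single dummy input variable.
```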
### Expected behavior
Should run without error | Chain.run() without any param will cause error | https://api.github.com/repos/langchain-ai/langchain/issues/7059/comments | 3 | 2023-07-03T02:43:11Z | 2023-11-29T16:09:09Z | https://github.com/langchain-ai/langchain/issues/7059 | 1,785,176,797 | 7,059 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.221
langchainplus-sdk==0.0.19
Windows 10
### Who can help?
@dev2049 @homanp
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. install `langchain` with the aformentioned version
2. install `openai`
3. run the following code:
```py
import os
from langchain.chains.openai_functions.openapi import get_openapi_chain
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY
chain = get_openapi_chain("https://gptshop.bohita.com/openapi.yaml")
res = chain.run("I want a tight fitting red shirt for a man")
res
```
or this, as it will produce a similar error:
```py
import os
from langchain.chains.openai_functions.openapi import get_openapi_chain
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY
chain = get_openapi_chain("https://nani.ooo/openapi.json")
res = chain.run("Show me the amount of nodes on the ethereum blockchain")
res
```
### Expected behavior
The response dict. (not sure what that would look like as I have not gotten a successful response) | get_openapi_chain fails to produce proper Openapi scheme when preparing url. | https://api.github.com/repos/langchain-ai/langchain/issues/7058/comments | 3 | 2023-07-02T23:22:34Z | 2023-10-08T16:05:10Z | https://github.com/langchain-ai/langchain/issues/7058 | 1,785,011,372 | 7,058 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version 0.0.221
Anthropic versions 0.3.1 and 0.3.2
Whenever ChatAnthropic is used, the error below now appears. Anthropic client version 0.2.10 works, but the error has appeared since the Anthropic update a couple of days ago:
`Anthropic.__init__() got an unexpected keyword argument 'api_url' (type=type_error)`
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use ChatAnthropic in any form with the latest version of Anthropic and you will encounter: "Anthropic.__init__() got an unexpected keyword argument 'api_url' (type=type_error)"
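Until the integration catches up with the new `anthropic` SDK, pinning the client to the version the report says still works is a reasonable stopgap:
```
pip install "anthropic==0.2.10"
```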
### Expected behavior
It should work if the API key is provided. You should not have to specify a URL. | Anthropic Upgrade issue | https://api.github.com/repos/langchain-ai/langchain/issues/7056/comments | 2 | 2023-07-02T17:44:23Z | 2023-10-08T16:05:15Z | https://github.com/langchain-ai/langchain/issues/7056 | 1,784,742,819 | 7,056 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain: 0.0.221
python:3.10.2
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader

loader = TextLoader("my_file.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=10, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
```
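A likely cause worth noting here (an assumption, not verified against this exact file): `CharacterTextSplitter` only splits on its separator (by default `"\n\n"`), so a long text without blank lines is sent to the embedding model as one oversized chunk regardless of `chunk_size`. A character-level splitter avoids that:
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
```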
### Expected behavior
I'm following the same code from the official website (https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/faiss), but replaced the txt file content with another text (http://novel.tingroom.com/jingdian/5370/135425.html) and set the CharacterTextSplitter chunk_size to 10, or 100, whatever. The error keeps being raised: InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 10137 tokens (10137 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
Does this "from_documents" method have some kind of character restriction, or does it just not support long text? | InvalidRequestError: This model's maximum context length | https://api.github.com/repos/langchain-ai/langchain/issues/7054/comments | 2 | 2023-07-02T17:18:17Z | 2023-08-09T01:43:27Z | https://github.com/langchain-ai/langchain/issues/7054 | 1,784,733,766 | 7,054 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain Version: 0.0.220
python 3.9.16
Windows 11/ ubuntu 22.04
anaconda
### Who can help?
@agola11 When ingesting the attached file (I'm attaching a zip, as a .pkl cannot be uploaded) in a normal document-splitting loop
```python
embeddings = OpenAIEmbeddings()
chunk_sizes = [512, 1024, 1536]
overlap = 100
docs_n_code = "both"
split_documents = []
base_dir = "..."
for chunk_size in chunk_sizes:
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=overlap)
    docs = text_splitter.split_documents(documents)
    faiss_dir = "..."
    faiss_vectorstore = FAISS.from_documents(docs, embeddings)
    faiss_vectorstore.save_local(faiss_dir)
```
I get the following error:
```
setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (10379,) + inhomogeneous part.
```
Similar errors occur with Chroma and Deeplake locally. With Chroma I can get the error more frequently, even when ingesting other documents.
[api_docs.zip](https://github.com/hwchase17/langchain/files/11930126/api_docs.zip)
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import pickle

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader

with open('api_docs.pkl', 'rb') as file:
    documents = pickle.load(file)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=100)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(docs, embeddings)
```
### Expected behavior
Create the index | Chroma, Deeplake, Faiss indexing error ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (10379,) + inhomogeneous part. | https://api.github.com/repos/langchain-ai/langchain/issues/7053/comments | 4 | 2023-07-02T17:04:55Z | 2024-04-23T08:14:56Z | https://github.com/langchain-ai/langchain/issues/7053 | 1,784,729,131 | 7,053 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The LLM chain strictly validates all input variables against placeholders in the template. It will throw an error if they don't match. I wish it could handle missing variables, thus making the prompt template more flexible.
```python
def _validate_inputs(self, inputs: Dict[str, Any]) -> None:
"""Check that all inputs are present."""
missing_keys = set(self.input_keys).difference(inputs)
if missing_keys:
raise ValueError(f"Missing some input keys: {missing_keys}")
```
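As a possible interim workaround (not a substitute for the requested feature), missing variables can be pre-filled with `PromptTemplate.partial`, so the chain's validation always sees a complete set:
```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Hello {name}, today is {day}.")
partial_prompt = prompt.partial(day="an unspecified day")  # default for a possibly-missing variable
print(partial_prompt.format(name="Alice"))  # only `name` is required now
```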
### Motivation
I'm using jinja2 to build a generic prompt template, but the chain's enforcement on variable validation forces me to take some extra steps, such as pre-rendering the prompt before running it through the chain.
I hope handling of missing variables could be officially supported.
### Your contribution
I think I could add a flag that determines whether to check the consistency between input variables and template variables, and then submit a PR.
If you have better ideas, such as a more complete solution like custom validation callback functions, I look forward to the official update.
Please, I really need this, and I believe others do too.
This will make the design of the Prompt more flexible. | Chain support missing variable | https://api.github.com/repos/langchain-ai/langchain/issues/7044/comments | 8 | 2023-07-02T12:16:49Z | 2023-10-19T16:06:48Z | https://github.com/langchain-ai/langchain/issues/7044 | 1,784,612,945 | 7,044 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain==0.0.177
Python==3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When using the **map_reduce** technique with a **GPT 16k** model, you receive an error message: '**A single document was longer than the context length, we cannot handle this.'** I believe this is due to the token_max limit mentioned in the following context. Is it possible to address this issue? It no longer makes sense to use a 16k model if we cannot send documents exceeding the hardcoded limit of 3000 tokens. _Could you please modify this hardcoded value from the 4098 token models to accommodate the 16k models?_ Thank you.
<img width="1260" alt="Capture d’écran 2023-07-02 à 13 26 21" src="https://github.com/hwchase17/langchain/assets/20300624/5f09b7e6-0a7a-49ec-9585-a20524332682">
### Expected behavior
I would like be able to give a Document object that would be more than 3000 tokens. | Map_reduce not adapted to the 16k model 😲😥 | https://api.github.com/repos/langchain-ai/langchain/issues/7043/comments | 2 | 2023-07-02T11:30:01Z | 2023-10-08T16:05:26Z | https://github.com/langchain-ai/langchain/issues/7043 | 1,784,594,777 | 7,043 |
[
"hwchase17",
"langchain"
]
| ### System Info
PYTHON: 3.11.4
LANGCHAIN: 0.0.220
FLATFORM: WINDOWS
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**My code:**
```
import nest_asyncio
import gc
nest_asyncio.apply()
from langchain.document_loaders import WebBaseLoader
urls=['','']
loader = WebBaseLoader(urls)
loader.requests_per_second = 1
scrape_data = loader.aload()
```
**ERROR i got:**
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[4], line 13
10 loader = WebBaseLoader(urls)
11 loader.requests_per_second = 1
---> 13 scrape_data = loader.aload()
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\langchain\document_loaders\web_base.py:227, in WebBaseLoader.aload(self)
224 def aload(self) -> List[Document]:
225 """Load text from the urls in web_path async into Documents."""
--> 227 results = self.scrape_all(self.web_paths)
228 docs = []
229 for i in range(len(results)):
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\langchain\document_loaders\web_base.py:173, in WebBaseLoader.scrape_all(self, urls, parser)
170 """Fetch all urls, then return soups for all results."""
171 from bs4 import BeautifulSoup
--> 173 results = asyncio.run(self.fetch_all(urls))
174 final_results = []
175 for i, result in enumerate(results):
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\nest_asyncio.py:35, in _patch_asyncio..run(main, debug)
33 task = asyncio.ensure_future(main)
34 try:
---> 35 return loop.run_until_complete(task)
...
921 return _RequestContextManager(
--> 922 self._request(hdrs.METH_GET, url, allow_redirects=allow_redirects, **kwargs)
923 )
TypeError: ClientSession._request() got an unexpected keyword argument 'verify'
```
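Two stopgaps that may help until the async path is fixed (assumptions, not confirmed fixes): the synchronous loader does not go through `aiohttp`, so it avoids the failing code path, and the reporter notes an earlier release worked:
```python
docs = loader.load()  # synchronous fallback; slower, but skips ClientSession entirely
```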
### Expected behavior
_**I want to get web content like the basic guide you provide.
I remember I was initially using version 0.0.20, and it worked fine.
But today, after updating to the latest version: 0.0.220, it got this error**_ | ClientSession._request() got an unexpected keyword argument 'verify' | https://api.github.com/repos/langchain-ai/langchain/issues/7042/comments | 4 | 2023-07-02T07:29:01Z | 2023-09-17T18:22:23Z | https://github.com/langchain-ai/langchain/issues/7042 | 1,784,502,508 | 7,042 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Can Plan-And-Execute models be conditioned to plan in particular directions instead of doing the planning alone? Say I'd like the planning to take place differently, and the LLM to always decide in an exhaustive manner.
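One place to experiment (an assumption based on the experimental plan-and-execute module; the import path and argument names may differ across versions): `load_chat_planner` accepts a `system_prompt`, which lets you steer how plans are written.
```python
from langchain.chat_models import ChatOpenAI
from langchain.experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)

model = ChatOpenAI(temperature=0)
planner = load_chat_planner(
    model,
    system_prompt=(
        "Let's first understand the problem and devise an exhaustive plan: "
        "enumerate every sub-question that must be answered before the final answer."
    ),
)
executor = load_agent_executor(model, tools, verbose=True)  # `tools` assumed defined elsewhere
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
```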
### Suggestion:
_No response_ | Issue: Conditioning planner with custom instructions in Plan-And-Execute models | https://api.github.com/repos/langchain-ai/langchain/issues/7041/comments | 4 | 2023-07-02T06:57:36Z | 2023-12-18T23:49:38Z | https://github.com/langchain-ai/langchain/issues/7041 | 1,784,492,675 | 7,041 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Can we nest agents in one another? I would like an agent to use a set of tools + an LLM to extract entities, and then another agent after this should use some other tools + LLM to refine them.
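A common pattern that approximates this (a sketch under the assumption that both agents are ordinary agent executors; not an official nesting feature): wrap the first agent as a `Tool` that the second agent can call.
```python
from langchain.agents import AgentType, Tool, initialize_agent

# `extraction_agent`, `refinement_tools`, and `llm` are assumed to be built elsewhere.
extraction_tool = Tool(
    name="entity_extractor",
    func=extraction_agent.run,
    description="Extracts named entities from a piece of text.",
)

refiner_agent = initialize_agent(
    refinement_tools + [extraction_tool],
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
refiner_agent.run("Extract and refine the entities in: <your document>")
```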
### Suggestion:
_No response_ | Issue: Nested agents | https://api.github.com/repos/langchain-ai/langchain/issues/7040/comments | 3 | 2023-07-02T06:55:45Z | 2024-01-15T17:33:23Z | https://github.com/langchain-ai/langchain/issues/7040 | 1,784,492,025 | 7,040 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Can we reuse intermediate steps for an agent defined using STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION?
This is my definition:
```
agent_executor = initialize_agent(
tools,
llm,
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
return_intermediate_steps=True,
agent_kwargs={
"prefix": PREFIX,
"suffix": SUFFIX,
"input_variables": [
"input",
"agent_scratchpad",
"intermediate_steps",
"tools",
"tool_names",
],
"stop": ["\nObservation:"],
},
)
```
But I'm getting this error:
```
Got mismatched input_variables. Expected: {'tools', 'agent_scratchpad', 'input', 'tool_names'}. Got: ['input', 'agent_scratchpad', 'intermediate_steps', 'tools', 'tool_names'] (type=value_error)
```
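Going only by the error text itself (so treat this as a guess at the mismatch rather than a verified fix): the prompt accepts exactly the four variables it lists, and `intermediate_steps` is not one of them; the agent folds the steps into `agent_scratchpad`, while `return_intermediate_steps=True` on the executor is what hands them back to you.
```python
agent_executor = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    return_intermediate_steps=True,
    agent_kwargs={
        "prefix": PREFIX,
        "suffix": SUFFIX,
        # No "intermediate_steps" here: the agent renders them into agent_scratchpad.
        "input_variables": ["input", "agent_scratchpad", "tools", "tool_names"],
        "stop": ["\nObservation:"],
    },
)
result = agent_executor({"input": "your question"})
steps = result["intermediate_steps"]  # returned by the executor, not via the prompt
```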
### Suggestion:
_No response_ | Issue: Doubt on STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION | https://api.github.com/repos/langchain-ai/langchain/issues/7039/comments | 2 | 2023-07-02T06:54:27Z | 2023-11-29T16:09:14Z | https://github.com/langchain-ai/langchain/issues/7039 | 1,784,491,423 | 7,039 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Is there a difference between these two agent types? Where should one use either of them?
### Suggestion:
_No response_ | Issue: LLMSingleActionAgent vs STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION | https://api.github.com/repos/langchain-ai/langchain/issues/7038/comments | 2 | 2023-07-02T06:52:29Z | 2023-10-17T16:06:14Z | https://github.com/langchain-ai/langchain/issues/7038 | 1,784,490,903 | 7,038 |
[
"hwchase17",
"langchain"
]
| ### System Info
Using the latest version of langchain (0.0.220).
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
Steps to reproduce the behavior:
1. `pip install langchain==0.0.220`
2. Run `from langchain.callbacks.base import AsyncCallbackHandler` in a Python script
Running this simple import gives the following traceback on my system:
```
Traceback (most recent call last):
File "C:\Users\jblak\OneDrive\Documents\language-tutor\test.py", line 1, in <module>
from langchain.callbacks.base import AsyncCallbackHandler
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\agents\__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\agents\agent.py", line 16, in <module>
from langchain.agents.tools import InvalidTool
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\agents\tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\tools\__init__.py", line 3, in <module>
from langchain.tools.arxiv.tool import ArxivQueryRun
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\tools\arxiv\tool.py", line 12, in <module>
from langchain.utilities.arxiv import ArxivAPIWrapper
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\utilities\__init__.py", line 3, in <module>
from langchain.utilities.apify import ApifyWrapper
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\utilities\apify.py", line 5, in <module>
from langchain.document_loaders import ApifyDatasetLoader
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\document_loaders\__init__.py", line 52, in <module>
from langchain.document_loaders.github import GitHubIssuesLoader
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\document_loaders\github.py", line 37, in <module>
class GitHubIssuesLoader(BaseGitHubLoader):
File "pydantic\main.py", line 197, in pydantic.main.ModelMetaclass.__new__
File "pydantic\fields.py", line 506, in pydantic.fields.ModelField.infer
File "pydantic\fields.py", line 436, in pydantic.fields.ModelField.__init__
File "pydantic\fields.py", line 552, in pydantic.fields.ModelField.prepare
File "pydantic\fields.py", line 663, in pydantic.fields.ModelField._type_analysis
File "pydantic\fields.py", line 808, in pydantic.fields.ModelField._create_sub_type
File "pydantic\fields.py", line 436, in pydantic.fields.ModelField.__init__
File "pydantic\fields.py", line 552, in pydantic.fields.ModelField.prepare
File "pydantic\fields.py", line 668, in pydantic.fields.ModelField._type_analysis
File "C:\Users\jblak\anaconda3\lib\typing.py", line 852, in __subclasscheck__
return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class
```
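At the time of these releases, a commonly reported workaround for this exact `issubclass() arg 1 must be a class` traceback was pinning the typing helper packages (an assumption about the root cause; verify against your own environment):
```
pip install typing-inspect==0.8.0 typing_extensions==4.5.0
```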
### Expected behavior
Running this line should import the `AsyncCallbackHandler` without error. | TypeError: issubclass() arg 1 must be a class (importing AsyncCallbackHandler) | https://api.github.com/repos/langchain-ai/langchain/issues/7037/comments | 2 | 2023-07-02T06:04:32Z | 2023-10-09T16:06:06Z | https://github.com/langchain-ai/langchain/issues/7037 | 1,784,472,838 | 7,037 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I was skimming through the repository for the MongoDB Agent and I discovered that it does not exist. Is it feasible to develop a MongoDB agent that establishes a connection with MongoDB, generates MongoDB queries based on given questions, and retrieves the corresponding data?
### Motivation
Within my organization, a significant portion of our data is stored in MongoDB. I intend to utilize this agent to establish connections with our databases and develop a practical application around it.
### Your contribution
If it is indeed possible, I am willing to work on implementing this feature. However, I would greatly appreciate some initial guidance or assistance from someone knowledgeable in this area. | Agent for MongoDB | https://api.github.com/repos/langchain-ai/langchain/issues/7036/comments | 17 | 2023-07-02T05:49:29Z | 2024-04-12T16:16:25Z | https://github.com/langchain-ai/langchain/issues/7036 | 1,784,467,107 | 7,036 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I want to chain `SelfQueryRetriever` with `ConversationalRetrievalChain` and stream the output. I tested it in a synchronous function and it works. However, when I convert it to an async function, it hangs and the execution never finishes, producing no output. I went through the source code and found that `aget_relevant_documents()` is not implemented for `SelfQueryRetriever`:
```python
async def aget_relevant_documents(self, query: str) -> List[Document]:
raise NotImplementedError
```
I am not sure whether the issue comes from this, because I am using `acall()` on `ConversationalRetrievalChain`, and I do not receive any error message from the execution. Is there a temporary workaround that lets me use `SelfQueryRetriever` in an async function, or should I wait for a release that implements it?
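One possible stop-gap (a sketch, not an official fix) is to subclass `SelfQueryRetriever` and delegate the async method to the existing synchronous one in a worker thread, so that `acall()` has something to await:
```python
import asyncio
from typing import List

from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.schema import Document


class AsyncSelfQueryRetriever(SelfQueryRetriever):
    async def aget_relevant_documents(self, query: str) -> List[Document]:
        # Run the synchronous retrieval in a thread so the event loop
        # is not blocked while waiting for the LLM and vector store.
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(None, self.get_relevant_documents, query)
```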
### Suggestion:
_No response_ | SelfQueryRetriever not working in async call | https://api.github.com/repos/langchain-ai/langchain/issues/7035/comments | 1 | 2023-07-02T05:17:42Z | 2023-10-08T16:05:40Z | https://github.com/langchain-ai/langchain/issues/7035 | 1,784,455,921 | 7,035 |
[
"hwchase17",
"langchain"
]
| Hi,
first up, thank you for making langchain! I was playing around a little and found a minor issue with loading online PDFs, and would like to start contributing to langchain maybe by fixing this.
### System Info
langchain 0.0.220, google collab, python 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import PyMuPDFLoader
loader = PyMuPDFLoader('https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf')
pages = loader.load()
pages[0].metadata
```
<img width="977" alt="image" src="https://github.com/hwchase17/langchain/assets/21276922/4ededc60-bb03-4502-a8c8-3c221ab109c4">
### Expected behavior
Instead of giving the temporary file path, which is not useful and deleted shortly after, it could be more helpful if the source is set to be the URL passed to it. This would require some fixes in the `langchain/document_loaders/pdf.py` file. | Loading online PDFs gives temporary file path as source in metadata | https://api.github.com/repos/langchain-ai/langchain/issues/7034/comments | 4 | 2023-07-01T23:24:53Z | 2023-11-29T20:07:47Z | https://github.com/langchain-ai/langchain/issues/7034 | 1,784,312,865 | 7,034 |
[
"hwchase17",
"langchain"
]
| ### Feature request
## Summary
Implement a new type of Langchain agent that reasons using a PLoT architecture as described in this paper:
https://arxiv.org/abs/2306.12672
## Notes
* Such an agent would need to use a probabilistic programming language (the paper above used Church) to generate code that describes its world model. This means that it would only work for LLMs that can generate code, and it would also need access to a Church interpreter (if that is the language that is chosen)
### Motivation
An agent which uses probabilistic language of thought to construct a coherent world model would be better at reasoning tasks involving uncertainty and ambiguity. Basically, it would think more like a human, using an understanding of the world that is external to the weights & biases on which it was trained.
### Your contribution
I've already started work on this and I intend to submit a PR when I have time to finish it! I'm pretty busy so any help would be appreciated! Feel free to reach out on discord @jasondotparse or twitter https://twitter.com/jasondotparse if you are down to help! | Probabilistic Language of Thought (PLoT) Agent Implementation | https://api.github.com/repos/langchain-ai/langchain/issues/7027/comments | 2 | 2023-07-01T18:19:31Z | 2023-10-07T16:04:57Z | https://github.com/langchain-ai/langchain/issues/7027 | 1,784,135,218 | 7,027 |
[
"hwchase17",
"langchain"
]
| ### System Info
OutputParserException: Could not parse LLM output: `df = [] for i in range(len(df)):`
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I used an open-source LLM, HuggingFaceHub(repo_id="google/flan-t5-xxl"), rather than an OpenAI LLM, to create the agent below:
agent = create_csv_agent(HuggingFaceHub(repo_id="google/flan-t5-xxl"), r"D:\ML\titanic\titanic.csv", verbose=True)
The agent was created successfully, but when I ran the queries below, I got the error above:
agent.run("Plot a bar chart comparing Survived vs Dead for Embarked")
agent.run("Find the count of missing values in each column")
agent.run("Fix the issues with missing values for column Embarked with mode")
agent.run("Drop cabin column")
### Expected behavior
I should get plot or other . | OutputParserException: Could not parse LLM output: `df = [] for i in range(len(df)):` | https://api.github.com/repos/langchain-ai/langchain/issues/7024/comments | 1 | 2023-07-01T16:41:37Z | 2023-10-07T16:05:02Z | https://github.com/langchain-ai/langchain/issues/7024 | 1,784,075,187 | 7,024 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
langchain: latest as of yesterday
M2 Pro
16 GB
Macosx
Darwin UAVALOS-M-NR30 22.5.0 Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:23 PDT 2023; root:xnu-8796.121.3~7/RELEASE_ARM64_T6020 arm64
Python 3.10.10
GNU Make 3.81
Apple clang version 14.0.3 (clang-1403.0.22.14.1)
```
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. Install latest requirement
2. Try to load the `gtr-t5-xxl` embedding model from file.
```
!pip uninstall llama-cpp-python -y
!CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
!pip install 'llama-cpp-python[server]'
!pip install sentence_transformers
!pip install git+https://github.com/huggingface/peft.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install -v datasets loralib sentencepiece
!pip -v install bitsandbytes accelerate
!pip -v install langchain
!pip install scipy
!pip install xformers
!pip install langchain faiss-cpu
# needed to load git repo
!pip install GitPython
!pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
// not this is the latest repo cloned locally
from langchain.embeddings import HuggingFaceEmbeddings
embeddings = HuggingFaceEmbeddings(model_name='/Users/uavalos/Documents/gtr-t5-xxl')
```
### Expected behavior
* I expect the embeddings to load just fine.
* In fact, it was working a few days ago.
* It stopped working after creating a new conda environment and reinstalling everything fresh.
* If you look at the logs, it's looking for a module that's already installed. It's also failing to import the `google` module... I get the same error if I explicitly install that as well.
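One guess (an assumption, not verified): the traceback below bottoms out at `from google.protobuf import descriptor`, so the new environment may simply be missing the `protobuf` package that `transformers` imports indirectly; reinstalling it might repair the import chain:
```
!pip install protobuf
```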
Failure logs
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/langchain/embeddings/huggingface.py:51, in HuggingFaceEmbeddings.__init__(self, **kwargs)
50 try:
---> 51 import sentence_transformers
53 except ImportError as exc:
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/sentence_transformers/__init__.py:3
2 __MODEL_HUB_ORGANIZATION__ = 'sentence-transformers'
----> 3 from .datasets import SentencesDataset, ParallelSentencesDataset
4 from .LoggingHandler import LoggingHandler
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/sentence_transformers/datasets/__init__.py:3
2 from .NoDuplicatesDataLoader import NoDuplicatesDataLoader
----> 3 from .ParallelSentencesDataset import ParallelSentencesDataset
4 from .SentencesDataset import SentencesDataset
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/sentence_transformers/datasets/ParallelSentencesDataset.py:4
3 import gzip
----> 4 from .. import SentenceTransformer
5 from ..readers import InputExample
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/sentence_transformers/SentenceTransformer.py:11
10 from numpy import ndarray
---> 11 import transformers
12 from huggingface_hub import HfApi, HfFolder, Repository, hf_hub_url, cached_download
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/transformers/__init__.py:26
25 # Check the dependencies satisfy the minimal versions required.
---> 26 from . import dependency_versions_check
27 from .utils import (
28 OptionalDependencyNotAvailable,
29 _LazyModule,
(...)
42 logging,
43 )
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/transformers/dependency_versions_check.py:16
15 from .dependency_versions_table import deps
---> 16 from .utils.versions import require_version, require_version_core
19 # define which module versions we always want to check at run time
20 # (usually the ones defined in `install_requires` in setup.py)
21 #
22 # order specific notes:
23 # - tqdm must be checked before tokenizers
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/transformers/utils/__init__.py:188
186 else:
187 # just to get the expected `No module named 'google.protobuf'` error
--> 188 from . import sentencepiece_model_pb2
191 WEIGHTS_NAME = "pytorch_model.bin"
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/transformers/utils/sentencepiece_model_pb2.py:17
1 # Generated by the protocol buffer compiler. DO NOT EDIT!
2 # source: sentencepiece_model.proto
3
(...)
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
---> 17 from google.protobuf import descriptor as _descriptor
18 from google.protobuf import message as _message
ModuleNotFoundError: No module named 'google'
The above exception was the direct cause of the following exception:
ImportError Traceback (most recent call last)
Cell In[8], line 2
1 from langchain.embeddings import HuggingFaceEmbeddings
----> 2 embeddings = HuggingFaceEmbeddings(model_name='/Users/uavalos/Documents/gtr-t5-xxl')
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/langchain/embeddings/huggingface.py:54, in HuggingFaceEmbeddings.__init__(self, **kwargs)
51 import sentence_transformers
53 except ImportError as exc:
---> 54 raise ImportError(
55 "Could not import sentence_transformers python package. "
56 "Please install it with `pip install sentence_transformers`."
57 ) from exc
59 self.client = sentence_transformers.SentenceTransformer(
60 self.model_name, cache_folder=self.cache_folder, **self.model_kwargs
61 )
ImportError: Could not import sentence_transformers python package. Please install it with `pip install sentence_transformers`.
``` | Could not import sentence_transformers python package. | https://api.github.com/repos/langchain-ai/langchain/issues/7019/comments | 9 | 2023-07-01T15:22:37Z | 2024-03-18T14:01:19Z | https://github.com/langchain-ai/langchain/issues/7019 | 1,784,012,562 | 7,019 |
[
"hwchase17",
"langchain"
]
| ### Feature request
When I use a jinja2 template, I get the following problem.
template:
```
{% if a %}\n
{{a}}\n
{% endif %}\n
{{b}}
```
When I pass only the `b` variable to the template render, the output looks like this:
```
\n
b
```
Each variable placeholder is followed by a '\n'.
Since `a` is not passed in, a stray '\n' is left after rendering.
The output I want is this:
```
b
```
In jinja2, if you create the template through an Environment with trim_blocks enabled, jinja2 can ignore the '\n'.
This is the original code:
```python
def jinja2_formatter(template: str, **kwargs: Any) -> str:
"""Format a template using jinja2."""
try:
from jinja2 import Template
except ImportError:
raise ImportError(
"jinja2 not installed, which is needed to use the jinja2_formatter. "
"Please install it with `pip install jinja2`."
)
return Template(template).render(**kwargs)
```
I tried modifying it a bit
```python
def jinja2_formatter(template: str, **kwargs: Any) -> str:
"""Format a template using jinja2."""
try:
from jinja2 import Environment
except ImportError:
raise ImportError(
"jinja2 not installed, which is needed to use the jinja2_formatter. "
"Please install it with `pip install jinja2`."
)
env = Environment(trim_blocks=True)
jinja_template = env.from_string(template)
return jinja_template.render(**kwargs)
```
### Motivation
I often use the control semantics of Jinja2 to construct prompt templates, creating several different prompts by passing in different variables.
However, since the Jinja2 renderer in LangChain does not support trim_blocks, it always results in unnecessary blank lines.
Perhaps LLM doesn't care about these extra lines, but as a human being, my OCD makes me feel uncomfortable with them.
### Your contribution
The jinja2_formatter function is too low-level within the Langchain code.
Although the modification is simple, I fear that the PR I submit could potentially affect other parts of the code.
So, if you have time, perhaps you could personally implement this feature. | jinja2 formate suppot trim_blocks mode | https://api.github.com/repos/langchain-ai/langchain/issues/7018/comments | 1 | 2023-07-01T14:43:22Z | 2023-10-07T16:05:07Z | https://github.com/langchain-ai/langchain/issues/7018 | 1,783,981,699 | 7,018 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The following is part of the formatting code for PipelinePromptTemplate:
```python
def _get_inputs(inputs: dict, input_variables: List[str]) -> dict:
return {k: inputs[k] for k in input_variables}
class PipelinePromptTemplate(BasePromptTemplate):
....
def format_prompt(self, **kwargs: Any) -> PromptValue:
for k, prompt in self.pipeline_prompts:
_inputs = _get_inputs(kwargs, prompt.input_variables)
if isinstance(prompt, BaseChatPromptTemplate):
kwargs[k] = prompt.format_messages(**_inputs)
else:
kwargs[k] = prompt.format(**_inputs)
_inputs = _get_inputs(kwargs, self.final_prompt.input_variables)
return self.final_prompt.format_prompt(**_inputs)
```
The function `_get_inputs` should handle missing keywords, for example:
```python
def _get_inputs(inputs: dict, input_variables: List[str]) -> dict:
return {k: inputs.get(k) for k in input_variables}
```
### Motivation
I primarily use the Jinja2 template format and frequently use control statements. Consequently, there are times when I choose not to pass in certain variables. Thus, I hope that PipelinePromptTemplate could support handling missing variables.
### Your contribution
Perhaps, I can submit a PR to add this feature or fix this bug . If you don't have the time to address it, that is. | PipelinePromptTemplate format_prompt should support missing keyword | https://api.github.com/repos/langchain-ai/langchain/issues/7016/comments | 6 | 2023-07-01T13:16:11Z | 2023-10-12T16:07:57Z | https://github.com/langchain-ai/langchain/issues/7016 | 1,783,894,983 | 7,016 |
[
"hwchase17",
"langchain"
]
 | Attaching my code:
```
llm = OpenAI(model_name='gpt-4', temperature=0)
embeddings = OpenAIEmbeddings(model='text-embedding-ada-002')
docs = TextLoader("state_of_the_union.txt").load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(docs)
state_of_union_store = Chroma.from_documents(
texts,
embeddings,
collection_name="state-of-union"
)
vectorstore_info = VectorStoreInfo(
name="state_of_union_address",
description="the most recent state of the Union adress",
vectorstore=state_of_union_store,
)
toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info)
agent_executor = create_vectorstore_agent(llm=llm, toolkit=toolkit, verbose=True)
agent_executor("What did biden say about ketanji brown jackson in the state of the union address?")
``` | How to save embeddings when using Agent for local document Q&A to avoid unnecessary waste by rereading API calls to generate embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/7011/comments | 1 | 2023-07-01T08:31:43Z | 2023-10-07T16:05:17Z | https://github.com/langchain-ai/langchain/issues/7011 | 1,783,681,691 | 7,011 |
[
"hwchase17",
"langchain"
]
 | I want to add memory context to the agent, so I referred to the documentation [here](https://python.langchain.com/docs/modules/memory/how_to/agent_with_memory).
It contains a step like this:

I want to implement the same step with the `create_vectorstore_agent` function,

but I found that `create_vectorstore_agent` is pre-defined:
it has no way to pass extra parameters, such as a suffix (into `ZeroShotAgent.create_prompt`),
which means that if I want to use the agent-with-memory approach from that page,
I must build the agent from scratch and cannot use `create_vectorstore_agent`.
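For reference, a rough sketch of that manual construction (following the agent-with-memory documentation; `llm` and `toolkit` are assumed to be the same objects that would otherwise be passed to `create_vectorstore_agent`):
```python
from langchain.agents import AgentExecutor, ZeroShotAgent
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory

tools = toolkit.get_tools()

prefix = "You are an agent that answers questions using the vector store tools below."
suffix = """Begin!

{chat_history}
Question: {input}
{agent_scratchpad}"""

prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "chat_history", "agent_scratchpad"],
)
memory = ConversationBufferMemory(memory_key="chat_history")
llm_chain = LLMChain(llm=llm, prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, memory=memory, verbose=True
)
```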
| How to add memory with` create_vectorstore_agent` | https://api.github.com/repos/langchain-ai/langchain/issues/7010/comments | 2 | 2023-07-01T08:18:12Z | 2023-10-07T16:05:22Z | https://github.com/langchain-ai/langchain/issues/7010 | 1,783,670,404 | 7,010 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am using a create_pandas_dataframe_agent like so:
```
create_pandas_dataframe_agent(ChatOpenAI(model_name='gpt-4', temperature=0), df, verbose=True,
max_iterations=2,
early_stopping_method="generate",
handle_parsing_errors="I'm sorry, but the question you have asked is not available within the dataset. Is there anything else I can help you with?")
```
When I try to prompt it to do anything that requires a regex, the output I get is:
```
Thought: The client wants to know the top 5 hashtags used in the tweets. To find this, I need to extract all the hashtags from the 'full_text' column of the dataframe, count the frequency of each hashtag, and then return the top 5. However, the dataframe does not contain a column for hashtags. Therefore, I need to create a new column 'hashtags' in the dataframe where each entry is a list of hashtags found in the corresponding 'full_text'. Then, I can count the frequency of each hashtag in this new column and return the top 5.
Action: python_repl_ast
Action Input:
```python
import re
import pandas as pd
# Function to extract hashtags from a text
def extract_hashtags(text):
return re.findall(r'\#\w+', text)
# Create a new column 'hashtags' in the dataframe
df['hashtags'] = df['full_text'].apply(extract_hashtags)
# Flatten the list of hashtags and count the frequency of each hashtag
hashtag_counts = pd.Series([hashtag for hashtags in df['hashtags'] for hashtag in hashtags]).value_counts()
# Get the top 5 hashtags
top_5_hashtags = hashtag_counts.head(5)
top_5_hashtags
```
**Observation: NameError: name 're' is not defined**
```
Can anyone help?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
use this agent:
```
create_pandas_dataframe_agent(ChatOpenAI(model_name='gpt-4', temperature=0), df, verbose=True,
max_iterations=2,
early_stopping_method="generate",
handle_parsing_errors="I'm sorry, but the question you have asked is not available within the dataset. Is there anything else I can help you with?")
```
then prompt it with
top 5 hashtags used
### Expected behavior
the regex should work and return the answer | python tool issue: NameError: name 're' is not defined | https://api.github.com/repos/langchain-ai/langchain/issues/7009/comments | 2 | 2023-07-01T07:39:21Z | 2023-10-07T16:05:28Z | https://github.com/langchain-ai/langchain/issues/7009 | 1,783,632,265 | 7,009 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The Elasticsearch embeddings integration needs a function to add a single text string, since Azure OpenAI currently supports only a single text input per request.
Message from Azure OpenAI Embedding:
embed_documents(texts)
# openai.error.InvalidRequestError: Too many inputs for model None.
# The max number of inputs is 1. We hope to increase the number of inputs per request soon.
# Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.
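In the meantime, a minimal workaround (a sketch, assuming a standard `Embeddings` object) is to call the existing batch method one text at a time before indexing into Elasticsearch:
```python
def embed_documents_one_at_a_time(embeddings, texts):
    # Azure OpenAI currently accepts only one input per embeddings request,
    # so send each text in its own call and collect the vectors.
    return [embeddings.embed_documents([text])[0] for text in texts]
```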
### Motivation
To support using the Azure OpenAI embedding API.
### Your contribution
I can do more test after if this fixed. | ElasticSearch embedding need a function to add single text string as AzureOpenAI only support single text input now | https://api.github.com/repos/langchain-ai/langchain/issues/7004/comments | 1 | 2023-07-01T03:54:40Z | 2023-10-07T16:05:33Z | https://github.com/langchain-ai/langchain/issues/7004 | 1,783,463,930 | 7,004 |
[
"hwchase17",
"langchain"
]
| ### Feature request
[https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/faiss](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/faiss)
There's a section that allows you to filter the documents stored in FAISS.
It would be great to allow
results = db.similarity_search("foo", filter=dict(page=1), k=1, fetch_k=4)
when you are doing
vectorstore.as_retriever(<<Include the filter here>>)
This would allow you to select a subset of your vector documents to chat with.
This is useful if I have multiple documents from multiple sources loaded in the vectorstore.
### Motivation
When I build a conversational chain,
qa = ConversationalRetrievalChain(
   retriever=vectorstore.as_retriever(), ...
I would like to be able to use only a portion of the documents:
qa = ConversationalRetrievalChain(
   retriever=vectorstore.as_retriever(<<Filter goes here>>), ...
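A sketch of what this could look like today, assuming `as_retriever` simply forwards `search_kwargs` to the underlying `similarity_search` (`db` and `llm` are placeholders for the objects in the setup above):
```python
retriever = db.as_retriever(
    search_kwargs={"filter": dict(page=1), "k": 1, "fetch_k": 4}
)
qa = ConversationalRetrievalChain.from_llm(llm, retriever=retriever)
```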
### Your contribution
I can test it :-) | FAISS Support for filter while using the as_retriever() | https://api.github.com/repos/langchain-ai/langchain/issues/7002/comments | 1 | 2023-07-01T02:31:45Z | 2023-10-07T16:05:38Z | https://github.com/langchain-ai/langchain/issues/7002 | 1,783,366,350 | 7,002 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python version: Python 3.10.6
Langchain version: 0.0.219
OS: Ubuntu 22.04
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have simple code like this:
```
from langchain.agents import create_csv_agent
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.agents.agent_types import AgentType
import os
import sys
directory = './test'
f = []
for filename in os.listdir(directory):
if filename.endswith(".csv"):
f.append(directory + "/" +filename)
agent = create_csv_agent(
ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613"),
f,
verbose=True,
agent_type=AgentType.OPENAI_FUNCTIONS,
)
qry ="how many rows are there?"
while True:
if not qry:
qry = input("Q: ")
if qry in ['quit', 'q', 'exit']:
sys.exit()
agent.run(qry)
qry = None
```
I'm using the Titanic dataset:
```
https://github.com/datasciencedojo/datasets/blob/master/titanic.csv
```
The error is below:
```
$ python3 langchain-csv.py
> Entering new chain...
Traceback (most recent call last):
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/agents/openai_functions_agent/base.py", line 112, in _parse_ai_message
_tool_input = json.loads(function_call["arguments"])
File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/mytuition/langchain-csv.py", line 32, in <module>
agent.run(qry)
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 290, in run
return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 166, in __call__
raise e
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/agents/agent.py", line 987, in _call
next_step_output = self._take_next_step(
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/agents/agent.py", line 803, in _take_next_step
raise e
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/agents/agent.py", line 792, in _take_next_step
output = self.agent.plan(
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/agents/openai_functions_agent/base.py", line 212, in plan
agent_decision = _parse_ai_message(predicted_message)
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/agents/openai_functions_agent/base.py", line 114, in _parse_ai_message
raise OutputParserException(
langchain.schema.OutputParserException: Could not parse tool input: {'name': 'python', 'arguments': 'len(df)'} because the `arguments` is not valid JSON.
```
### Expected behavior
Langchain provided answer of total row | Bug of csv agent, basically all query failed with json error | https://api.github.com/repos/langchain-ai/langchain/issues/7001/comments | 13 | 2023-07-01T01:11:10Z | 2024-02-15T16:11:26Z | https://github.com/langchain-ai/langchain/issues/7001 | 1,783,315,774 | 7,001 |
[
"hwchase17",
"langchain"
]
| ### System Info
Most recent versions of all
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [x] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.embeddings.openai import OpenAIEmbeddings
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\agents\__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\agents\agent.py", line 30, in <module>
from langchain.prompts.few_shot import FewShotPromptTemplate
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\prompts\__init__.py", line 12, in <module>
from langchain.prompts.example_selector import (
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\prompts\example_selector\__init__.py", line 4, in <module>
from langchain.prompts.example_selector.semantic_similarity import (
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\prompts\example_selector\semantic_similarity.py", line 10, in <module>
from langchain.vectorstores.base import VectorStore
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\vectorstores\__init__.py", line 2, in <module>
from langchain.vectorstores.alibabacloud_opensearch import (
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\vectorstores\alibabacloud_opensearch.py", line 9, in <module>
from langchain.vectorstores.base import VectorStore
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\vectorstores\base.py", line 372, in <module>
class VectorStoreRetriever(BaseRetriever, BaseModel):
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\vectorstores\base.py", line 388, in VectorStoreRetriever
def validate_search_type(cls, values: Dict) -> Dict:
File "pydantic\class_validators.py", line 126, in pydantic.class_validators.root_validator.dec
File "pydantic\class_validators.py", line 144, in pydantic.class_validators._prepare_validator
pydantic.errors.ConfigError: duplicate validator function "langchain.vectorstores.base.VectorStoreRetriever.validate_search_type"; if this is intended, set `allow_reuse=True`
### Expected behavior
The error I am facing is due to a duplicate validator function in the VectorStoreRetriever | Unable to load OPENAI Embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/6999/comments | 1 | 2023-06-30T22:26:08Z | 2023-10-06T16:05:33Z | https://github.com/langchain-ai/langchain/issues/6999 | 1,783,222,113 | 6,999 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage -- ChatMessage takes in an arbitrary role parameter. Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.
```
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI(temperature=0)
chat.predict_messages([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# >> AIMessage(content="J'aime programmer.", additional_kwargs={})
```
It is useful to understand how chat models are different from a normal LLM, but it can often be handy to just be able to treat them the same. LangChain makes that easy by also exposing an interface through which you can interact with a chat model as you would a normal LLM. You can access this through the predict interface.
```
chat.predict("Translate this sentence from English to French. I love programming.")
# >> J'aime programmer
```
### Idea or request for content:
This code gives me the error:
```
AttributeError: 'ChatOpenAI' object has no attribute 'predict'
```
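For what it's worth, on versions where `predict` is not available, the message-based interface from the first snippet still works (a sketch):
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(temperature=0)
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
```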
Is there really an exposed interface for predict? | DOC: <Please write a comprehensive title after the 'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/6998/comments | 0 | 2023-06-30T20:58:25Z | 2023-06-30T21:02:55Z | https://github.com/langchain-ai/langchain/issues/6998 | 1,783,160,731 | 6,998 |
[
"hwchase17",
"langchain"
]
| ### System Info
Macbook OSX
Python 3.11.4 (main, Jun 25 2023, 18:18:14) [Clang 14.0.3 (clang-1403.0.22.14.1)] on darwin
LangChain Environment:
sdk_version:0.0.17
library:langchainplus_sdk
platform:macOS-13.3.1-arm64-arm-64bit
runtime:python
runtime_version:3.11.4
I'm getting this error, and no matter what configurations I try, I cannot get past it.
```
AttributeError: 'Credentials' object has no attribute 'with_scopes'
```
My abbreviated code snippet:
```
from langchain.document_loaders import GoogleDriveLoader
loader = GoogleDriveLoader(
folder_id="xxxx", recursive=True,
)
docs = loader.load()
```
```
cat ~/.credentials/credentials.json
{
"installed": {
"client_id": "xxxxxxxxx.apps.googleusercontent.com",
"project_id": "xxxxxxxxx",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_secret": "xxxxxxxxx",
"redirect_uris": [
"http://localhost"
]
}
}
```
```
#gcloud info
Google Cloud SDK [437.0.1]
Platform: [Mac OS X, arm] uname_result(system='Darwin', node='xxxxs-MBP-3', release='22.4.0', version='Darwin Kernel Version 22.4.0: Mon Mar 6 20:59:28 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T6000', machine='arm64')
Locale: (None, 'UTF-8')
Python Version: [3.11.4 (main, Jun 25 2023, 18:18:14) [Clang 14.0.3 (clang-1403.0.22.14.1)]]
Python Location: [/Users/xxxx/.asdf/installs/python/3.11.4/bin/python3]
OpenSSL: [OpenSSL 1.1.1u 30 May 2023]
Requests Version: [2.25.1]
urllib3 Version: [1.26.9]
Default CA certs file: [/Users/xxxx/google-cloud-sdk/lib/third_party/certifi/cacert.pem]
Site Packages: [Disabled]
Installation Root: [/Users/xxxx/google-cloud-sdk]
Installed Components:
gsutil: [5.24]
core: [2023.06.30]
bq: [2.0.93]
gcloud-crc32c: [1.0.0]
System PATH: [/Users/xxxx/.asdf/plugins/python/shims:/Users/xxxx/.asdf/installs/python/3.11.4/bin:/Users/xxxx/.rvm/gems/ruby-3.0.3/bin:/Users/xxxx/.rvm/gems/ruby-3.0.3@global/bin:/Users/xxxx/.rvm/rubies/ruby-3.0.3/bin:/Users/xxxx/.rvm/bin:/Users/xxxx/google-cloud-sdk/bin:/Users/xxxx/.asdf/shims:/opt/homebrew/opt/asdf/libexec/bin:/Users/xxxx/.sdkman/candidates/gradle/current/bin:/Users/xxxx/.jenv/shims:/Users/xxxx/.jenv/bin:/Users/xxxx/.nvm/versions/node/v17.6.0/bin:/opt/homebrew/opt/[email protected]/bin:/opt/homebrew/opt/mysql-client/bin:/opt/homebrew/bin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Applications/Postgres.app/Contents/Versions/latest/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin:/Users/xxxx/Library/Android/sdk/emulator:/Users/xxxx/Library/Android/sdk/platform-tools]
Python PATH: [/Users/xxxx/google-cloud-sdk/lib/third_party:/Users/xxxx/google-cloud-sdk/lib:/Users/xxxx/.asdf/installs/python/3.11.4/lib/python311.zip:/Users/xxxx/.asdf/installs/python/3.11.4/lib/python3.11:/Users/xxxx/.asdf/installs/python/3.11.4/lib/python3.11/lib-dynload]
Cloud SDK on PATH: [True]
Kubectl on PATH: [/usr/local/bin/kubectl]
Installation Properties: [/Users/xxxx/google-cloud-sdk/properties]
User Config Directory: [/Users/xxx/.config/gcloud]
Active Configuration Name: [default]
Active Configuration Path: [/Users/xxxx/.config/gcloud/configurations/config_default]
Account: [[email protected]]
Project: [project-name-replaced]
Current Properties:
[compute]
zone: [europe-west1-b] (property file)
region: [europe-west1] (property file)
[core]
account: [[email protected]] (property file)
disable_usage_reporting: [True] (property file)
project: [project-name-replaced] (property file)
Logs Directory: [/Users/xxx/.config/gcloud/logs]
Last Log File: [/Users/xxx/.config/gcloud/logs/xxxx.log]
git: [git version 2.39.2]
ssh: [OpenSSH_9.0p1, LibreSSL 3.3.6]
```
### Who can help?
@eyurtsev I think?
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
It's happening with a base app, nothing special. Using this template:
https://github.com/hwchase17/langchain-streamlit-template
### Expected behavior
Expected to get the oAuth window? | GoogleDriveLoader: AttributeError: 'Credentials' object has no attribute 'with_scopes' | https://api.github.com/repos/langchain-ai/langchain/issues/6997/comments | 3 | 2023-06-30T20:25:55Z | 2024-03-11T07:56:07Z | https://github.com/langchain-ai/langchain/issues/6997 | 1,783,124,885 | 6,997 |
[
"hwchase17",
"langchain"
]
| ### System Info
running google/flan-t5-xxl on ghcr.io/huggingface/text-generation-inference:0.8
langchain==0.0.220
text-generation==0.6.0
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`HuggingFaceTextGenInference` does not accept a temperature parameter of 0 or None.
```
llm = HuggingFaceTextGenInference(
inference_server_url="http://localhost:8010/",
temperature=0.
)
llm("What did foo say about bar?")
```
ValidationError: `temperature` must be strictly positive
```
llm = HuggingFaceTextGenInference(
inference_server_url="http://localhost:8010/",
temperature=None
)
llm("What did foo say about bar?")
```
ValidationError: 1 validation error for HuggingFaceTextGenInference
temperature none is not an allowed value (type=type_error.none.not_allowed)
### Expected behavior
I expect to be able to pass a parameter to `HuggingFaceTextGenInference` that instructs the model to do greedy decoding without getting an error.
It seems that the issue is that LangChain requires `temperature` to be a strictly positive float, while the text-generation [client requires temperature](https://github.com/huggingface/text-generation-inference/blob/main/clients/python/text_generation/types.py#L74C4-L74C4) to be None for greedy decoding or >0 for sampling.
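One possible direction for a fix (purely a sketch, not the current implementation) is to declare the field as optional so that `None` can be forwarded to the client for greedy decoding:
```python
from typing import Optional

from pydantic import BaseModel


class HuggingFaceTextGenInference(BaseModel):
    # Simplified sketch of the wrapper's fields, not the real class.
    inference_server_url: str
    # None -> greedy decoding in the text-generation client; floats must be > 0.
    temperature: Optional[float] = None
```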
| HuggingFaceTextGenInference does not accept 0 temperature | https://api.github.com/repos/langchain-ai/langchain/issues/6993/comments | 4 | 2023-06-30T18:51:33Z | 2023-09-29T16:20:59Z | https://github.com/langchain-ai/langchain/issues/6993 | 1,783,014,565 | 6,993 |
[
"hwchase17",
"langchain"
]
| ### System Info
macOS system
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
loading malfunction
| langchain takes much time to reload | https://api.github.com/repos/langchain-ai/langchain/issues/6991/comments | 3 | 2023-06-30T18:17:21Z | 2023-10-07T16:05:43Z | https://github.com/langchain-ai/langchain/issues/6991 | 1,782,974,926 | 6,991 |
[
"hwchase17",
"langchain"
]
The `VectaraRetriever` class does not use the parameters passed in by `Vectara.as_retriever()`. The class definition sets up `vectorstore` and `search_kwargs`, but there is no check for values coming in from `Vectara.as_retriever()`.
https://github.com/hwchase17/langchain/blob/e3b7effc8f39333076c2bedd7306de81ce988de6/langchain/vectorstores/vectara.py#L307 | VectaraRetriever does not use the parameters passed in by `Vectara.as_retriever()` | https://api.github.com/repos/langchain-ai/langchain/issues/6984/comments | 4 | 2023-06-30T16:42:27Z | 2023-10-08T16:05:51Z | https://github.com/langchain-ai/langchain/issues/6984 | 1,782,833,784 | 6,984 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
The links in the documentation to make contributions to pages point to the wrong file location within the repository. For example, the link on the "Quickstart" page points to the filepath `docs/docs/get_started/quickstart.mdx`, when it should point to `docs/docs_skeleton/docs/get_started/quickstart.mdx`. Presumably this is a result of #6300 and the links not getting updated.
### Idea or request for content:
If someone more familiar with the structure of the docs could tell me where to find the base template where the link templates are specified, and preferably a bit of guidance on how the documentation is structured now to make it easier to figure out the correct links, then I can make the updates. | DOC: "Edit this page" returns 404 | https://api.github.com/repos/langchain-ai/langchain/issues/6983/comments | 1 | 2023-06-30T16:33:59Z | 2023-10-06T16:05:48Z | https://github.com/langchain-ai/langchain/issues/6983 | 1,782,823,112 | 6,983 |
[
"hwchase17",
"langchain"
]
| ### System Info
Here is my code to initialize the chain; it should come with the default prompt template:
```
qa_source_chain = RetrievalQAWithSourcesChain.from_chain_type(
llm=llm_chat,
chain_type="stuff",
retriever=db_test.as_retriever()
)
```
Here is the source chain object. I suspect the default prompt template for this chain is broken:
RetrievalQAWithSourcesChain(memory=None, callbacks=None, callback_manager=None, verbose=False, combine_documents_chain=StuffDocumentsChain(memory=None, callbacks=None, callback_manager=None, verbose=False, input_key='input_documents', output_key='output_text', llm_chain=LLMChain(memory=None, callbacks=None, callback_manager=None, verbose=False, prompt=PromptTemplate(input_variables=['summaries', 'question'], output_parser=None, partial_variables={}, template='Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES"). \nIf you don\'t know the answer, just say that you don\'t know. Don\'t try to make up an answer.\nALWAYS return a "SOURCES" part in your answer.\n\nQUESTION: Which state/country\'s law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we won’t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet’s use this moment to reset. 
Let’s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet’s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\nSource: 24-pl\nContent: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as I’ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what’s happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\nSource: 5-pl\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt’s based on DARPA—the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose—to drive breakthroughs in cancer, Alzheimer’s, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. 
\n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:', template_format='f-string', validate_template=True), llm=ChatOpenAI(verbose=False, callbacks=None, callback_manager=None, client=<class 'openai.api_resources.chat_completion.ChatCompletion'>, model_name='gpt-3.5-turbo', temperature=0.0, model_kwargs={}, openai_api_key='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', openai_api_base=None, openai_organization=None, openai_proxy=None, request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=2000), output_key='text'), document_prompt=PromptTemplate(input_variables=['page_content', 'source'], output_parser=None, partial_variables={}, template='Content: {page_content}\nSource: {source}', template_format='f-string', validate_template=True), document_variable_name='summaries', document_separator='\n\n'), question_key='question', input_docs_key='docs', answer_key='answer', sources_answer_key='sources', return_source_documents=False, retriever=VectorStoreRetriever(vectorstore=<langchain.vectorstores.chroma.Chroma object at 0x14fa7cb50>, search_type='similarity', search_kwargs={}), reduce_k_below_max_tokens=False, max_tokens_limit=3375)
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Import and initialize a RetrievalQAWithSourcesChain using the RetrievalQAWithSourcesChain.from_chain_type() method.
2. Check the default prompt template of the RetrievalQAWithSourcesChain object.
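For step 2, the template can be read straight off the nested chain (attribute names taken from the object dump above):
```python
print(qa_source_chain.combine_documents_chain.llm_chain.prompt.template)
```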
### Expected behavior
The default template should be very similar to a RetrievalQA chain object | RetrievalQAWithSourcesChain object default prompt is problematic | https://api.github.com/repos/langchain-ai/langchain/issues/6982/comments | 2 | 2023-06-30T16:22:47Z | 2023-10-06T16:05:53Z | https://github.com/langchain-ai/langchain/issues/6982 | 1,782,809,247 | 6,982 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Everything was working fine 10 minutes ago, but now when I use the ConversationalRetrievalQAChain I always get a 401 error. I tried 2 different API keys and the error persists; I don't really know what the issue might be. Any ideas?
This is the error log:
[chain/error] [1:chain:conversational_retrieval_chain] [315ms] Chain run errored with error: "Request failed with status code 401"
error Error: Request failed with status code 401
at createError (EDITED\node_modules\axios\lib\core\createError.js:16:15)
at settle (EDITED\node_modules\axios\lib\core\settle.js:17:12)
at IncomingMessage.handleStreamEnd (EDITED\node_modules\axios\lib\adapters\http.js:322:11)
at IncomingMessage.emit (node:events:525:35)
at IncomingMessage.emit (node:domain:489:12)
at endReadableNT (node:internal/streams/readable:1359:12)
at processTicksAndRejections (node:internal/process/task_queues:82:21) {
config: {
transitional: {
silentJSONParsing: true,
forcedJSONParsing: true,
clarifyTimeoutError: false
},
adapter: [Function: httpAdapter],
transformRequest: [ [Function: transformRequest] ],
transformResponse: [ [Function: transformResponse] ],
timeout: 0,
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
maxBodyLength: -1,
validateStatus: [Function: validateStatus],
headers: {
Accept: 'application/json, text/plain, */*',
'Content-Type': 'application/json',
'User-Agent': 'OpenAI/NodeJS/3.2.1',
Authorization: 'Bearer EDITED',
'Content-Length': 53
},
method: 'post',
data: '{"model":"text-embedding-ada-002","input":"hi there"}',
url: 'https://api.openai.com/v1/embeddings'
},
request: <ref *1> ClientRequest {
_events: [Object: null prototype] {
abort: [Function (anonymous)],
aborted: [Function (anonymous)],
connect: [Function (anonymous)],
error: [Function (anonymous)],
socket: [Function (anonymous)],
timeout: [Function (anonymous)],
finish: [Function: requestOnFinish]
},
_eventsCount: 7,
_maxListeners: undefined,
outputData: [],
outputSize: 0,
writable: true,
destroyed: false,
_last: true,
chunkedEncoding: false,
shouldKeepAlive: false,
maxRequestsOnConnectionReached: false,
_defaultKeepAlive: true,
useChunkedEncodingByDefault: true,
sendDate: false,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
strictContentLength: false,
_contentLength: 53,
_hasBody: true,
_trailer: '',
finished: true,
_headerSent: true,
_closed: false,
socket: TLSSocket {
_tlsOptions: [Object],
_secureEstablished: true,
_securePending: false,
_newSessionPending: false,
_controlReleased: true,
secureConnecting: false,
_SNICallback: null,
servername: 'api.openai.com',
alpnProtocol: false,
authorized: true,
authorizationError: null,
encrypted: true,
_events: [Object: null prototype],
_eventsCount: 10,
connecting: false,
_hadError: false,
_parent: null,
_host: 'api.openai.com',
_closeAfterHandlingError: false,
_readableState: [ReadableState],
_maxListeners: undefined,
_writableState: [WritableState],
allowHalfOpen: false,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server: undefined,
_server: null,
ssl: [TLSWrap],
_requestCert: true,
_rejectUnauthorized: true,
parser: null,
_httpMessage: [Circular *1],
[Symbol(res)]: [TLSWrap],
[Symbol(verified)]: true,
[Symbol(pendingSession)]: null,
[Symbol(async_id_symbol)]: 56,
[Symbol(kHandle)]: [TLSWrap],
[Symbol(lastWriteQueueSize)]: 0,
[Symbol(timeout)]: null,
[Symbol(kBuffer)]: null,
[Symbol(kBufferCb)]: null,
[Symbol(kBufferGen)]: null,
[Symbol(kCapture)]: false,
[Symbol(kSetNoDelay)]: false,
[Symbol(kSetKeepAlive)]: true,
[Symbol(kSetKeepAliveInitialDelay)]: 60,
[Symbol(kBytesRead)]: 0,
[Symbol(kBytesWritten)]: 0,
[Symbol(connect-options)]: [Object]
},
_header: 'POST /v1/embeddings HTTP/1.1\r\n' +
'Accept: application/json, text/plain, */*\r\n' +
'Content-Type: application/json\r\n' +
'User-Agent: OpenAI/NodeJS/3.2.1\r\n' +
'Authorization: Bearer EDITED' +
'Content-Length: 53\r\n' +
'Host: api.openai.com\r\n' +
'Connection: close\r\n' +
'\r\n',
_keepAliveTimeout: 0,
_onPendingData: [Function: nop],
agent: Agent {
_events: [Object: null prototype],
_eventsCount: 2,
_maxListeners: undefined,
defaultPort: 443,
protocol: 'https:',
options: [Object: null prototype],
requests: [Object: null prototype] {},
sockets: [Object: null prototype],
freeSockets: [Object: null prototype] {},
keepAliveMsecs: 1000,
keepAlive: false,
maxSockets: Infinity,
maxFreeSockets: 256,
scheduling: 'lifo',
maxTotalSockets: Infinity,
totalSocketCount: 1,
maxCachedSessions: 100,
_sessionCache: [Object],
[Symbol(kCapture)]: false
},
socketPath: undefined,
method: 'POST',
maxHeaderSize: undefined,
insecureHTTPParser: undefined,
joinDuplicateHeaders: undefined,
path: '/v1/embeddings',
_ended: true,
res: IncomingMessage {
_readableState: [ReadableState],
_events: [Object: null prototype],
_eventsCount: 4,
_maxListeners: undefined,
socket: [TLSSocket],
httpVersionMajor: 1,
httpVersionMinor: 1,
httpVersion: '1.1',
complete: true,
rawHeaders: [Array],
rawTrailers: [],
joinDuplicateHeaders: undefined,
aborted: false,
upgrade: false,
url: '',
method: null,
statusCode: 401,
statusMessage: 'Unauthorized',
client: [TLSSocket],
_consuming: false,
_dumped: false,
req: [Circular *1],
responseUrl: 'https://api.openai.com/v1/embeddings',
redirects: [],
[Symbol(kCapture)]: false,
[Symbol(kHeaders)]: [Object],
[Symbol(kHeadersCount)]: 22,
[Symbol(kTrailers)]: null,
[Symbol(kTrailersCount)]: 0
},
aborted: false,
timeoutCb: null,
upgradeOrConnect: false,
parser: null,
maxHeadersCount: null,
reusedSocket: false,
host: 'api.openai.com',
protocol: 'https:',
_redirectable: Writable {
_writableState: [WritableState],
_events: [Object: null prototype],
_eventsCount: 3,
_maxListeners: undefined,
_options: [Object],
_ended: true,
_ending: true,
_redirectCount: 0,
_redirects: [],
_requestBodyLength: 53,
_requestBodyBuffers: [],
_onNativeResponse: [Function (anonymous)],
_currentRequest: [Circular *1],
_currentUrl: 'https://api.openai.com/v1/embeddings',
[Symbol(kCapture)]: false
},
[Symbol(kCapture)]: false,
[Symbol(kBytesWritten)]: 0,
[Symbol(kEndCalled)]: true,
[Symbol(kNeedDrain)]: false,
[Symbol(corked)]: 0,
[Symbol(kOutHeaders)]: [Object: null prototype] {
accept: [Array],
'content-type': [Array],
'user-agent': [Array],
authorization: [Array],
'content-length': [Array],
host: [Array]
},
[Symbol(errored)]: null,
[Symbol(kUniqueHeaders)]: null
},
response: {
status: 401,
statusText: 'Unauthorized',
headers: {
date: 'Fri, 30 Jun 2023 16:16:14 GMT',
'content-type': 'application/json; charset=utf-8',
'content-length': '301',
connection: 'close',
vary: 'Origin',
'x-request-id': '6cebdf4cc24074e40d3c1c80d9ab9fff',
'strict-transport-security': 'max-age=15724800; includeSubDomains',
'cf-cache-status': 'DYNAMIC',
server: 'cloudflare',
'cf-ray': '7df7b66a7cdbba73-EZE',
'alt-svc': 'h3=":443"; ma=86400'
},
config: {
transitional: [Object],
adapter: [Function: httpAdapter],
transformRequest: [Array],
transformResponse: [Array],
timeout: 0,
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
maxBodyLength: -1,
validateStatus: [Function: validateStatus],
headers: [Object],
method: 'post',
data: '{"model":"text-embedding-ada-002","input":"hi there"}',
url: 'https://api.openai.com/v1/embeddings'
},
request: <ref *1> ClientRequest {
_events: [Object: null prototype],
_eventsCount: 7,
_maxListeners: undefined,
outputData: [],
outputSize: 0,
writable: true,
destroyed: false,
_last: true,
chunkedEncoding: false,
shouldKeepAlive: false,
maxRequestsOnConnectionReached: false,
_defaultKeepAlive: true,
useChunkedEncodingByDefault: true,
sendDate: false,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
strictContentLength: false,
_contentLength: 53,
_hasBody: true,
_trailer: '',
finished: true,
_headerSent: true,
_closed: false,
socket: [TLSSocket],
_header: 'POST /v1/embeddings HTTP/1.1\r\n' +
'Accept: application/json, text/plain, */*\r\n' +
'Content-Type: application/json\r\n' +
'User-Agent: OpenAI/NodeJS/3.2.1\r\n' +
'Authorization: Bearer EDITED' +
'Content-Length: 53\r\n' +
'Host: api.openai.com\r\n' +
'Connection: close\r\n' +
'\r\n',
_keepAliveTimeout: 0,
_onPendingData: [Function: nop],
agent: [Agent],
socketPath: undefined,
method: 'POST',
maxHeaderSize: undefined,
insecureHTTPParser: undefined,
joinDuplicateHeaders: undefined,
path: '/v1/embeddings',
_ended: true,
res: [IncomingMessage],
aborted: false,
timeoutCb: null,
upgradeOrConnect: false,
parser: null,
maxHeadersCount: null,
reusedSocket: false,
host: 'api.openai.com',
protocol: 'https:',
_redirectable: [Writable],
[Symbol(kCapture)]: false,
[Symbol(kBytesWritten)]: 0,
[Symbol(kEndCalled)]: true,
[Symbol(kNeedDrain)]: false,
[Symbol(corked)]: 0,
[Symbol(kOutHeaders)]: [Object: null prototype],
[Symbol(errored)]: null,
[Symbol(kUniqueHeaders)]: null
},
data: { error: [Object] }
},
isAxiosError: true,
toJSON: [Function: toJSON],
attemptNumber: 1,
retriesLeft: 6
}
### Suggestion:
_No response_ | Issue: 401 error when using ConversationalRetrievalQAChain | https://api.github.com/repos/langchain-ai/langchain/issues/6981/comments | 3 | 2023-06-30T16:22:20Z | 2023-10-06T16:05:58Z | https://github.com/langchain-ai/langchain/issues/6981 | 1,782,808,760 | 6,981 |
[
"hwchase17",
"langchain"
]
| ### System Info
Traceback (most recent call last):
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 85, in validate_environment
from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/mine.py", line 3, in <module>
llama = LlamaCppEmbeddings(model_path="./models/ggml-gpt4all-j.bin")
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 1102, in pydantic.main.validate_model
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 89, in validate_environment
raise ModuleNotFoundError(
ModuleNotFoundError: Could not import llama-cpp-python library. Please install the llama-cpp-python library to use this embedding model: pip install llama-cpp-python
### Who can help?
@sirrrik
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
pip install the latest langChain package from pypy on mac
### Expected behavior
Traceback (most recent call last):
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 85, in validate_environment
from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/mine.py", line 3, in <module>
llama = LlamaCppEmbeddings(model_path="./models/ggml-gpt4all-j.bin")
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 1102, in pydantic.main.validate_model
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 89, in validate_environment
raise ModuleNotFoundError(
ModuleNotFoundError: Could not import llama-cpp-python library. Please install the llama-cpp-python library to use this embedding model: pip install llama-cpp-python | llma Embeddings error | https://api.github.com/repos/langchain-ai/langchain/issues/6980/comments | 3 | 2023-06-30T16:22:20Z | 2023-12-01T16:09:18Z | https://github.com/langchain-ai/langchain/issues/6980 | 1,782,808,752 | 6,980 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain v0.0.220
Python 3.11.3
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Using `BraveSearch` with an agent always returns an error 422
[I've created a Colab notebook with an example displaying this error.](https://colab.research.google.com/drive/1mUr6KWND4ZYmvFnPywbJevZLzssJMvBR?usp=sharing)
I've tried many times with both `ZERO_SHOT_REACT_DESCRIPTION` and `OPENAI_FUNCTIONS` agents
### Expected behavior
Agents should be able to use this tool | Error 422 when using BraveSearch | https://api.github.com/repos/langchain-ai/langchain/issues/6974/comments | 1 | 2023-06-30T15:12:25Z | 2023-10-06T16:06:03Z | https://github.com/langchain-ai/langchain/issues/6974 | 1,782,703,377 | 6,974 |
[
"hwchase17",
"langchain"
]
| ### Feature request
It would be great if the [JSONLinesLoader](https://js.langchain.com/docs/modules/indexes/document_loaders/examples/file_loaders/jsonlines) that's available in the JS version of Langchain could be ported to the Python version.
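While waiting for a port, here is a rough sketch of what such a loader could look like in Python (the class name mirrors the JS one and is not part of LangChain yet; `content_key` is an assumption about the record layout):

```python
import json
from typing import List

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader


class JSONLinesLoader(BaseLoader):
    """Sketch: load a .jsonl file into one Document per line."""

    def __init__(self, file_path: str, content_key: str = "text"):
        self.file_path = file_path
        self.content_key = content_key

    def load(self) -> List[Document]:
        docs = []
        with open(self.file_path, encoding="utf-8") as f:
            for i, line in enumerate(f):
                if not line.strip():
                    continue  # skip blank lines
                record = json.loads(line)
                docs.append(
                    Document(
                        page_content=str(record.get(self.content_key, "")),
                        metadata={"source": self.file_path, "seq_num": i},
                    )
                )
        return docs
```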
### Motivation
I find working with jsonl files to be frequently easier than json files.
### Your contribution
Not sure; I'm quite new to Python, so I don't know how to implement this. | Using JSONLinesLoader in Python | https://api.github.com/repos/langchain-ai/langchain/issues/6973/comments | 6 | 2023-06-30T13:31:03Z | 2023-07-05T13:33:14Z | https://github.com/langchain-ai/langchain/issues/6973 | 1,782,539,266 | 6,973
[
"hwchase17",
"langchain"
]
| ### System Info
Traceback (most recent call last):
File "/tmp/pycharm_project_93/test_qa_generation.py", line 50, in <module>
result = qa.run({"question": query, "vectordbkwargs": vectordbkwargs })
File "/root/.cache/pypoetry/virtualenvs/irc-llm-0Bb4gJSe-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 273, in run
return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
File "/root/.cache/pypoetry/virtualenvs/irc-llm-0Bb4gJSe-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 151, in __call__
final_outputs: Dict[str, Any] = self.prep_outputs(
File "/root/.cache/pypoetry/virtualenvs/irc-llm-0Bb4gJSe-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 216, in prep_outputs
self.memory.save_context(inputs, outputs)
File "/root/.cache/pypoetry/virtualenvs/irc-llm-0Bb4gJSe-py3.10/lib/python3.10/site-packages/langchain/memory/chat_memory.py", line 34, in save_context
input_str, output_str = self._get_input_output(inputs, outputs)
File "/root/.cache/pypoetry/virtualenvs/irc-llm-0Bb4gJSe-py3.10/lib/python3.10/site-packages/langchain/memory/chat_memory.py", line 21, in _get_input_output
prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
File "/root/.cache/pypoetry/virtualenvs/irc-llm-0Bb4gJSe-py3.10/lib/python3.10/site-packages/langchain/memory/utils.py", line 11, in get_prompt_input_key
raise ValueError(f"One input key expected got {prompt_input_keys}")
ValueError: One input key expected got ['question', 'vectordbkwargs']
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Milvus
from langchain.text_splitter import CharacterTextSplitter
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT
from langchain.document_loaders import TextLoader
from langchain.memory import ConversationBufferWindowMemory
from pymilvus import connections, FieldSchema, CollectionSchema, DataType, Collection, utility
MILVUS_HOST = "_milvus.XXXX.com"
MILVUS_PORT = 19530
connections.connect(
alias="default",
host=MILVUS_HOST,
port=MILVUS_PORT
)
utility.drop_collection('LangChainCollection')
import os
os.environ['OPENAI_API_KEY'] = "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"
loader = TextLoader("/tmp/qa_chain1.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = Milvus.from_documents(documents, embeddings,
connection_args={"host": "_milvus.XXXX.com", "port": "19530"}, search_params={"metric_type": "IP", "params": {"nprobe": 200}, "offset": 0})
memory = ConversationBufferWindowMemory(k=2 , memory_key="chat_history", return_messages=True)
vectordbkwargs = {"search_distance": 0.9}
retriever = vectorstore.as_retriever()
retriever.search_kwargs = {"k": 5}
qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(temperature=0, model='gpt-3.5-turbo-16k'), retriever, memory=memory, condense_question_prompt=CONDENSE_QUESTION_PROMPT, verbose=True)
query = "Introduce Microsoft"
result = qa.run({"question": query, "vectordbkwargs": vectordbkwargs })
print(result)
```
### Expected behavior
return the relevant document with a score bigger than 0.9 when matching vectors by IP metric. | Cannot pass search_distance key to ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/6971/comments | 2 | 2023-06-30T12:34:14Z | 2023-07-03T04:37:06Z | https://github.com/langchain-ai/langchain/issues/6971 | 1,782,453,316 | 6,971 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.219
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
>>> from langchain import HuggingFacePipeline
>>> mod = HuggingFacePipeline(model_id="tiiuae/falcon-7b-instruct", task="text-generation")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/guillem.garcia/miniconda3/envs/rene/lib/python3.10/site-packages/langchain/load/serializable.py", line 74, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for HuggingFacePipeline
task
extra fields not permitted (type=value_error.extra)
### Expected behavior
I expect it to work. When I instantiate a pipeline with Falcon using Hugging Face transformers directly, it works.
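For reference, two patterns that do seem to work (both are sketches; the generation kwargs are only examples, and the second assumes `task` is accepted by `from_model_id` rather than by the constructor):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline

# Option 1: wrap an explicit transformers pipeline.
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b-instruct", trust_remote_code=True)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=256)
llm = HuggingFacePipeline(pipeline=pipe)

# Option 2: let LangChain build the pipeline; here `task` is a valid argument.
llm = HuggingFacePipeline.from_model_id(
    model_id="tiiuae/falcon-7b-instruct",
    task="text-generation",
)
```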
Also, Is there a way to deactivate pydantic? It makes it impossible to modify anything or make langchain compatible with our code | HuggingFacePipeline not working | https://api.github.com/repos/langchain-ai/langchain/issues/6970/comments | 2 | 2023-06-30T11:37:16Z | 2023-06-30T11:59:52Z | https://github.com/langchain-ai/langchain/issues/6970 | 1,782,380,145 | 6,970 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.208
openai==0.27.8
python==3.10.6
### Who can help?
@hwchase17 Could you please help me resolve this error: whenever I run an agent with the LLM as ChatOpenAI(), I get the response **"Do I need to use a tool? No."**
Meanwhile, the agent returns the response perfectly fine with llm=OpenAI().
The irony is that two days prior the ChatOpenAI() agent was working and returned results perfectly fine.
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import langchain
from langchain.prompts.base import StringPromptTemplate
from langchain.prompts import PromptTemplate,StringPromptTemplate
from langchain.agents import Tool, AgentExecutor, AgentOutputParser,LLMSingleActionAgent,initialize_agent
from langchain import OpenAI,LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.tools import DuckDuckGoSearchRun
from langchain.schema import AgentAction,AgentFinish
import re
from typing import List,Union
import os
from getpass import getpass
os.environ["OPENAI_API_KEY"] = getpass()
#
search = DuckDuckGoSearchRun()
#3640 def duck_wrapper(input_text):
search_results = search.run(f"site:webmd.com {input_text}")
return search_results
tools = [
Tool(
name = "Search WebMD",
func=duck_wrapper,
description="useful for when you need to answer medical and pharmalogical questions"
)
]
# Set up the base template
template = """Answer the following questions as best you can, but speaking as compasionate medical professional. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin! Remember to answer as a compassionate medical professional when giving your final answer.
Previous conversation history:
{history}
Question: {input}
"""
tool_names = [tool.name for tool in tools]
history = None
prompt = PromptTemplate(template=template,input_variables=["tools","tool_names","history","input"])
from langchain.agents.agent_types import AgentType
agent = initialize_agent(tools=tools,
llm=ChatOpenAI(),
agent = AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
verbose=False,
return_intermediate_steps=True
)
try:
response = agent({"input":"how to treat acid reflux?",
"tools":tools,
"tool_names":tool_names,
"history":None,
"chat_history":None},
return_only_outputs=True)
except ValueError as e:
response = str(e)
if not response.startswith("Could not parse LLM output: `"):
raise e
response = response.removeprefix("Could not parse LLM output: `").removesuffix("`")
print(response)
```
### Expected behavior
{'output': 'There are a few lifestyle changes and home remedies that can help treat acid reflux, such as avoiding food triggers, eating smaller and more frequent meals, elevating the head of the bed while sleeping, and avoiding lying down after eating. Over-the-counter medications such as antacids and proton pump inhibitors can also be effective in treating acid reflux. However, it is always best to consult with a doctor if symptoms persist or worsen.', 'intermediate_steps': [(AgentAction(tool='Search WebMD', tool_input='how to treat acid reflux', log='Thought: Do I need to use a tool? Yes\nAction: Search WebMD\nAction Input: how to treat acid reflux'), 'Eat Earlier. Going to bed on a full stomach makes nighttime heartburn more likely. A full stomach puts pressure on the valve at the top of the stomach, which is supposed to keep stomach acid out ... Trim the fat off of meat and poultry, and cut the skin off chicken. Tweaks like these might be enough to tame your heartburn. Tomatoes (including foods like salsa and marinara sauce) and citrus ... Abdominal bloating. Abdominal pain. Vomiting. Indigestion. Burning or gnawing feeling in the stomach between meals or at night. Hiccups. Loss of appetite. Vomiting blood or coffee ground-like ... Esophagitis Symptoms. Symptoms of esophagitis include: Difficult or painful swallowing. Acid reflux. Heartburn. A feeling of something of being stuck in the throat. Chest pain. Nausea. Vomiting. This backflow (acid reflux) can irritate the lining of the esophagus. People with GERD may have heartburn, a burning sensation in the back of the throat, chronic cough, laryngitis, and nausea.')]} | langchain.chat_models.ChatOpenAI does not returns a response while langchain.OpeAI does rteturn results | https://api.github.com/repos/langchain-ai/langchain/issues/6968/comments | 3 | 2023-06-30T10:55:01Z | 2023-10-23T16:07:22Z | https://github.com/langchain-ai/langchain/issues/6968 | 1,782,326,661 | 6,968 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Running a langchain agent takes a lot of time, about 20s, even though the question is simple. Is there any good way to reduce this time?
### Suggestion:
_No response_ | langchain agent took a lot of time | https://api.github.com/repos/langchain-ai/langchain/issues/6965/comments | 5 | 2023-06-30T08:34:20Z | 2023-10-06T16:06:08Z | https://github.com/langchain-ai/langchain/issues/6965 | 1,782,126,816 | 6,965 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I'd love to be able to extend the request URL with parameters.
Currently I can only provide request headers. This does not cover my use case.
### Motivation
Some APIs don't authenticate via headers and instead use URL parameters to provide API keys and Tokens.
Example: [Trello API](https://developer.atlassian.com/cloud/trello/rest/api-group-actions/#api-group-actions)
View first code snippet there for the action and you'll see.
curl --request GET \
--url 'https://api.trello.com/1/actions/{id}?key=APIKey&token=APIToken'
### Your contribution
I would need some handholding to create my first contribution here due to the testing suite but I've got some code I've tested locally.
```python
class Requests(BaseModel):
"""Wrapper around requests to handle auth and async.
The main purpose of this wrapper is to handle authentication (by saving
headers) and enable easy async methods on the same base object.
"""
headers: Optional[Dict[str, str]] = None
aiosession: Optional[aiohttp.ClientSession] = None
url_params: Optional[Dict[str, str]] = None
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
    def get(self, url: str, **kwargs: Any) -> requests.Response:
        """GET the URL and return the text."""
        if self.url_params:
            # prepend '?' when the URL has no query string yet, otherwise '&'
            url = url + ('&' if '?' in url else '?') + '&'.join([f'{k}={v}' for k, v in self.url_params.items()])
        return requests.get(url, headers=self.headers, **kwargs)
    def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:
        """POST to the URL and return the text."""
        if self.url_params:
            url = url + ('&' if '?' in url else '?') + '&'.join([f'{k}={v}' for k, v in self.url_params.items()])
        return requests.post(url, json=data, headers=self.headers, **kwargs)
    def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:
        """PATCH the URL and return the text."""
        if self.url_params:
            url = url + ('&' if '?' in url else '?') + '&'.join([f'{k}={v}' for k, v in self.url_params.items()])
        return requests.patch(url, json=data, headers=self.headers, **kwargs)
    def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:
        """PUT the URL and return the text."""
        if self.url_params:
            url = url + ('&' if '?' in url else '?') + '&'.join([f'{k}={v}' for k, v in self.url_params.items()])
        return requests.put(url, json=data, headers=self.headers, **kwargs)
    def delete(self, url: str, **kwargs: Any) -> requests.Response:
        """DELETE the URL and return the text."""
        if self.url_params:
            url = url + ('&' if '?' in url else '?') + '&'.join([f'{k}={v}' for k, v in self.url_params.items()])
        return requests.delete(url, headers=self.headers, **kwargs)
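
# Hypothetical usage sketch (illustrative values only): Trello-style auth via URL parameters.
requests_wrapper = Requests(url_params={"key": "APIKey", "token": "APIToken"})
response = requests_wrapper.get("https://api.trello.com/1/actions/some-action-id")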
``` | Adding support for URL Parameters in RequestsWrapper | https://api.github.com/repos/langchain-ai/langchain/issues/6963/comments | 3 | 2023-06-30T07:58:56Z | 2024-06-16T08:34:56Z | https://github.com/langchain-ai/langchain/issues/6963 | 1,782,079,348 | 6,963 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Please update the CSVLoader to take list of columns as source_column.
### Motivation
This update would be helpful if the documents are embedded and used with vector DBs like Qdrant. Since Qdrant offers a filtering option, having these columns in the Document metadata (used as the payload) would be a great enhancement.
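To illustrate, the first call below is today's single-column API, and the second (commented out) is the proposed, not-yet-existing form:

```python
from langchain.document_loaders import CSVLoader

# Current API: one source_column only.
loader = CSVLoader(file_path="products.csv", source_column="product_id")

# Proposed (hypothetical): several columns copied into each Document's metadata,
# so a vector store such as Qdrant could filter on them as payload fields.
# loader = CSVLoader(file_path="products.csv", source_column=["product_id", "category"])
```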
### Your contribution
No | Enable CSVLoader to take list of columns as source_column (or metadata) | https://api.github.com/repos/langchain-ai/langchain/issues/6961/comments | 6 | 2023-06-30T07:44:50Z | 2023-12-19T00:50:23Z | https://github.com/langchain-ai/langchain/issues/6961 | 1,782,060,256 | 6,961 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```python
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from transformers import AutoModel, AutoTokenizer
model_name = ".\\langchain-models\\THUDM\\chatglm2-6b"
local_model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
local_tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
db = SQLDatabase.from_uri("mysql+pymysql://root:root@localhost/magic")
toolkit = SQLDatabaseToolkit(db=db, llm=local_model, tokenizer=local_tokenizer)
tables = toolkit.list_tables_sql_db()
print(tables)
```
```
Traceback (most recent call last):
  File "D:\chat\langchain-ChatGLM\test_sql.py", line 8, in <module>
    toolkit = SQLDatabaseToolkit(db=db, llm=local_model)
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for SQLDatabaseToolkit
llm
  value is not a valid dict (type=type_error.dict)
```
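A sketch of one possible fix: the toolkit's `llm` field expects a LangChain language model, not a raw transformers model, so the model can be wrapped first. This assumes ChatGLM2 can be driven through a standard `text-generation` pipeline (it may instead need a custom LLM wrapper), and note the toolkit takes no `tokenizer` argument:

```python
from transformers import AutoModel, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase

model_name = ".\\langchain-models\\THUDM\\chatglm2-6b"
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Wrap the transformers model in a LangChain LLM before handing it to the toolkit.
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512)
llm = HuggingFacePipeline(pipeline=pipe)

db = SQLDatabase.from_uri("mysql+pymysql://root:root@localhost/magic")
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
print([tool.name for tool in toolkit.get_tools()])
```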
### Suggestion:
_No response_ | i try to connect mysql database,but it give me a error about llm value is not a valid dict(type=type_error.dict),how to solve the problem? | https://api.github.com/repos/langchain-ai/langchain/issues/6959/comments | 4 | 2023-06-30T07:02:07Z | 2023-10-09T16:06:21Z | https://github.com/langchain-ai/langchain/issues/6959 | 1,782,008,769 | 6,959 |
[
"hwchase17",
"langchain"
]
| ### Feature request
CharacterTextSplitter, when splitting a roughly 1GB code base, emits enough warnings to exceed the log buffer, like:
```
Created a chunk of size 2140, which is longer than the specified 900
Created a chunk of size 1269, which is longer than the specified 900
Created a chunk of size 1955, which is longer than the specified 900
Created a chunk of size 3410, which is longer than the specified 900
Created a chunk of size 1192, which is longer than the specified 900
Created a chunk of size 1392, which is longer than the specified 900
Created a chunk of size 1540, which is longer than the specified 900
- very very long...
```
Walking through the relevant code:
```
def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]:
# We now want to combine these smaller pieces into medium size
# chunks to send to the LLM.
separator_len = self._length_function(separator)
docs = []
current_doc: List[str] = []
total = 0
for d in splits:
_len = self._length_function(d)
if (
total + _len + (separator_len if len(current_doc) > 0 else 0)
> self._chunk_size
):
if total > self._chunk_size:
logger.warning(
f"Created a chunk of size {total}, "
f"which is longer than the specified {self._chunk_size}"
)
if len(current_doc) > 0:
doc = self._join_docs(current_doc, separator)
if doc is not None:
docs.append(doc)
# Keep on popping if:
# - we have a larger chunk than in the chunk overlap
# - or if we still have any chunks and the length is long
while total > self._chunk_overlap or (
total + _len + (separator_len if len(current_doc) > 0 else 0)
> self._chunk_size
and total > 0
):
total -= self._length_function(current_doc[0]) + (
separator_len if len(current_doc) > 1 else 0
)
current_doc = current_doc[1:]
current_doc.append(d)
total += _len + (separator_len if len(current_doc) > 1 else 0)
doc = self._join_docs(current_doc, separator)
if doc is not None:
docs.append(doc)
return docs
```
I know langchain tries to keep the semantic integrity of each code language by using language-specific separators.
### Motivation
Can we just pop the last split once `total > self._chunk_size` in `def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]:`?
### Your contribution
Maybe later I'll propose a PR | CharacterTextSplitter constanly generate chunks longer than given chunk_size | https://api.github.com/repos/langchain-ai/langchain/issues/6958/comments | 4 | 2023-06-30T06:46:26Z | 2023-12-13T16:08:43Z | https://github.com/langchain-ai/langchain/issues/6958 | 1,781,991,448 | 6,958 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", return_intermediate_steps=True, verbose=True)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", return_intermediate_steps=True, memory=ConversationBufferMemory(memory_key="chat_history", input_key='input', output_key="output"), verbose=True)
```
https://python.langchain.com/docs/modules/agents/how_to/intermediate_steps
<img width="857" alt="image" src="https://github.com/hwchase17/langchain/assets/42615243/9ea12e4b-1775-4114-a750-0625e1cf8302">
### Suggestion:
I don't know how to solve it! | ValueError: `run` not supported when there is not exactly one output key. Got ['output', 'intermediate_steps'] | https://api.github.com/repos/langchain-ai/langchain/issues/6956/comments | 8 | 2023-06-30T05:52:27Z | 2024-05-19T05:48:47Z | https://github.com/langchain-ai/langchain/issues/6956 | 1,781,940,281 | 6,956 |
[
"hwchase17",
"langchain"
]
| ### System Info
ConversationalRetrievalChain with Question Answering with sources
```python
llm = OpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")
chain = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(),
question_generator=question_generator,
combine_docs_chain=doc_chain,
)
```
If I set the chain_type to "refine", it will not return sources.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just like the tutorial code.
### Expected behavior
I want the 'refine' chain to return sources. | Conversational Retrieval QA with sources cannot return source | https://api.github.com/repos/langchain-ai/langchain/issues/6954/comments | 2 | 2023-06-30T03:17:04Z | 2023-07-03T07:34:15Z | https://github.com/langchain-ai/langchain/issues/6954 | 1,781,805,088 | 6,954
[
"hwchase17",
"langchain"
]
| ### Feature request
Is there any way to store SentenceBERT/BERT/SpaCy/Doc2Vec based embeddings in a vector database using langchain?
```
pages= "page content"
embeddings = OpenAIEmbeddings()
persist_directory = 'db'
vectordb = Chroma.from_documents(documents=pages, embedding=embeddings, persist_directory=persist_directory)
```
Can I use **SentenceBERT/BERT/SpaCy/Doc2Vec embeddings** instead of OpenAIEmbeddings() in the above code?
If it is possible, what is the syntax for that?
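For sentence-transformers (SentenceBERT-style) models, one option that should drop into the snippet above is `HuggingFaceEmbeddings`; SpaCy and Doc2Vec would need a custom `Embeddings` wrapper. A minimal sketch, where the model name is only an example and `pages` is assumed to be a list of Document objects:

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

persist_directory = 'db'
vectordb = Chroma.from_documents(documents=pages, embedding=embeddings, persist_directory=persist_directory)
```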
### Motivation
For using native embedding formats like SentenceBERT/BERT/SpaCy/Doc2Vec in langchain.
### Your contribution
For using native embedding formats like SentenceBERT/BERT/SpaCy/Doc2Vec in langchain | Sentencebert/Bert/Spacy/Doc2vec embedding support | https://api.github.com/repos/langchain-ai/langchain/issues/6952/comments | 8 | 2023-06-30T03:13:49Z | 2023-10-06T16:06:18Z | https://github.com/langchain-ai/langchain/issues/6952 | 1,781,803,067 | 6,952
[
"hwchase17",
"langchain"
]
| my code:
```
from langchain.llms import HuggingFacePipeline
from langchain.chains import ConversationalRetrievalChain
pipe = pipeline(
"text-generation", # task type
model=model,
tokenizer=tokenizer,
max_new_tokens=1024,
device=0, # very trick, gpu rank
)
local_llm = HuggingFacePipeline(pipeline=pipe)
qa = ConversationalRetrievalChain.from_llm(local_llm, retriever=retriever)
questions = [
"void Plan::ScheduleWork(map<Edge*, Want>::iterator want_e) {",
]
chat_history = []
for question in questions:
result = qa({"question": question, "chat_history": chat_history})
# chat_history.append((question, result["answer"]))
print("**chat_history**", chat_history)
print(f"=>**Question**: {question} \n")
print(f"=>**Answer**: {result['answer']} \n\n")
```
which generate like this:
```
**chat_history** []
=>**Question**: void Plan::ScheduleWork(map<Edge*, Want>::iterator want_e) {
=>**Answer**: void Plan::ScheduleWork(map<Edge*, Want>::iterator want_e) {
if (want_e->second == kWantToFinish) {
// This edge has already been scheduled. We can get here again if an edge
// and one of its dependencies share an order-only input, or if a node
// duplicates an out edge (see https://github.question/: void Plan::ScheduleWork(map<Edge*, Want>::iterator want_e) {
Helpful Answer: void Plan::ScheduleWork(map<Edge*, Want>::iterator want_e) {
if (want_e->second == kWantToFinish) {
// This edge has already been scheduled. We can get here again if an edge
// and one of its dependencies share an order-only input, or if a node
// duplicates an out edge (see https://github.com/ninja-build/ninja/pull/519).
// Avoid scheduling the work again.
return;
}
assert(want_e->second == kWantToStart);
want_e->second = kWantToFinish;
```
My `model` is a locally fine-tuned StarCoder with PEFT adapters merged, and `retriever` is a DeepLake vectorstore with some args like cosine similarity, etc.
I do not get where the "Helpful Answer" block comes from. WHERE ON EARTH DOES IT COME FROM???
### Suggestion:
_No response_ | Issue: Get Confused with ConversationalRetrievalChain + HuggingFacePipeline always gen wizard "Helpful answer: " | https://api.github.com/repos/langchain-ai/langchain/issues/6951/comments | 2 | 2023-06-30T02:41:25Z | 2023-10-06T16:06:23Z | https://github.com/langchain-ai/langchain/issues/6951 | 1,781,775,217 | 6,951 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Just curious if there is a feature to check whether the summarized output from a source document hallucinates or not by extending the checker chain.
### Motivation
LLMSummarizationCheckerChain checks facts of summaries using LLM knowledge. The motivation is to develop a feature that checks for facts from an augmented source document.
### Your contribution
Will submit a PR if this does not exist yet and if it is a nice to have feature | LLMSummarizationCheckerFromSource | https://api.github.com/repos/langchain-ai/langchain/issues/6950/comments | 1 | 2023-06-30T02:19:22Z | 2023-10-06T16:06:29Z | https://github.com/langchain-ai/langchain/issues/6950 | 1,781,757,970 | 6,950 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I genuinely wanted to know the reasoning behind the inclusion of "Mu" in "PyMuPDFLoader". Coming from an Indian background, my friend and I held 1-2 hours of discussion over what it represents, and the conclusions were not so appropriate.
We landed on this discussion after we noticed the inconsistency in the naming of the single PDF loader (i.e. "PyMuPDFLoader") and the multiple PDF loader (i.e. "PyPDFDirectoryLoader").
(Being a pioneer in LLM orchestration, LangChain has opened a wider scope for open-source collaborations; we admire the open-source revolution and thank @hwchase17 for it.)
### Suggestion:
_No response_ | Inconsistent naming of document loaders | https://api.github.com/repos/langchain-ai/langchain/issues/6947/comments | 1 | 2023-06-30T00:41:17Z | 2023-06-30T15:56:45Z | https://github.com/langchain-ai/langchain/issues/6947 | 1,781,674,685 | 6,947 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
From the [documentation](https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples), it is not clear whether ‘FewShotPromptTemplate’ can be used with Chat models: in particular, it does not appear to be compatible with OpenAI’s suggested best practices (mentioned on the same page) of having alternating “name: example_user” and “name: example_$role” JSON passed to the OpenAI API for each few-shot example message passed in the call.
### Idea or request for content:
Is it planned to have a separate “ChatFewShotPromptTemplate” class? If so, perhaps a placeholder could be added to the documentation to make this clear. If not, perhaps the documentation could be updated to make the mismatch explicit so that the reader understands the limitations of the current FewShotPromptTemplate in the context of Chat models. | FewShotPromptTemplate with Chat models | https://api.github.com/repos/langchain-ai/langchain/issues/6946/comments | 1 | 2023-06-30T00:35:45Z | 2023-10-06T16:06:34Z | https://github.com/langchain-ai/langchain/issues/6946 | 1,781,669,365 | 6,946 |
[
"hwchase17",
"langchain"
]
| ### Feature request
[Brave Search](https://api.search.brave.com/app/dashboard) is an interesting new search engine.
It can be used in place of `Google Search`.
### Motivation
Users who are subscribers of Brave Search can use this.
### Your contribution
I can try to implement it if somebody is interested in this integration. | add `Brave` Search | https://api.github.com/repos/langchain-ai/langchain/issues/6939/comments | 2 | 2023-06-29T21:47:51Z | 2023-06-30T15:24:25Z | https://github.com/langchain-ai/langchain/issues/6939 | 1,781,557,474 | 6,939 |
[
"hwchase17",
"langchain"
]
| ### System Info
Chroma v0.2.36, python 3.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
settings = Settings(
chroma_db_impl='duckdb+parquet',
persist_directory="db",
anonymized_telemetry=False
)
pages = self._load_single_document(file_path=file_path)
docs = text_splitter.split_documents(pages)
db = Chroma.from_documents(docs, embedding_function, client_settings=settings)
```
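A hedged note: with the duckdb+parquet backend, the parquet files are typically only flushed when `persist()` is called (or the client shuts down), so an explicit call after indexing may be what is missing; `persist_directory` may also need to be passed to `from_documents` directly. A sketch:

```python
db = Chroma.from_documents(docs, embedding_function, persist_directory="db", client_settings=settings)
# Explicitly flush the duckdb+parquet store to disk instead of relying on
# teardown when the Flask process exits.
db.persist()
```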
### Expected behavior
All files for the database should be created in the db directory when db.from_documents() is called. However, this is not happening: the parquet files are not written at that point; only the files in the index directory are created at this time.
Later, when the application is killed (it is a Flask application, so between requests the DB is torn down and should thus be persisted), the parquet files show up. | Chroma (duckdb+parquet) DB isn't saving the parquet files for persisted DB until application is killed | https://api.github.com/repos/langchain-ai/langchain/issues/6938/comments | 9 | 2023-06-29T21:15:14Z | 2023-10-06T16:39:27Z | https://github.com/langchain-ai/langchain/issues/6938 | 1,781,527,088 | 6,938
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.218
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

### Expected behavior
The module should be found automatically. | Missing reference to o365_toolkit module in __init__.py of agent_toolkits | https://api.github.com/repos/langchain-ai/langchain/issues/6936/comments | 4 | 2023-06-29T19:23:25Z | 2023-09-28T16:20:50Z | https://github.com/langchain-ai/langchain/issues/6936 | 1,781,364,740 | 6,936 |
[
"hwchase17",
"langchain"
]
| ### System Info
The SageMaker Endpoint validation function currently throws a missing credential error if the region name is not provided in the input. However, it is crucial to perform input validation for the region name to provide the user with clear error information.
GitHub Issue Reference:
For the code reference related to this issue, please visit: [GitHub Link](https://github.com/hwchase17/langchain/blob/7dcc698ebf4ebf4eb7331d25cec279f402918629472b/langchain/llms/sagemaker_endpoint.py#L177)
### Who can help?
@3coins
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This can be reproduced by creating a SageMaker Endpoint LLM in LangChain without specifying a region name. Please ensure that you have valid AWS credentials configured.
### Expected behavior
The error message must state
```Region name is missing. Please enter valid region name. E.g: us-west-2.``` | SageMaker Endpoint - Validations | https://api.github.com/repos/langchain-ai/langchain/issues/6934/comments | 3 | 2023-06-29T17:43:12Z | 2023-11-30T16:08:26Z | https://github.com/langchain-ai/langchain/issues/6934 | 1,781,255,515 | 6,934 |
[
"hwchase17",
"langchain"
]
| ### Feature request
A universal approach to adding openai functions, specifically for output format control, to `LLMChain`: create a class that takes in dict, json, pydantic, or even rail specs (from guardrails) and preprocesses classes or dictionaries into openai functions (function, parameters, and properties) for the `llm_kwargs` of the `LLMChain` class, in conjunction with the openai function parsers.
### Motivation
It is hard to use the function calling feature right now if you are not working with agents (can access using `format_tool_to_openai_function`) or a specific use case, such as the Extractor or QA chains. The idea is to create a general class that takes in dict, json, pydantic, or even rail specs (from guardrails) to preprocess classes or dictionaries into openai functions for better control of the output format.
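A rough sketch of the kind of preprocessing this describes, using pydantic v1's `.schema()` (nested models would need `$ref` resolution, and the `Person` model is just an example):

```python
from typing import Type
from pydantic import BaseModel, Field


class Person(BaseModel):
    """Information about a person mentioned in the text."""

    name: str = Field(description="The person's full name")
    age: int = Field(description="The person's age in years")


def pydantic_to_openai_function(model: Type[BaseModel]) -> dict:
    """Build an OpenAI function spec (name, description, parameters) from a pydantic model."""
    schema = model.schema()
    return {
        "name": schema["title"],
        "description": schema.get("description", ""),
        "parameters": {
            "type": "object",
            "properties": schema["properties"],
            "required": schema.get("required", []),
        },
    }


functions = [pydantic_to_openai_function(Person)]
# e.g. llm_kwargs={"functions": functions, "function_call": {"name": "Person"}}
```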
### Your contribution
I can help with this, should I put this in the `docs/modules` folder? | General OpenAI Function Mapping from Pydantic, Dicts directly to function, parameters, and properties | https://api.github.com/repos/langchain-ai/langchain/issues/6933/comments | 9 | 2023-06-29T17:07:09Z | 2023-07-08T01:50:27Z | https://github.com/langchain-ai/langchain/issues/6933 | 1,781,214,707 | 6,933 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello!
I am currently using Langchain to interface with a [customLLM](https://python.langchain.com/docs/modules/model_io/models/llms/how_to/custom_llm) async.
To do so, I am overriding the original `LLM` class as explained in the tutorial as such:
```
class LLMInterface(LLM):
@property
def _llm_type(self) -> str:
return "Ruggero's LLM"
async def send_request(self, payload):
return await some_function_that_queries_custom_llm(payload)
def _call(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
) -> str:
if stop is not None:
raise ValueError("stop kwargs are not permitted.")
payload = json.dumps({"messages": [{"role": "user", "content": prompt}]})
future = self.run_in_executor(self.send_request, payload)
raw_response = future.result() # get the actual result
print(raw_response)
# Parse JSON response
response_json = json.loads(raw_response)
# Extract content
for message in response_json["messages"]:
if message["role"] == "bot":
content = message["content"]
return content
# Return empty string if no bot message found
return ""
```
Unfortunately I get:
```
159 reductor = getattr(x, "__reduce_ex__", None)
160 if reductor is not None:
--> 161 rv = reductor(4)
162 else:
163 reductor = getattr(x, "__reduce__", None)
TypeError: cannot pickle '_queue.SimpleQueue' object
```
Even if I use an executor, I get a similar problem:
```
def run_in_executor(self, coro_func: Any, *args: Any) -> Any:
future_result = Future()
def wrapper():
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
result = loop.run_until_complete(coro_func(*args))
future_result.set_result(result)
except Exception as e:
future_result.set_exception(e)
finally:
loop.run_until_complete(loop.shutdown_asyncgens())
loop.close()
Thread(target=wrapper).start()
return future_result
```
In my code I use Langchain as such:
```
llm_chain = ConversationChain(llm=llm, memory=memory)
output = llm_chain.run(self.llm_input)
```
What is the appropriate way to interface a custom LLM that it is queried async?
i know that Langchain supports [async calls ](https://blog.langchain.dev/async-api/) but I was not able to make it work
Thank you!
### Suggestion:
providing a
```
async def _async_call(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
) -> str:
```
method would be ideal
Any alternative work-around would be fine too
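A sketch of one such workaround, assuming the installed LangChain version exposes an async hook named `_acall` on the `LLM` base class (chains are then driven with `arun`/`acall`); `some_function_that_queries_custom_llm` is the same helper as above:

```python
import json
from typing import List, Optional

from langchain.callbacks.manager import AsyncCallbackManagerForLLMRun
from langchain.llms.base import LLM


class AsyncLLMInterface(LLM):
    @property
    def _llm_type(self) -> str:
        return "custom-async-llm"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, run_manager=None) -> str:
        raise NotImplementedError("This LLM is async-only; use arun/acall.")

    async def _acall(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
    ) -> str:
        payload = json.dumps({"messages": [{"role": "user", "content": prompt}]})
        # Await the remote model directly; no executor/thread juggling needed.
        raw_response = await some_function_that_queries_custom_llm(payload)
        response_json = json.loads(raw_response)
        for message in response_json["messages"]:
            if message["role"] == "bot":
                return message["content"]
        return ""


# Then drive the chain asynchronously, e.g. inside an async function:
# output = await ConversationChain(llm=AsyncLLMInterface(), memory=memory).arun(user_input)
```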
Thank you! | Issue: How to create async calls to custom LLMs | https://api.github.com/repos/langchain-ai/langchain/issues/6932/comments | 2 | 2023-06-29T17:06:20Z | 2023-06-29T22:20:21Z | https://github.com/langchain-ai/langchain/issues/6932 | 1,781,213,657 | 6,932 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I would like to ask how we should deal with multiple destination chains when the chains expect different input variables.
For example, in the tutorial for MultiPromptChain, I would like math questions to be directed to the PalChain instead of the standard LLMChain. With the initial LLMRouterChain, the router prompt uses `input` as the input variable. However, when it has decided that the input `what is 2+2` is a math question and should be routed to PalChain, I am presented with the error
```
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in _validate_inputs(self, inputs)
101 missing_keys = set(self.input_keys).difference(inputs)
102 if missing_keys:
--> 103 raise ValueError(f"Missing some input keys: {missing_keys}")
104
105 def _validate_outputs(self, outputs: Dict[str, Any]) -> None:
ValueError: Missing some input keys: {'question'}
```
manually replacing the MATH_PROMPT that PalChain uses from {question} to {input} works but I would like to know how I can specify the input variable that the destination chain is expecting when setting up the destination_chains array here:
```
chain = MultiPromptChain(router_chain=router_chain,
destination_chains=destination_chains,
default_chain=default_chain,
verbose=True
)
```
been at it for 2 nights so am seeking help. thanks!
### Suggestion:
_No response_ | Issue: How to handle RouterChain when 1 or more destination chain(s) which is expecting a different input variable? | https://api.github.com/repos/langchain-ai/langchain/issues/6931/comments | 8 | 2023-06-29T16:45:54Z | 2023-08-03T08:30:01Z | https://github.com/langchain-ai/langchain/issues/6931 | 1,781,186,169 | 6,931 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The PRs wait for review (for any feedback) for a very long time.
- Some sort of triage would be helpful.
- Some sort of PR closure can be helpful. If something is wrong with a PR, this should be explained/discussed/addressed. Not providing **any feedback** is demoralizing for contributors.
Thanks!
PS Take a look at the PR list and check the PRs that passed all checks and have no answer/feedback. Hundreds of such PRs?
### Suggestion:
_No response_ | PR-s wait the review forever | https://api.github.com/repos/langchain-ai/langchain/issues/6930/comments | 7 | 2023-06-29T16:19:06Z | 2023-11-19T22:54:48Z | https://github.com/langchain-ai/langchain/issues/6930 | 1,781,148,858 | 6,930 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Please add support to invoke [Amazon SageMaker Asynchronous Endpoints](https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html).
This would require some code changes to the _call method of the SageMaker Endpoint class. Therefore, there is a need either to create a new class, called SagemakerAsyncEndpoint, or to introduce logic in the SagemakerEndpoint class to check whether the endpoint is async or not.
### Motivation
They are a great way to have models running on expensive GPU instances while keeping costs under control, especially during the POC phase.
### Your contribution
Submitting a PR. | Support for Amazon SageMaker Asynchronous Endpoints | https://api.github.com/repos/langchain-ai/langchain/issues/6928/comments | 4 | 2023-06-29T15:31:14Z | 2024-01-30T00:46:57Z | https://github.com/langchain-ai/langchain/issues/6928 | 1,781,073,650 | 6,928 |
[
"hwchase17",
"langchain"
]
| ### System Info
* Langchain-0.0.215
* Python3.8.6
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The chain types in `RetrievalQA.from_chain_type()`:
* `stuff`: works successfully
* `refine`: does not work unless the correct naming or parameters are used. #6912
* `map_rerank`: does not work
* `map_reduce`: does not work
The error for each type looks like the following:
code:
```python
prompt_template = """
Use the following pieces of context to answer the question, if you don't know the answer, leave it blank don't try to make up an answer.
{context}
Question: {question}
Answer in JSON representations
"""
QA_PROMPT = PromptTemplate(
template=prompt_template,
input_variables=['context', 'question']
)
chain_type_kwargs = {
'prompt': QA_PROMPT,
'verbose': True
}
docs = PyMuPDFLoader('file.pdf').load()
splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size,
chunk_overlap=chunk_overlap
)
docs = splitter.split_documents(docs)
embeddings = OpenAIEmbeddings()
db = Chroma.from_documents(
documents=docs,
embedding= embeddings
)
qa_chain = RetrievalQA.from_chain_type(
llm=OpenAI(temperature=0.2),
chain_type='map_rerank',
retriever=db.as_retriever(),
chain_type_kwargs=chain_type_kwargs
)
```
result:
```
ValidationError Traceback (most recent call last)
[c:\Users\JunXiang\AppData\Local\Programs\Python\Python38\lib\site-packages\langchain\chains\retrieval_qa\base.py](file:///C:/Users/JunXiang/AppData/Local/Programs/Python/Python38/lib/site-packages/langchain/chains/retrieval_qa/base.py) in from_chain_type(cls, llm, chain_type, chain_type_kwargs, **kwargs)
89 """Load chain from chain type."""
90 _chain_type_kwargs = chain_type_kwargs or {}
---> 91 combine_documents_chain = load_qa_chain(
92 llm, chain_type=chain_type, **_chain_type_kwargs
93 )
[c:\Users\JunXiang\AppData\Local\Programs\Python\Python38\lib\site-packages\langchain\chains\question_answering\__init__.py](file:///C:/Users/JunXiang/AppData/Local/Programs/Python/Python38/lib/site-packages/langchain/chains/question_answering/__init__.py) in load_qa_chain(llm, chain_type, verbose, callback_manager, **kwargs)
236 f"Should be one of {loader_mapping.keys()}"
...
[c:\Users\JunXiang\AppData\Local\Programs\Python\Python38\lib\site-packages\pydantic\main.cp38-win_amd64.pyd](file:///C:/Users/JunXiang/AppData/Local/Programs/Python/Python38/lib/site-packages/pydantic/main.cp38-win_amd64.pyd) in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for MapRerankDocumentsChain
__root__
Output parser of llm_chain should be a RegexParser, got None (type=value_error)
```
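Judging from the error message, the `map_rerank` chain type seems to require the prompt's output parser to be a `RegexParser` so it can split the answer from a score; a sketch of a prompt built that way (the template wording is only illustrative):

```python
from langchain.output_parsers.regex import RegexParser
from langchain.prompts import PromptTemplate

rerank_parser = RegexParser(
    regex=r"(.*?)\nScore: (\d+)",
    output_keys=["answer", "score"],
)

rerank_prompt = PromptTemplate(
    template=(
        "Use the following pieces of context to answer the question.\n"
        "{context}\n"
        "Question: {question}\n"
        "Give your answer, then rate on a new line how well the context answered "
        "the question as 'Score: <0-100>'."
    ),
    input_variables=["context", "question"],
    output_parser=rerank_parser,
)

chain_type_kwargs = {"prompt": rerank_prompt, "verbose": True}
```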
### Expected behavior
It should not crash when I try to run it. | ValidationError: 1 validation error for MapRerankDocumentsChain | https://api.github.com/repos/langchain-ai/langchain/issues/6926/comments | 11 | 2023-06-29T15:15:32Z | 2024-06-01T00:07:43Z | https://github.com/langchain-ai/langchain/issues/6926 | 1,781,037,761 | 6,926
[
"hwchase17",
"langchain"
]
| ### Feature request
A colleague and I would like to implement an iterator version of the AgentExecutor. That is, we'd like to expose each intermediate step of the plan/action/observation loop. This should look something like:
```python3
inputs = ... # complex question that requires tool usage, multi-step
for step in agent_executor.iter(inputs=inputs):
do_stuff_with(step)
```
### Motivation
This would be useful for a few applications. By hooking into agent steps we could:
- Route different thoughts/actions/observations conditionally to:
- different frontend components
- different postprocessing/analysis logic
- other agents
We could also "pause" agent execution and intervene.
### Your contribution
Yes, we have a PR ~ready. | Iterator version of AgentExecutor | https://api.github.com/repos/langchain-ai/langchain/issues/6925/comments | 2 | 2023-06-29T15:13:26Z | 2023-07-25T16:33:49Z | https://github.com/langchain-ai/langchain/issues/6925 | 1,781,033,687 | 6,925 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hello, I'd like to ask about supporting [BingNewsAPI](https://www.microsoft.com/en-us/bing/apis/bing-news-search-api). Its API and response format differ from the Bing web search API, so I cannot use the existing [BingSearchAPIWrapper](https://python.langchain.com/docs/modules/agents/tools/integrations/bing_search) `results` or `run` method.
I know that you allow a separate host URL in the existing BingSearchAPIWrapper, but as the response format is different, there is a parsing error when calling the `results` method.
### Motivation
I cannot use the existing [BingSearchAPIWrapper](https://python.langchain.com/docs/modules/agents/tools/integrations/bing_search) `results` or `run` method with [BingNewsAPI](https://www.microsoft.com/en-us/bing/apis/bing-news-search-api).
### Your contribution
If you allow me, I'd like to create pr for BingNewsSearchAPIWrapper utilities. | Support BingNewsApiWrapper on utilities | https://api.github.com/repos/langchain-ai/langchain/issues/6924/comments | 2 | 2023-06-29T14:28:14Z | 2023-08-31T07:22:13Z | https://github.com/langchain-ai/langchain/issues/6924 | 1,780,951,536 | 6,924 |
[
"hwchase17",
"langchain"
]
| ### System Info
Trying to get an agent to describe the contents of a particular URL for me; however, **my agent does not execute past the first step**. Attached is an image showing what my code looks like and where it freezes as it runs.
<img width="683" alt="freezing_agent" src="https://github.com/hwchase17/langchain/assets/97087128/b3888b6e-f016-479e-ac1a-3036b7f16d97">
Thanks <3 !
(also posted as discussion, was not sure where to add)
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from config import SERPAPI_API_KEY, OPENAI_API_KEY
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent
from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit
from langchain.tools.playwright.utils import create_async_playwright_browser
import asyncio
async_browser = create_async_playwright_browser()
browser_toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)
tools = browser_toolkit.get_tools()
gpt_model = "gpt-4"
llm = ChatOpenAI(
temperature=0,
model_name=gpt_model,
openai_api_key=OPENAI_API_KEY
)
agent_chain = initialize_agent(tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
async def main():
response = await agent_chain.arun(input="Browse to https://www.theverge.com/2023/3/14/23638033/openai-gpt-4-chatgpt-multimodal-deep-learning and describe it to me.")
return response
result = asyncio.run(main())
print(result)
```
### Expected behavior
Enter chain, navigate to browser, read content in browser, return description.
Based on: https://python.langchain.com/docs/modules/agents/agent_types/structured_chat.html | Structured Agent Search Not Working | https://api.github.com/repos/langchain-ai/langchain/issues/6923/comments | 5 | 2023-06-29T14:25:29Z | 2024-03-13T19:55:50Z | https://github.com/langchain-ai/langchain/issues/6923 | 1,780,945,041 | 6,923 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Hey,
Love your product, and just wanted to give some constructive feedback: the new layout and organization strategy on "https://python.langchain.com/docs/get_started/introduction.html" makes the docs significantly harder to navigate.
### Idea or request for content:
_No response_ | DOC: | https://api.github.com/repos/langchain-ai/langchain/issues/6921/comments | 4 | 2023-06-29T13:42:48Z | 2023-09-28T18:26:50Z | https://github.com/langchain-ai/langchain/issues/6921 | 1,780,846,938 | 6,921 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.200
Python 3.11.4
Windows 10
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Use with `load_qa_chain`
Error in console:
```
Error in on_chain_start callback: 'name'
```
Log file content is missing "Entering new chain"
### Expected behavior
1. No error in console
2. "Entering new chain" is present in log file
Fix: access dict with fallback like StdOutCallbackHandler does:
```python
class_name = serialized.get("name", "")
``` | FileCallbackHandler doesn't log entering new chain | https://api.github.com/repos/langchain-ai/langchain/issues/6920/comments | 3 | 2023-06-29T13:15:39Z | 2023-09-28T17:27:55Z | https://github.com/langchain-ai/langchain/issues/6920 | 1,780,800,182 | 6,920 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.200
Python 3.11.4
Windows 10
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Use the handler with a prompt containing characters that aren't represented on the platform charset (e.g. cp1252)
Error appears in console:
```
[manager.py:207] Error in on_text callback: 'charmap' codec can't encode character '\u2010' in position 776: character maps to <undefined>
```
Target logfile only has "entering new chain" and "finished chain" lines.
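For now I'm working around it by subclassing the handler and re-opening the log file with an explicit encoding (rough sketch; the constructor argument names are assumed and may differ between versions):
```python
from typing import Optional, TextIO, cast

from langchain.callbacks import FileCallbackHandler


class Utf8FileCallbackHandler(FileCallbackHandler):
    """Write the log file as UTF-8 regardless of the platform's default charset."""

    def __init__(self, filename: str, mode: str = "a", color: Optional[str] = None) -> None:
        super().__init__(filename, mode=mode, color=color)
        # Re-open with an explicit encoding so characters such as '\u2010'
        # don't trip the default cp1252 codec on Windows.
        self.file.close()
        self.file = cast(TextIO, open(filename, mode, encoding="utf-8"))
```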
### Expected behavior
1. No error
2. Log file has usual output
Fix: in constructor: ```self.file = cast(TextIO, open(file_path, "a", encoding="utf-8"))``` | FileCallbackHandler should open file in UTF-8 encoding | https://api.github.com/repos/langchain-ai/langchain/issues/6919/comments | 4 | 2023-06-29T13:10:04Z | 2023-09-28T17:29:58Z | https://github.com/langchain-ai/langchain/issues/6919 | 1,780,791,122 | 6,919 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I want to create a chain to make query against my database. Also I want to add memory to this chain.
Example of dialogue I want to see:
Query: Who is an owner of website with domain domain.com?
Answer: Boba Bobovich
Query: Tell me his email
Answer: Boba Bobovich's email is [email protected]
I have this code:
```
import os
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain, PromptTemplate
from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory()
db = SQLDatabase.from_uri(os.getenv("DB_URI"))
llm = OpenAI(temperature=0, verbose=True)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, memory=memory)
db_chain.run("Who is owner of the website with domain https://damon.name")
db_chain.run("Tell me his email")
print(memory.load_memory_variables({}))
```
It gives:
```
> Entering new chain...
Who is owner of the website with domain https://damon.name
SQLQuery:SELECT first_name, last_name FROM owners JOIN websites ON owners.id = websites.owner_id WHERE domain = 'https://damon.name' LIMIT 5;
SQLResult: [('Geo', 'Mertz')]
Answer:Geo Mertz is the owner of the website with domain https://damon.name.
> Finished chain.
> Entering new chain...
Tell me his email
SQLQuery:SELECT email FROM owners WHERE first_name = 'Westley' AND last_name = 'Waters'
SQLResult: [('[email protected]',)]
Answer:Westley Waters' email is [email protected].
> Finished chain.
{'history': "Human: Who is owner of the website with domain https://damon.name\nAI: Geo Mertz is the owner of the website with domain https://damon.name.\nHuman: Tell me his email\nAI: Westley Waters' email is [email protected]."}
```
Well, it saves the context to memory, but the chain doesn't use it to give a proper answer (wrong email). How can I fix this?
Also, I don't want to use an agent; I'd like to get this working with a simple chain first. Tell me if that's impossible with a simple chain.
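For reference, the direction I'm experimenting with instead (untested sketch; it assumes db_chain is built without the memory argument and the buffer is managed outside the chain) is to first rewrite follow-up questions into standalone questions using the history, and only then hand the standalone question to the SQL chain:
```python
from langchain import LLMChain, PromptTemplate

condense_prompt = PromptTemplate.from_template(
    "Given the conversation below and a follow-up question, rephrase the follow-up "
    "question to be a standalone question.\n\n"
    "Conversation:\n{history}\n\nFollow-up question: {question}\nStandalone question:"
)
condense_chain = LLMChain(llm=llm, prompt=condense_prompt)


def ask(question: str) -> str:
    history = memory.load_memory_variables({}).get("history", "")
    standalone = condense_chain.predict(history=history, question=question) if history else question
    answer = db_chain.run(standalone)
    memory.save_context({"input": question}, {"output": answer})
    return answer
```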
### Suggestion:
_No response_ | How to add memory to SQLDatabaseChain? | https://api.github.com/repos/langchain-ai/langchain/issues/6918/comments | 43 | 2023-06-29T13:02:50Z | 2024-07-13T03:00:41Z | https://github.com/langchain-ai/langchain/issues/6918 | 1,780,780,200 | 6,918 |
[
"hwchase17",
"langchain"
]
| ### User based chat history
How do I implement user-based chat history management and thread management? Please give any suggestions regarding this.
### Suggestion:
_No response_ | Issue: User based chat history | https://api.github.com/repos/langchain-ai/langchain/issues/6917/comments | 3 | 2023-06-29T12:37:35Z | 2023-11-13T16:08:20Z | https://github.com/langchain-ai/langchain/issues/6917 | 1,780,741,688 | 6,917 |
[
"hwchase17",
"langchain"
]
| ### System Info
OS: `Ubuntu 22.04`
Langchain Version: `0.0.219`
Poetry Version: `1.5.1`
Python Version: `3.10.6`
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I would like to contribute to the project, I followed instructions for [contributing](https://github.com/hwchase17/langchain/blob/8502117f62fc4caa53d504ccc9d4e6a512006e7f/.github/CONTRIBUTING.md) and have installed poetry.
After setting up the virtual env, the command: `poetry install -E all` runs successfully :heavy_check_mark:
However when trying to do a `make test` I get the following error:
```
Traceback (most recent call last):
File "[redacted]/langchain/.venv/bin/make", line 5, in <module>
from scripts.proto import main
ModuleNotFoundError: No module named 'scripts'
```
I have the virtual env activated, and the `make` command resolves to the one installed in the virtual env (see the error above).
I'm assuming I'm missing a dependency, but it's not obvious which one; can you help please?
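For what it's worth, the traceback path suggests the shell is picking up a Python console script named `make` inside the venv instead of GNU make (which dependency installs it is a guess on my part). Running `which -a make` and `head -5 .venv/bin/make` should confirm the shadowing, and calling the system binary directly, e.g. `/usr/bin/make test`, might work as an interim workaround.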
### Expected behavior
`make` should run properly | make - ModuleNotFoundError: No module named 'scripts' | https://api.github.com/repos/langchain-ai/langchain/issues/6915/comments | 5 | 2023-06-29T11:40:35Z | 2023-11-25T16:08:49Z | https://github.com/langchain-ai/langchain/issues/6915 | 1,780,653,651 | 6,915 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain-0.0.219
python-3.9
macos-13.4.1 (22F82)
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
1. use async style request in stream mode
```python
ChatOpenAI(streaming=True).agenerate([...])
```
2. Got an error if openai sdk raises `openai.error.APIError`

3. `acompletion_with_retry` is not effective because the exception is thrown from `openai/api_requestor.py`, line 763, in `_interpret_response_line`, i.e. after the request itself has already been made
in: tenacity/_asyncio.py

### Expected behavior
Catch exception, and do retry. | Stream Mode does not recognize openai.error | https://api.github.com/repos/langchain-ai/langchain/issues/6907/comments | 1 | 2023-06-29T09:36:28Z | 2023-10-06T16:06:39Z | https://github.com/langchain-ai/langchain/issues/6907 | 1,780,466,588 | 6,907 |
[
"hwchase17",
"langchain"
]
| ### System Info
Using Google Colab, Python 3.10.12. Installing latest version of langchain as of today (v0.0.219, I believe).
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When running the following function to create a chain on a collection of text files, I get the following error:
```
def get_company_qa_chain(company_text_dir):
loader = DirectoryLoader(company_text_dir, glob="*.txt")
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size = 1000,
chunk_overlap = 100,
length_function = len,
)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings() # default model="text-embedding-ada-002"
docsearch = Chroma.from_documents(docs, embeddings)
model_name = "gpt-3.5-turbo"
temperature=0
chat = ChatOpenAI(model_name = model_name, temperature=temperature)
template="""You are a helpful utility for answering questions. Communicate in English only."""
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = """Use the following context to answer the question that follows. If you don't know the answer,
say you don't know, don't try to make up an answer:
{context}
Question: {question}
"""
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template,
input_variables=["context", "question"])
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
chain_type_kwargs = {"prompt": chat_prompt}
qa = RetrievalQA.from_chain_type(llm=chat, chain_type="stuff",
retriever=docsearch.as_retriever(),
return_source_documents=True,
chain_type_kwargs=chain_type_kwargs)
return qa
```
```
/usr/local/lib/python3.10/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error()
ParserError: Error tokenizing data. C error: Expected 2 fields in line 4, saw 4
```
I have isolated the error to a particular text file, which has been produced by pymupdf. There are no clearly strange characters in the file, and everything has been cleaned using the following function before writing to text file:
```
def clean_str(text):
text = unicodedata.normalize('NFKC', text)
text = text.replace("{", "").replace("}", "").replace("[", "").replace("]", "")
return text
```
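A workaround I'm considering (not yet verified against this exact file) is to either ask DirectoryLoader to skip failing files or to force plain-text parsing instead of the unstructured/pandas pipeline:
```python
from langchain.document_loaders import DirectoryLoader, TextLoader

# Option 1: keep the default (unstructured-based) loader but skip files that raise.
loader = DirectoryLoader(company_text_dir, glob="*.txt", silent_errors=True)

# Option 2: these are plain .txt files, so bypass unstructured/pandas entirely.
loader = DirectoryLoader(company_text_dir, glob="*.txt", loader_cls=TextLoader)

documents = loader.load()
```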
### Expected behavior
I would expect the code to handle this error and skip the problematic file. I would be also totally happy with a way of cleaning the input files to ensure this doesn't happen again. It was quite a bit of work to hunt down the problem file manually. | "ParserError: Error tokenizing data. C error: Expected 2 fields in line 4, saw 4" | https://api.github.com/repos/langchain-ai/langchain/issues/6906/comments | 3 | 2023-06-29T09:29:43Z | 2023-11-03T16:07:17Z | https://github.com/langchain-ai/langchain/issues/6906 | 1,780,456,921 | 6,906 |
[
"hwchase17",
"langchain"
]
| ### System Info
```python
llm = OpenAI(temperature=0)
chain = LLMChain(llm=llm, prompt=load_prompt("coolapk_prompt.json"))
```
This fails with:
```
config = json.load(f)
UnicodeDecodeError: 'gbk' codec can't decode byte 0xaf in position 108: illegal multibyte sequence
```
### Who can help?
llm = OpenAI(temperature=0)
chain = LLMChain(llm=llm, prompt=load_prompt("coolapk_prompt.json"))
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
llm = OpenAI(temperature=0)
chain = LLMChain(llm=llm, prompt=load_prompt("coolapk_prompt.json"))
### Expected behavior
Add the ability to specify the encoding when loading JSON prompt files (for example, an `encoding` parameter on `load_prompt`).
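Until an encoding option exists, a possible workaround (sketch; it assumes `load_prompt_from_config` is importable in this version) is to read the JSON with an explicit encoding and build the prompt from the parsed config:
```python
import json

from langchain.prompts.loading import load_prompt_from_config

with open("coolapk_prompt.json", encoding="utf-8") as f:
    config = json.load(f)

prompt = load_prompt_from_config(config)
chain = LLMChain(llm=llm, prompt=prompt)
```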
| load_prompt Unable to set encoding for JSON files | https://api.github.com/repos/langchain-ai/langchain/issues/6900/comments | 5 | 2023-06-29T06:48:33Z | 2024-05-01T03:47:10Z | https://github.com/langchain-ai/langchain/issues/6900 | 1,780,233,553 | 6,900 |
[
"hwchase17",
"langchain"
]
| ### System Info
python 3.9
langchain 0.0.199
### Who can help?
@hwchase17 @agola11 @eyu
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The regex such as https://github.com/hwchase17/langchain/blob/6370808d41eda0d056375015fda9284e9f01280c/langchain/agents/mrkl/output_parser.py#L18
will trigger a ReDoS (catastrophic backtracking) when the model outputs something like the string below:
`'Action' + '\n'*n + ':' + '\t'*n + '\t0Input:' + '\t'*n + '\nAction' + '\t'*n + 'Input' + '\t'*n + '\t0Input:ActionInput:'`
full code:
```
import re
n=780
llm_output='Action' + '\n'*n + ':' + '\t'*n + '\t0Input:' + '\t'*n + '\nAction' + '\t'*n + 'Input' + '\t'*n + '\t0Input:ActionInput:'
regex = (
r"Action\s*\d*\s*:[\s]*(.*?)[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
)
match = re.search(regex, llm_output, re.DOTALL)
action = match.group(1).strip()
print(action)
action_input = match.group(2)
```
Running this makes the regex take an extremely long time to match (catastrophic backtracking).
### Expected behavior
```
Protection
There are several things you can do to protect yourself from ReDoS attacks.
1.Look at safer alternative libraries such as Facebook’s [pyre2](https://pypi.org/project/pyre2/), which is a python wrapper around Google’s C++ Regex Library, [re2](https://github.com/google/re2/).
2.Always double check all regex you add to your application and never blindly trust regex patterns you find online.
3.Utilize SAST and fuzzing tools to test your own code, and check out [Ochrona](https://ochrona.dev/) to make sure your dependencies are not vulnerable to ReDoS attacks.
4.If possible, limit the length of your input to avoid longer than necessary strings.
```
from https://medium.com/ochrona/python-dos-prevention-the-redos-attack-7267a8fa2d5c
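Point 4 is easy to apply to this particular parser; a minimal sketch (the limit and helper name are mine, not langchain's):
```python
import re

MAX_OUTPUT_CHARS = 4096  # assumption: legitimate agent outputs are far shorter than this

REGEX = r"Action\s*\d*\s*:[\s]*(.*?)[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"


def safe_search(llm_output: str):
    """Refuse to run the vulnerable pattern on suspiciously long outputs."""
    if len(llm_output) > MAX_OUTPUT_CHARS:
        raise ValueError("LLM output too long to parse safely")
    return re.search(REGEX, llm_output, re.DOTALL)
```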
Also you can check your regex online https://devina.io/redos-checker. | The ReDOS Attack for the regex in langchain | https://api.github.com/repos/langchain-ai/langchain/issues/6898/comments | 1 | 2023-06-29T06:18:22Z | 2023-10-05T16:06:45Z | https://github.com/langchain-ai/langchain/issues/6898 | 1,780,203,422 | 6,898 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.217
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The following code fails:
```python
def create_sqlite_db_file(db_dir):
# Connect to SQLite database (or create it if it doesn't exist)
conn = sqlite3.connect(db_dir)
# Create a cursor
c = conn.cursor()
# Create a dummy table
c.execute('''
CREATE TABLE IF NOT EXISTS employees(
id INTEGER PRIMARY KEY,
name TEXT,
salary REAL,
department TEXT,
position TEXT,
hireDate TEXT);
''')
# Insert dummy data into the table
c.execute('''
INSERT INTO employees (name, salary, department, position, hireDate)
VALUES ('John Doe', 80000, 'IT', 'Engineer', '2023-06-26');
''')
# Commit the transaction
conn.commit()
# Close the connection
conn.close()
def test_log_and_load_sql_database_chain(tmp_path):
# Create the SQLDatabaseChain
db_file_path = tmp_path / "my_database.db"
sqlite_uri = f"sqlite:///{db_file_path}"
llm = OpenAI(temperature=0)
create_sqlite_db_file(db_file_path)
db = SQLDatabase.from_uri(sqlite_uri)
db_chain = SQLDatabaseChain.from_llm(llm, db)
db_chain.save('/path/to/test_chain.yaml')
from langchain.chains import load_chain
loaded_chain = load_chain('/path/to/test_chain.yaml', database=db)
```
Error:
```
def load_llm_from_config(config: dict) -> BaseLLM:
"""Load LLM from Config Dict."""
> if "_type" not in config:
E TypeError: argument of type 'NoneType' is not iterable
```
### Expected behavior
No error should occur. | SQLDatabaseChain cannot be loaded | https://api.github.com/repos/langchain-ai/langchain/issues/6889/comments | 3 | 2023-06-28T21:29:40Z | 2023-12-27T16:06:48Z | https://github.com/langchain-ai/langchain/issues/6889 | 1,779,800,725 | 6,889 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain-0.0.218
Python 3.10.10
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I cannot import MultiQueryRetriever; when I run the import, it fails with:
```
Traceback (most recent call last):
  File "C:\langchain\JETPACK-FLOW-MULTIQUERY.py", line 14, in <module>
    from langchain.retrievers.multi_query import MultiQueryRetriever
  File "C:\Users\roman\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\retrievers\__init__.py", line 21, in <module>
    from langchain.retrievers.self_query.base import SelfQueryRetriever
  File "C:\Users\roman\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\retrievers\self_query\base.py", line 9, in <module>
    from langchain.chains.query_constructor.base import load_query_constructor_chain
  File "C:\Users\roman\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\query_constructor\base.py", line 14, in <module>
    from langchain.chains.query_constructor.parser import get_parser
  File "C:\Users\roman\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\query_constructor\parser.py", line 9, in <module>
    raise ValueError(
ValueError: Lark should be at least version 1.1.5, got 0.12.0
```
The same error was reported here:
https://github.com/hwchase17/langchain/discussions/6068
The solution there was simply to install langchain version 0.0.201, but that does not help me, because MultiQueryRetriever does not exist in that version.
I am using: langchain-0.0.218
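For what it's worth, since the check only asks for Lark >= 1.1.5, upgrading lark in place (for example `pip install --upgrade "lark>=1.1.5"`, then `python -c "import lark; print(lark.__version__)"` to confirm) may be a more direct fix than downgrading langchain, though I haven't verified this against 0.0.218.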
### Expected behavior
Does not crash, when i try to run it. | MultiQueryReceiver does not work | https://api.github.com/repos/langchain-ai/langchain/issues/6885/comments | 0 | 2023-06-28T20:24:26Z | 2023-06-28T20:37:18Z | https://github.com/langchain-ai/langchain/issues/6885 | 1,779,701,054 | 6,885 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Several memory storage options have a "buffer" type which includes a pruning function that allows users to specify a max_tokens_limit which stops token overflow issues. I'm requesting this be included with the VectorStoreRetrieverMemory as well.
### Motivation
Running into token overflow issues is annoying and a pruning option is a simple way to get around this when using vector store based memory.
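To make the request concrete, this is the kind of interface I have in mind (the `max_token_limit` parameter below is purely hypothetical and does not exist today):
```python
from langchain.memory import VectorStoreRetrieverMemory

memory = VectorStoreRetrieverMemory(
    retriever=vectorstore.as_retriever(search_kwargs={"k": 8}),  # vectorstore: any existing store
    max_token_limit=1500,  # hypothetical: prune retrieved history down to this many tokens
)
```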
### Your contribution
I'm happy to attempt a solution on this given I'm able to use my company computer this weekend. Otherwise I would appreciate any support people are able to give. | Include pruning on VectorStoreRetrieverMemory | https://api.github.com/repos/langchain-ai/langchain/issues/6884/comments | 0 | 2023-06-28T20:23:47Z | 2023-06-28T20:49:16Z | https://github.com/langchain-ai/langchain/issues/6884 | 1,779,699,848 | 6,884 |
[
"hwchase17",
"langchain"
]
| ### System Info
Massive client changes as of this pr:
https://github.com/anthropics/anthropic-sdk-python/commit/8d1d6af6527a3ef80b74dc3a466166ab7df057df
Installing the latest anthropic client will cause langchain's Anthropic LLM utilities to fail.
Looks like api_url no longer exists (they've moved to base_url).
```
???
E pydantic.error_wrappers.ValidationError: 1 validation error for ChatAnthropic
E __root__
E __init__() got an unexpected keyword argument 'api_url' (type=type_error)
```
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Install latest version of anthropic.
2. Attempt to run Anthropic LLM and it will produce a runtime failure about api_url which no longer exists in the Anthropic client initializer.
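A possible stopgap until langchain catches up is pinning the pre-rewrite client, e.g. `pip install "anthropic<0.3"` (the exact version boundary is a guess on my part).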
### Expected behavior
Should work with latest release of Anthropic client. | Incompatibility with latest Anthropic Client | https://api.github.com/repos/langchain-ai/langchain/issues/6883/comments | 5 | 2023-06-28T19:50:00Z | 2023-10-17T16:06:25Z | https://github.com/langchain-ai/langchain/issues/6883 | 1,779,642,812 | 6,883 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add a parameter to ConversationalRetrievalChain to skip the condense question prompt procedure.
### Motivation
Currently, when using ConversationalRetrievalChain (with the from_llm() function), we have to run the input through a LLMChain with a default "condense_question_prompt" which condenses the chat history and the input to make a standalone question out of it.
This could be a good idea, but in my test cases (in French) it performs very poorly when switching between topics. It mixes things up and hallucinates a new question which is nowhere near the original question.
The only solution I can think of is to just allow the actual chat history to be sent in the prompt, but we can't skip the condense-question step. For now, a workaround could be to provide a custom condense prompt asking the LLM to do nothing, but that would call the LLM twice for no reason.
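Concretely, what I'd like to be able to write is something along these lines (the `skip_condense_question` flag is purely a proposal and does not exist today):
```python
chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    memory=memory,
    skip_condense_question=True,  # proposed: pass the raw question and chat history straight through
)
```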
### Your contribution
Since I am not sure why this isn't already an option, I'd assume it's for a good reason so I'm asking here before thinking of implementing anything. | Allow skipping "condense_question_prompt" when using ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/6879/comments | 9 | 2023-06-28T16:46:24Z | 2024-03-16T16:07:47Z | https://github.com/langchain-ai/langchain/issues/6879 | 1,779,327,153 | 6,879 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Can someone please create and integrate an in memory vector store based on purely numpy?
I don't like that ChromaDB pulls in too many dependencies when I just want something really simple.
### Motivation
I saw this https://github.com/jdagdelen/hyperDB and it works so this should be in langchain
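To make the idea concrete, a rough sketch of what I mean (plain numpy cosine similarity with an injected embedding function; not a drop-in langchain VectorStore implementation):
```python
import numpy as np


class NumpyVectorStore:
    """Tiny in-memory store: pure numpy, cosine-similarity search."""

    def __init__(self, embed_fn):
        # embed_fn: callable mapping list[str] -> list of embedding vectors
        self.embed_fn = embed_fn
        self.vectors = None  # (n_docs, dim) array of L2-normalised embeddings
        self.texts = []

    def add_texts(self, texts):
        vecs = np.asarray(self.embed_fn(texts), dtype=np.float32)
        vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
        self.vectors = vecs if self.vectors is None else np.vstack([self.vectors, vecs])
        self.texts.extend(texts)

    def similarity_search(self, query, k=4):
        q = np.asarray(self.embed_fn([query])[0], dtype=np.float32)
        q /= np.linalg.norm(q)
        scores = self.vectors @ q  # cosine similarity, since rows are normalised
        top = np.argsort(-scores)[:k]
        return [(self.texts[i], float(scores[i])) for i in top]
```
(`embed_fn` could be, for instance, `OpenAIEmbeddings().embed_documents`.)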
### Your contribution
I dont have the time rn but I want to publish this idea | In memory vector store as python object based on purely numpy | https://api.github.com/repos/langchain-ai/langchain/issues/6877/comments | 1 | 2023-06-28T16:17:22Z | 2023-10-05T16:06:50Z | https://github.com/langchain-ai/langchain/issues/6877 | 1,779,281,098 | 6,877 |
[
"hwchase17",
"langchain"
]
| ### System Info
0.0.218
OSX
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create or find a sitemap with url(s) that run into an infinite redirect loop.
2. Run `SitemapLoader` with that.
3. Observe it crashing
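As an interim workaround (sketch; the sitemap URL and the excluded path are made up, and I'm assuming `filter_urls` accepts regexes as documented), the known-bad URLs can be filtered out before loading, though a real `continue_on_failure` flag would still be preferable:
```python
from langchain.document_loaders import SitemapLoader

loader = SitemapLoader(
    "https://example.com/sitemap.xml",
    filter_urls=[r"^(?!.*redirect-loop).*"],  # keep everything except the looping paths
)
docs = loader.load()
```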
### Expected behavior
The loader to continue on such errors, or have a config flag to say so, like `UnstructuredURLLoader`'s `continue_on_failure`. | I think the SitemapLoader should be as robust with default options as the UnstructuredURLLoader, which has an option continue_on_failure set to true. | https://api.github.com/repos/langchain-ai/langchain/issues/6875/comments | 1 | 2023-06-28T15:16:30Z | 2023-10-05T16:06:55Z | https://github.com/langchain-ai/langchain/issues/6875 | 1,779,179,472 | 6,875 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```python
embeddings = OpenAIEmbeddings(model='text-embedding-ada-002',deployment='XXXXX',chunk_size=1)
db = Chroma.from_documents(texts, embeddings)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True,chat_memory=ChatMessageHistory(messages=[]))
model = ConversationalRetrievalChain.from_llm(llm=AzureOpenAI(model_name='gpt-35-turbo',deployment_name='XXXXX',temperature=0.7,openai_api_base=os.environ['OPENAI_API_BASE'],openai_api_key=os.environ['OPENAI_API_KEY']), retriever=db.as_retriever(), memory=memory)
```
I'm using ConversationalRetrievalChain to query the embeddings, and memory to store the chat history.
I've written a Flask API to fetch results from the model.
I'm trying to store each user's memory in a Redis store, so that chat histories don't get mixed up between users.
For that I'm trying the code below, but I'm unable to store the memory in Redis.
```python
r = redis.Redis(host='redis-1XXX.c3XXX.ap-south-1-1.ec2.cloud.redislabs.com', port=17506, db=0, password='XXXX')
memory = {}
memory_key = f"memory_{employee_code}"
messages = r.get(memory_key)
messages = json.loads(messages)
memory[memory_key] = ConversationBufferMemory(memory_key="chat_history", return_messages=True, chat_memory=ChatMessageHistory(messages=messages))
model = ConversationalRetrievalChain.from_llm(llm=AzureOpenAI(model_name='gpt-35-turbo',deployment_name='XXXXXX',temperature=0.7,openai_api_base=os.environ['OPENAI_API_BASE'],openai_api_key=os.environ['OPENAI_API_KEY']), retriever=db.as_retriever(), memory=memory[memory_key])
res = model.run(query)
messages.append({query: res})
r.set(memory_key, json.dumps(messages))
```
This is not working. Can anyone help me on this?
Thanks in advance.
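One direction I'm considering instead (untested sketch; the class and argument names are taken from the docs, so please correct me if they're off) is to let langchain's own Redis-backed history do the serialization, keyed per employee:
```python
from langchain.memory import ConversationBufferMemory, RedisChatMessageHistory

history = RedisChatMessageHistory(
    session_id=f"memory_{employee_code}",
    url="redis://:XXXX@redis-1XXX.c3XXX.ap-south-1-1.ec2.cloud.redislabs.com:17506/0",
)
memory = ConversationBufferMemory(
    memory_key="chat_history", return_messages=True, chat_memory=history
)
model = ConversationalRetrievalChain.from_llm(
    llm=llm,  # the AzureOpenAI instance from above
    retriever=db.as_retriever(),
    memory=memory,
)
res = model.run(query)  # messages are persisted to Redis automatically
```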
### Suggestion:
_No response_ | Issue: Unable to store user specific chat history in redis. Using ConversationalRetrievalChain along with ConversationBufferMemory | https://api.github.com/repos/langchain-ai/langchain/issues/6872/comments | 5 | 2023-06-28T13:57:42Z | 2024-02-01T09:29:07Z | https://github.com/langchain-ai/langchain/issues/6872 | 1,779,010,819 | 6,872 |
[
"hwchase17",
"langchain"
]
| ### System Info
While trying to run the example from https://python.langchain.com/docs/modules/chains/popular/sqlite, I get the following error:
````
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[42], line 1
----> 1 db_chain.run("What is the expected assisted goals by Erling Haaland")
File ~/.local/lib/python3.10/site-packages/langchain/chains/base.py:273, in Chain.run(self, callbacks, tags, *args, **kwargs)
271 if len(args) != 1:
272 raise ValueError("`run` supports only one positional argument.")
--> 273 return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
275 if kwargs and not args:
276 return self(kwargs, callbacks=callbacks, tags=tags)[_output_key]
File ~/.local/lib/python3.10/site-packages/langchain/chains/base.py:149, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
147 except (KeyboardInterrupt, Exception) as e:
148 run_manager.on_chain_error(e)
--> 149 raise e
150 run_manager.on_chain_end(outputs)
151 final_outputs: Dict[str, Any] = self.prep_outputs(
152 inputs, outputs, return_only_outputs
153 )
File ~/.local/lib/python3.10/site-packages/langchain/chains/base.py:143, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
137 run_manager = callback_manager.on_chain_start(
138 dumpd(self),
139 inputs,
140 )
141 try:
142 outputs = (
--> 143 self._call(inputs, run_manager=run_manager)
144 if new_arg_supported
145 else self._call(inputs)
146 )
147 except (KeyboardInterrupt, Exception) as e:
148 run_manager.on_chain_error(e)
File ~/.local/lib/python3.10/site-packages/langchain/chains/sql_database/base.py:105, in SQLDatabaseChain._call(self, inputs, run_manager)
103 # If not present, then defaults to None which is all tables.
104 table_names_to_use = inputs.get("table_names_to_use")
--> 105 table_info = self.database.get_table_info(table_names=table_names_to_use)
106 llm_inputs = {
107 "input": input_text,
108 "top_k": str(self.top_k),
(...)
111 "stop": ["\nSQLResult:"],
112 }
113 intermediate_steps: List = []
File ~/.local/lib/python3.10/site-packages/langchain/sql_database.py:289, in SQLDatabase.get_table_info(self, table_names)
287 table_info += f"\n{self._get_table_indexes(table)}\n"
288 if self._sample_rows_in_table_info:
--> 289 table_info += f"\n{self._get_sample_rows(table)}\n"
290 if has_extra_info:
291 table_info += "*/"
File ~/.local/lib/python3.10/site-packages/langchain/sql_database.py:311, in SQLDatabase._get_sample_rows(self, table)
308 try:
309 # get the sample rows
310 with self._engine.connect() as connection:
--> 311 sample_rows_result = connection.execute(command) # type: ignore
312 # shorten values in the sample rows
313 sample_rows = list(
314 map(lambda ls: [str(i)[:100] for i in ls], sample_rows_result)
315 )
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/engine/base.py:1413, in Connection.execute(self, statement, parameters, execution_options)
1411 raise exc.ObjectNotExecutableError(statement) from err
1412 else:
-> 1413 return meth(
1414 self,
1415 distilled_parameters,
1416 execution_options or NO_OPTIONS,
1417 )
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/sql/elements.py:483, in ClauseElement._execute_on_connection(self, connection, distilled_params, execution_options)
481 if TYPE_CHECKING:
482 assert isinstance(self, Executable)
--> 483 return connection._execute_clauseelement(
484 self, distilled_params, execution_options
485 )
486 else:
487 raise exc.ObjectNotExecutableError(self)
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/engine/base.py:1629, in Connection._execute_clauseelement(self, elem, distilled_parameters, execution_options)
1621 schema_translate_map = execution_options.get(
1622 "schema_translate_map", None
1623 )
1625 compiled_cache: Optional[CompiledCacheType] = execution_options.get(
1626 "compiled_cache", self.engine._compiled_cache
1627 )
-> 1629 compiled_sql, extracted_params, cache_hit = elem._compile_w_cache(
1630 dialect=dialect,
1631 compiled_cache=compiled_cache,
1632 column_keys=keys,
1633 for_executemany=for_executemany,
1634 schema_translate_map=schema_translate_map,
1635 linting=self.dialect.compiler_linting | compiler.WARN_LINTING,
1636 )
1637 ret = self._execute_context(
1638 dialect,
1639 dialect.execution_ctx_cls._init_compiled,
(...)
1647 cache_hit=cache_hit,
1648 )
1649 if has_events:
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/sql/elements.py:684, in ClauseElement._compile_w_cache(self, dialect, compiled_cache, column_keys, for_executemany, schema_translate_map, **kw)
682 else:
683 extracted_params = None
--> 684 compiled_sql = self._compiler(
685 dialect,
686 cache_key=elem_cache_key,
687 column_keys=column_keys,
688 for_executemany=for_executemany,
689 schema_translate_map=schema_translate_map,
690 **kw,
691 )
693 if not dialect._supports_statement_cache:
694 cache_hit = dialect.NO_DIALECT_SUPPORT
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/sql/elements.py:288, in CompilerElement._compiler(self, dialect, **kw)
286 if TYPE_CHECKING:
287 assert isinstance(self, ClauseElement)
--> 288 return dialect.statement_compiler(dialect, self, **kw)
File ~/.local/lib/python3.10/site-packages/pybigquery/sqlalchemy_bigquery.py:137, in BigQueryCompiler.__init__(self, dialect, statement, column_keys, inline, **kwargs)
135 if isinstance(statement, Column):
136 kwargs['compile_kwargs'] = util.immutabledict({'include_table': False})
--> 137 super(BigQueryCompiler, self).__init__(dialect, statement, column_keys, inline, **kwargs)
TypeError: SQLCompiler.__init__() got multiple values for argument 'cache_key'
````
I use a VertexAI model (MODEL_TEXT_BISON_001) as the LLM.
Some essential library versions:
langchain == 0.0.206
SQLAlchemy == 2.0.11
ipython == 8.12.1
python == 3.10.10
google-cloud-bigquery == 3.10.0
google-cloud-bigquery-storage == 2.16.2
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
db = SQLDatabase.from_uri(f"bigquery://{project_id}/{dataset}")
toolkit = SQLDatabaseToolkit(llm=llm, db=db)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
db_chain.run("What is the sport that generates highest revenue")
```
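For what it's worth, the traceback ends inside `pybigquery`, which as far as I can tell is the deprecated BigQuery dialect and does not support SQLAlchemy 2.x. A possible fix (unverified) is switching to the maintained `sqlalchemy-bigquery` dialect (`pip uninstall pybigquery`, then `pip install sqlalchemy-bigquery`), or pinning `sqlalchemy<2.0` so the old dialect keeps working.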
### Expected behavior
When running the db_chain, I expect to get an answer from the bigquery database | SQLDatabaseChain - Questions/error | https://api.github.com/repos/langchain-ai/langchain/issues/6870/comments | 19 | 2023-06-28T13:24:59Z | 2024-08-02T23:29:36Z | https://github.com/langchain-ai/langchain/issues/6870 | 1,778,943,766 | 6,870 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
How can I track token usage for OpenAIEmbeddings? It seems that get_openai_callback always returns 0.
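In the meantime I'm thinking of counting the tokens myself with tiktoken (sketch, untested), since embedding calls don't seem to flow through the callback:
```python
import tiktoken

enc = tiktoken.encoding_for_model("text-embedding-ada-002")
texts = ["chunk one", "chunk two"]  # whatever you pass to embed_documents
total_tokens = sum(len(enc.encode(t)) for t in texts)
print(total_tokens)  # multiply by the per-token embedding price to estimate cost
```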
### Suggestion:
_No response_ | how to track token usage for OpenAIEmbeddings? it seems that get_openai_callback always return 0. | https://api.github.com/repos/langchain-ai/langchain/issues/6869/comments | 3 | 2023-06-28T13:14:02Z | 2023-10-06T16:06:44Z | https://github.com/langchain-ai/langchain/issues/6869 | 1,778,919,115 | 6,869 |