issue_owner_repo (list, lengths 2-2) | issue_body (string, lengths 0-261k, ⌀ nullable) | issue_title (string, lengths 1-925) | issue_comments_url (string, lengths 56-81) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, lengths 20-20) | issue_updated_at (string, lengths 20-20) | issue_html_url (string, lengths 37-62) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### System Info
azure-search-documents==11.4.0b8, langchain
### Who can help?
@hwchase17
@agola11
@dosu-bot
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The fields in the Azure Search index and their content are as below:
Code:
```python
memory_vector_store: AzureSearch = AzureSearch(
    azure_search_endpoint=vector_store_address,
    azure_search_key=vector_store_password,
    index_name=memory_index_name,
    embedding_function=embeddings.embed_query
)

user_id = "dtiw"
session_id = "ZjBlNmM4M2UtOThkYS00YjgyLThhOTAtNTQ0YTU1MTA3NmVm"

relevant_docs = memory_vector_store.similarity_search(
    query=query,
    k=4,
    search_type="similarity",
    filters=f"user_id eq '{user_id}' and session_id eq '{session_id}'"
)

if relevant_docs:
    prev_history = "\n".join([doc.page_content for doc in relevant_docs])
else:
    logging.info("relevant docs not found")
    prev_history = ""

logging.info(f"the relevant docs are {relevant_docs}")
logging.info(f"the previous history is {prev_history}")
```
### Expected behavior
expected answer:
[Document(page_content='User: who are you?\nAssistant: I am an AI assistant here to help you with any company-related questions you may have. How can I assist you today?', metadata={'id': 'ZHRpd2FyaUBoZW5kcmlja3Nvbi1pbnRsLmNvbWYwZTZjODNlLTk4ZGEtNGI4Mi04YTkwLTU0NGE1NTEwNzZlZjIwMjMxMDI2MTk0MzI4', 'session_id': 'ZjBlNmM4M2UtOThkYS00YjgyLThhOTAtNTQ0YTU1MTA3NmVm', 'user_id': 'dtiw', '@search.score': 0.78985536, '@search.reranker_score': None, '@search.highlights': None, '@search.captions': None}),
Document(page_content='User: Hi, whats up?\nAssistant: Please stick to the company-related questions. How can I assist you with any company-related queries?', metadata={'id': 'ZHRpd2FyaUBoZW5kcmlja3Nvbi1pbnRsLmNvbWYwZTZjODNlLTk4ZGEtNGI4Mi04YTkwLTU0NGE1NTEwNzZlZjIwMjMxMDI2MTk0MjU5', 'session_id': 'ZjBlNmM4M2UtOThkYS00YjgyLThhOTAtNTQ0YTU1MTA3NmVm', 'user_id': 'dtiw', '@search.score': 0.7848022, '@search.reranker_score': None, '@search.highlights': None, '@search.captions': None})]
'User: who are you?\nAssistant: I am an AI assistant here to help you with any company-related questions you may have. How can I assist you today?'
'User: Hi, whats up?\nAssistant: Please stick to the company-related questions. How can I assist you with any company-related queries?'
Given answer:
the relevant docs are <iterator object azure.core.paging.ItemPaged at 0x7d82ae149b10>
the previous history is | filter query within vector_store.similarity_search() is not working as expected. | https://api.github.com/repos/langchain-ai/langchain/issues/12366/comments | 10 | 2023-10-26T20:16:56Z | 2024-03-25T09:57:23Z | https://github.com/langchain-ai/langchain/issues/12366 | 1,964,306,930 | 12,366 |
[
"hwchase17",
"langchain"
]
| ### System Info
`langchain` = 0.0.324
`pydantic` = 2.4.2
`python` = 3.11
`platform` = macos
In my project I have a class `MyChat` that inherits from `BaseChatModel` and further customizes/extends it. Adding extra fields works fine with both pydantic v1 and v2; however, adding a field and/or model validator fails with pydantic v2, raising a `TypeError`:
```
TypeError: cannot pickle 'classmethod' object
```
Full error:
```
TypeError Traceback (most recent call last)
.../test.ipynb Cell 1 line 1
9 from pydantic import ConfigDict, Extra, Field, field_validator, model_validator
11 logger = logging.getLogger(__name__)
---> 13 class MyChat(BaseChatModel):
14 client: Any #: :meta private:
15 model_name: str = Field(default="gpt-35-turbo", alias="model")
File .../.venv/lib/python3.11/site-packages/pydantic/v1/main.py:221, in ModelMetaclass.__new__(mcs, name, bases, namespace, **kwargs)
219 elif is_valid_field(var_name) and var_name not in annotations and can_be_changed:
220 validate_field_name(bases, var_name)
--> 221 inferred = ModelField.infer(
222 name=var_name,
223 value=value,
224 annotation=annotations.get(var_name, Undefined),
225 class_validators=vg.get_validators(var_name),
226 config=config,
227 )
228 if var_name in fields:
229 if lenient_issubclass(inferred.type_, fields[var_name].type_):
File .../.venv/lib/python3.11/site-packages/pydantic/v1/fields.py:506, in ModelField.infer(cls, name, value, annotation, class_validators, config)
503 required = False
504 annotation = get_annotation_from_field_info(annotation, field_info, name, config.validate_assignment)
--> 506 return cls(
507 name=name,
508 type_=annotation,
509 alias=field_info.alias,
510 class_validators=class_validators,
511 default=value,
512 default_factory=field_info.default_factory,
513 required=required,
514 model_config=config,
515 field_info=field_info,
516 )
File .../.venv/lib/python3.11/site-packages/pydantic/v1/fields.py:436, in ModelField.__init__(self, name, type_, class_validators, model_config, default, default_factory, required, final, alias, field_info)
434 self.shape: int = SHAPE_SINGLETON
435 self.model_config.prepare_field(self)
--> 436 self.prepare()
File .../.venv/lib/python3.11/site-packages/pydantic/v1/fields.py:546, in ModelField.prepare(self)
539 def prepare(self) -> None:
540 """
541 Prepare the field but inspecting self.default, self.type_ etc.
542
543 Note: this method is **not** idempotent (because _type_analysis is not idempotent),
544 e.g. calling it it multiple times may modify the field and configure it incorrectly.
545 """
--> 546 self._set_default_and_type()
547 if self.type_.__class__ is ForwardRef or self.type_.__class__ is DeferredType:
548 # self.type_ is currently a ForwardRef and there's nothing we can do now,
549 # user will need to call model.update_forward_refs()
550 return
...
--> 161 rv = reductor(4)
162 else:
163 reductor = getattr(x, "__reduce__", None)
TypeError: cannot pickle 'classmethod' object
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Install pydantic v2
2. Install langchain
3. Try running the following, for example in a notebook:
```
import logging
from typing import Any, ClassVar

from langchain.chat_models.base import BaseChatModel
from pydantic import ConfigDict, Extra, Field, field_validator, model_validator

logger = logging.getLogger(__name__)


class MyChat(BaseChatModel):
    client: Any  #: :meta private:
    model_name: str = Field(default="gpt-35-turbo", alias="model")
    temperature: float = 0.0
    model_kwargs: dict[str, Any] = Field(default_factory=dict)
    request_timeout: float | tuple[float, float] | None = None
    max_retries: int = 6
    max_tokens: int | None = 2024
    gpu: bool = False

    model_config: ClassVar[ConfigDict] = ConfigDict(
        populate_by_name=True, strict=False, extra="ignore"
    )

    @property
    def _llm_type(self) -> str:
        return "my-chat"

    @field_validator("max_tokens", mode="before")
    def check_max_tokens(cls, v: int, values: dict[str, Any]) -> int:
        """Validate max_tokens."""
        if v is not None and v < 1:
            raise ValueError("max_tokens must be greater than 0.")
        return v
```
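A possible workaround (a hedged sketch and an editing-time assumption, not something stated in this issue): the traceback above goes through `pydantic/v1/main.py`, i.e. `BaseChatModel` is still a pydantic v1 model in this langchain version, so declaring the extra fields and validators with the v1-style API avoids mixing v1 and v2 machinery:

```python
# Hedged workaround sketch (not an official fix).
from typing import Any, Optional

from langchain.chat_models.base import BaseChatModel
from langchain.pydantic_v1 import Field, validator  # v1-compatible shims re-exported by langchain


class MyChatV1(BaseChatModel):  # abstract methods such as _generate still need implementing before use
    client: Any = None
    model_name: str = Field(default="gpt-35-turbo", alias="model")
    max_tokens: Optional[int] = 2024

    @validator("max_tokens")
    def check_max_tokens(cls, v: Optional[int]) -> Optional[int]:
        if v is not None and v < 1:
            raise ValueError("max_tokens must be greater than 0.")
        return v

    @property
    def _llm_type(self) -> str:
        return "my-chat"
```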
### Expected behavior
There should be no error. | Unable to extend BaseChatModel with pydantic 2 | https://api.github.com/repos/langchain-ai/langchain/issues/12358/comments | 3 | 2023-10-26T18:35:45Z | 2023-10-26T19:36:52Z | https://github.com/langchain-ai/langchain/issues/12358 | 1,964,138,560 | 12,358 |
[
"hwchase17",
"langchain"
]
| ### System Info
For this code:
```python
from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from decouple import config

oak = openai_secret_key  # withheld for the sake of security
openai_api_key = oak

llm = ChatOpenAI(temperature=0, model="babbage-002", openai_api_key=openai_api_key)
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)
db = SQLDatabase.from_uri("sqlite:///maids.db")
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)

tools = [
    Tool(
        name="Yellowsense_Database",
        func=db_chain.run,
        description="useful for when you need to answer questions about maids/cooks/nannies."
    )
]

agent = initialize_agent(
    tools=tools, llm=llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True)

user_input = input(
    """You can now chat with your database.
Please enter your question or type 'quit' to exit: """
)

agent.run(user_input)
```
I am getting this error:
```
InvalidRequestError: This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?
```
Which models can I use with a chat model class such as `ChatOpenAI`?
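(An aside added while editing, not part of the original report: `babbage-002` is a completions-only model, so it cannot be used with `ChatOpenAI`, which calls the chat-completions endpoint. A minimal sketch of the two options, reusing the `openai_api_key` variable defined above:)

```python
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI

# Option 1: keep ChatOpenAI, but pick a chat-capable model such as gpt-3.5-turbo or gpt-4.
chat_llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo", openai_api_key=openai_api_key)

# Option 2: keep a completion-only model such as babbage-002, but use the OpenAI LLM class,
# which calls the v1/completions endpoint instead of v1/chat/completions.
completion_llm = OpenAI(temperature=0, model="babbage-002", openai_api_key=openai_api_key)
```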
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is the code I am using:
```python
from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from decouple import config

a = "sk-wLQRwTr8RS"
b = "Z1ph7bIlovT3B"
c = "lbkFJe49s56n"
d = "KfeWgts8MpMQC"
oak = a + b + c + d
openai_api_key = oak

# create LLM model
llm = ChatOpenAI(temperature=0, model="babbage-002", openai_api_key=openai_api_key)

# create an LLM math tool
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)

# connect to our database
db = SQLDatabase.from_uri("sqlite:///maids.db")

# create the database chain
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)

tools = [
    Tool(
        name="Maid_Database",
        func=db_chain.run,
        description="useful for when you need to answer questions about maids/cooks/nannies."
    )
]

# creating the agent
agent = initialize_agent(
    tools=tools, llm=llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True)

user_input = input(
    """You can now chat with your database.
Please enter your question or type 'quit' to exit: """
)

# ask the LLM a question
agent.run(user_input)
```
### Expected behavior
I should be getting the answer using the model I selected | InvalidRequestError: This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions? | https://api.github.com/repos/langchain-ai/langchain/issues/12351/comments | 2 | 2023-10-26T16:32:02Z | 2024-02-06T16:12:46Z | https://github.com/langchain-ai/langchain/issues/12351 | 1,963,933,380 | 12,351 |
[
"hwchase17",
"langchain"
]
| ### Feature request
LangChain v0.0.323
Python 3.10.12
turbo-3.5
I use JsonKeyOutputFunctionsParser to parse JSON output:
```
extraction_functions = [convert_pydantic_to_openai_function(Information)]
extraction_model = llm.bind(functions=extraction_functions, function_call={"name": "Information"})
prompt = ChatPromptTemplate.from_messages([
("system", "Extract the relevant information, if not explicitly provided do not guess. Extract partial info"),
("human", "{input}")
])
extraction_chain = prompt | extraction_model | JsonKeyOutputFunctionsParser(key_name="people")
```
For some texts (in my tests about 1 in 500) the output is invalid JSON, because of a misplaced comma or because the JSON is wrapped in extra text, so it can't be parsed by json.loads.
It seems we could use OutputFixingParser, but in most cases there is no need to call the LLM again - we can just fix it with a regex.
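(For reference, a hedged sketch of that `OutputFixingParser` route - an editing-time assumption about how one would wire it, not code from this issue; it is exactly the extra LLM call this request tries to avoid:)

```python
from typing import List

from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import OutputFixingParser, PydanticOutputParser
from langchain.pydantic_v1 import BaseModel

class People(BaseModel):
    people: List[dict]

fixing_parser = OutputFixingParser.from_llm(
    parser=PydanticOutputParser(pydantic_object=People),
    llm=ChatOpenAI(temperature=0),
)
# fixing_parser.parse('{"people": [{"name": "Joe",},]}')  # triggers one extra LLM call to repair the JSON
```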
I did special code to fix such JSONs:
```
class JsonFixOutputFunctionsParser(JsonOutputFunctionsParser):
    """Parse an output as the Json object."""

    def get_fixed_json(self, text: str) -> str:
        """Fix LLM json"""
        text = re.sub(r'",\s*}', '"}', text)
        text = re.sub(r"},\s*]", "}]", text)
        text = re.sub(r"}\s*{", "},{", text)
        open_bracket = min(text.find('['), text.find('{'))
        if open_bracket == -1:
            return text
        close_bracket = max(text.rfind(']'), text.rfind('}'))
        if close_bracket == -1:
            return text
        return text[open_bracket:close_bracket+1]

    def parse_result(self, result: List[Generation], *, partial: bool = False) -> Any:
        .................
        fixed_json = self.get_fixed_json(str(function_call["arguments"]))
        return json.loads(fixed_json)
```
Now it works without error:
`extraction_chain_fixed = prompt | extraction_model | JsonFixOutputFunctionsParser()`
Code example with this bug and fix: https://colab.research.google.com/drive/1LiT6-ljO_g7xn4Y8qaXKZaUlcpVdEEGI?usp=sharing
### Motivation
With get_fixed_json we can avoid an extra LLM call and quickly fix most malformed JSON outputs.
I tested it on 9,000 runs and it fixed every error I encountered.
### Your contribution
Feel free to use the get_fixed_json function wherever JSON needs to be parsed, not only for function calls. | Fix rrror "Could not parse function call data" for JsonKeyOutputFunctionsParser (and other Json parsers) without additional LLM call | https://api.github.com/repos/langchain-ai/langchain/issues/12348/comments | 10 | 2023-10-26T15:34:55Z | 2024-02-14T16:08:49Z | https://github.com/langchain-ai/langchain/issues/12348 | 1,963,840,031 | 12,348 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Currently, there's only a specific output type for the evaluate_strings function in langchain.
{
reasoning: "blah blah"
score: 0 or 1
value: "Y" or "N"
}
Ideally users should be able to override this with the type of request they would want.
### Motivation
It really ties you down when you have to go with the same type of output parser.
### Your contribution
I can work on a PR for the change. | Langchain evaluate_strings should allow custom output parsing | https://api.github.com/repos/langchain-ai/langchain/issues/12343/comments | 2 | 2023-10-26T14:54:17Z | 2024-02-08T16:13:25Z | https://github.com/langchain-ai/langchain/issues/12343 | 1,963,761,221 | 12,343 |
[
"hwchase17",
"langchain"
]
| ### System Info
aws
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
here's my PROMPT and code:
```python
from langchain.prompts.chat import ChatPromptTemplate

updated_prompt = ChatPromptTemplate.from_messages(
    [
        ("system",
         """
You are a knowledgeable AI assistant specializing in extracting information from the 'inquiry' table in the MySQL Database.
Your primary task is to perform a single query on the schema of the 'inquiry' table and table and retrieve the data using SQL.
When formulating SQL queries, keep the following context in mind:
- Filter records based on exact column value matches.
- If the user inquires about the Status of the inquiry fetch all these columns: status, name, and time values, and inform the user about these specific values.
- Limit query results to a maximum of 3 unless the user specifies otherwise.
- Only query necessary columns.
- Avoid querying for non-existent columns.
- Place the 'ORDER BY' clause after 'WHERE.'
- Do not add a semicolon at the end of the SQL.
If the query results in an empty set, respond with "information not found"
Use this format:
Question: The user's query
Thought: Your thought process
Action: SQL Query
Action Input: SQL query
Observation: Query results
... (repeat for multiple queries)
Thought: Summarize what you've learned
Final Answer: Provide the final answer
Begin!
"""),
        ("user", "{question}\n ai: "),
    ]
)

llm = ChatOpenAI(model=os.getenv("OPENAI_CHAT_MODEL"), temperature=0)  # best result
sql_toolkit = SQLDatabaseToolkit(db=db, llm=llm)
sql_toolkit.get_tools()

sqldb_agent = create_sql_agent(
    llm=llm,
    toolkit=sql_toolkit,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,
)

sqldb_agent.run(updated_prompt.format(
    question="What is the status of inquiry 123?"
))
```
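(One knob worth noting, added as a hedged suggestion while editing rather than something from the original report: most of the prompt tokens in a SQL-toolkit call typically come from the table schemas and sample rows injected into every request, so restricting what `SQLDatabase` exposes can shrink the prompt considerably. A minimal sketch, assuming the table of interest is `inquiry`; the connection string below is a placeholder:)

```python
from langchain.utilities import SQLDatabase

# Expose only the table the agent actually needs, and skip the sample rows that
# are normally appended to the schema description.
db = SQLDatabase.from_uri(
    "mysql+pymysql://user:password@host/dbname",   # hypothetical connection string
    include_tables=["inquiry"],                    # only this table's schema goes into the prompt
    sample_rows_in_table_info=0,                   # 0 = no example rows in the prompt
)
```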
### Expected behavior
This is my current cost
response: The inquiry 123 is Completed on August 21, 2020, at 12 PM.
Total Tokens: 18566
Prompt Tokens: 18349
Completion Tokens: 217
Total Cost (USD): $0.055915
I want to reduce the cost to less than $0.01.
Any suggestions will help. | i am trying to optimize the cost of SQLDatabaseToolkit calls | https://api.github.com/repos/langchain-ai/langchain/issues/12341/comments | 4 | 2023-10-26T14:49:05Z | 2024-02-10T16:09:37Z | https://github.com/langchain-ai/langchain/issues/12341 | 1,963,750,813 | 12,341 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am developing an app that uses the streaming functionality in Python 3.10. I've written my own generator and callback for this purpose. However, I've hit an issue when setting llm.streaming to true and using a callback: the OpenAI token counter appears to malfunction, reporting a token count of 0 and a cost of $0.00, which is not the expected behavior.
### Who can help?
@agola11
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
def arun(self, g, chat_history, query: str, config: dict, agent_id: str):
    try:
        llm = ChatOpenAI(
            temperature=0,
            verbose=False,
            model='gpt-4',
            streaming=True,
            callbacks=[ChainStreamHandler(g)]
        )
        with get_openai_callback() as cb:
            custom_agent = customAgent.run(g, llm, chat_history, query, config, agent_id)
            print(cb)
            return custom_agent
    finally:
        g.close()
```
```
Tokens Used: 0
    Prompt Tokens: 0
    Completion Tokens: 0
Successful Requests: 0
Total Cost (USD): $0.0
```
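(A hedged workaround sketch, added while editing and based on the assumption that streamed chat-completions responses did not include a usage block at the time, so `get_openai_callback` has nothing to count: the completion tokens can instead be counted locally with `tiktoken` inside a callback handler.)

```python
import tiktoken
from langchain.callbacks.base import BaseCallbackHandler

class StreamingTokenCounter(BaseCallbackHandler):
    """Counts completion tokens as they stream in (prompt tokens would need separate counting)."""

    def __init__(self, model_name: str = "gpt-4"):
        self.encoding = tiktoken.encoding_for_model(model_name)
        self.completion_tokens = 0

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.completion_tokens += len(self.encoding.encode(token))

counter = StreamingTokenCounter()
# llm = ChatOpenAI(model='gpt-4', streaming=True, callbacks=[ChainStreamHandler(g), counter])
# ... after the run: counter.completion_tokens holds the streamed-token count
```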
### Expected behavior
I aim to have the tokenizer working correctly to gauge the token usage and its corresponding cost. Any guidance or suggestions to rectify this problem would be greatly appreciated | Issue with OpenAI tokenizer not functioning as expected in streaming mode | https://api.github.com/repos/langchain-ai/langchain/issues/12339/comments | 3 | 2023-10-26T13:26:24Z | 2024-04-23T17:02:28Z | https://github.com/langchain-ai/langchain/issues/12339 | 1,963,570,897 | 12,339 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Since version `v0.0.299` `LangChain` depends on `anyio = "<4.0"`.
### Suggestion:
Could you please relax the `anyio` dependency to support a wider range of versions `<5.0` as `4.0.0` has been out since August? | Issue: Update `anyio` dependency | https://api.github.com/repos/langchain-ai/langchain/issues/12337/comments | 2 | 2023-10-26T12:48:24Z | 2024-02-06T16:13:01Z | https://github.com/langchain-ai/langchain/issues/12337 | 1,963,492,115 | 12,337 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.275
Ubuntu 20.04.6 LTS
llama-2-7b-chat.ggmlv3.q4_0.bin
llama-cpp-python 0.1.78
I am getting a validation error when trying to use the document summary chain. It is giving an error "str type expected". I am passing Document objects to the model. I have checked the type of all page contents and they are indeed strings.
**Error**
```bash
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
/home/waleedalfaris/localGPT/notebooks/doc_summary.ipynb Cell 9 line 5
      3 docs = loader.load_and_split()
      4 chain = load_summarize_chain(llm, chain_type='map_reduce')
----> 5 summary = chain.run(docs[:1])
      6 print(summary)
File ~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/base.py:475, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
473 if len(args) != 1:
474 raise ValueError("`run` supports only one positional argument.")
--> 475 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
476 _output_key
477 ]
479 if kwargs and not args:
480 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
481 _output_key
482 ]
File ~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/base.py:282, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
280 except (KeyboardInterrupt, Exception) as e:
281 run_manager.on_chain_error(e)
--> 282 raise e
283 run_manager.on_chain_end(outputs)
284 final_outputs: Dict[str, Any] = self.prep_outputs(
285 inputs, outputs, return_only_outputs
286 )
File ~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/base.py:276, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
270 run_manager = callback_manager.on_chain_start(
271 dumpd(self),
272 inputs,
273 )
274 try:
275 outputs = (
--> 276 self._call(inputs, run_manager=run_manager)
277 if new_arg_supported
278 else self._call(inputs)
279 )
280 except (KeyboardInterrupt, Exception) as e:
281 run_manager.on_chain_error(e)
File ~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py:105, in BaseCombineDocumentsChain._call(self, inputs, run_manager)
103 # Other keys are assumed to be needed for LLM prediction
104 other_keys = {k: v for k, v in inputs.items() if k != self.input_key}
--> 105 output, extra_return_dict = self.combine_docs(
106 docs, callbacks=_run_manager.get_child(), **other_keys
107 )
108 extra_return_dict[self.output_key] = output
109 return extra_return_dict
File ~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/combine_documents/map_reduce.py:209, in MapReduceDocumentsChain.combine_docs(self, docs, token_max, callbacks, **kwargs)
197 def combine_docs(
198 self,
199 docs: List[Document],
(...)
202 **kwargs: Any,
203 ) -> Tuple[str, dict]:
204 """Combine documents in a map reduce manner.
205
206 Combine by mapping first chain over all documents, then reducing the results.
207 This reducing can be done recursively if needed (if there are many documents).
208 """
--> 209 map_results = self.llm_chain.apply(
210 # FYI - this is parallelized and so it is fast.
211 [{self.document_variable_name: d.page_content, **kwargs} for d in docs],
212 callbacks=callbacks,
213 )
214 question_result_key = self.llm_chain.output_key
215 result_docs = [
216 Document(page_content=r[question_result_key], metadata=docs[i].metadata)
217 # This uses metadata from the docs, and the textual results from `results`
218 for i, r in enumerate(map_results)
219 ]
File ~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/llm.py:189, in LLMChain.apply(self, input_list, callbacks)
187 except (KeyboardInterrupt, Exception) as e:
188 run_manager.on_chain_error(e)
--> 189 raise e
190 outputs = self.create_outputs(response)
191 run_manager.on_chain_end({"outputs": outputs})
File ~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/llm.py:186, in LLMChain.apply(self, input_list, callbacks)
181 run_manager = callback_manager.on_chain_start(
182 dumpd(self),
183 {"input_list": input_list},
184 )
185 try:
--> 186 response = self.generate(input_list, run_manager=run_manager)
187 except (KeyboardInterrupt, Exception) as e:
188 run_manager.on_chain_error(e)
File ~/localGPT/venv/lib/python3.10/site-packages/langchain/chains/llm.py:101, in LLMChain.generate(self, input_list, run_manager)
99 """Generate LLM result from inputs."""
100 prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
--> 101 return self.llm.generate_prompt(
102 prompts,
103 stop,
104 callbacks=run_manager.get_child() if run_manager else None,
105 **self.llm_kwargs,
106 )
File ~/localGPT/venv/lib/python3.10/site-packages/langchain/llms/base.py:467, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
459 def generate_prompt(
460 self,
461 prompts: List[PromptValue],
(...)
464 **kwargs: Any,
465 ) -> LLMResult:
466 prompt_strings = [p.to_string() for p in prompts]
--> 467 return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File ~/localGPT/venv/lib/python3.10/site-packages/langchain/llms/base.py:602, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, **kwargs)
593 raise ValueError(
594 "Asked to cache, but no cache found at `langchain.cache`."
595 )
596 run_managers = [
597 callback_manager.on_llm_start(
598 dumpd(self), [prompt], invocation_params=params, options=options
599 )[0]
600 for callback_manager, prompt in zip(callback_managers, prompts)
601 ]
--> 602 output = self._generate_helper(
603 prompts, stop, run_managers, bool(new_arg_supported), **kwargs
604 )
605 return output
606 if len(missing_prompts) > 0:
File ~/localGPT/venv/lib/python3.10/site-packages/langchain/llms/base.py:504, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
502 for run_manager in run_managers:
503 run_manager.on_llm_error(e)
--> 504 raise e
505 flattened_outputs = output.flatten()
506 for manager, flattened_output in zip(run_managers, flattened_outputs):
File ~/localGPT/venv/lib/python3.10/site-packages/langchain/llms/base.py:491, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
481 def _generate_helper(
482 self,
483 prompts: List[str],
(...)
487 **kwargs: Any,
488 ) -> LLMResult:
489 try:
490 output = (
--> 491 self._generate(
492 prompts,
493 stop=stop,
494 # TODO: support multiple run managers
495 run_manager=run_managers[0] if run_managers else None,
496 **kwargs,
497 )
498 if new_arg_supported
499 else self._generate(prompts, stop=stop)
500 )
501 except (KeyboardInterrupt, Exception) as e:
502 for run_manager in run_managers:
File ~/localGPT/venv/lib/python3.10/site-packages/langchain/llms/base.py:985, in LLM._generate(self, prompts, stop, run_manager, **kwargs)
979 for prompt in prompts:
980 text = (
981 self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
982 if new_arg_supported
983 else self._call(prompt, stop=stop, **kwargs)
984 )
--> 985 generations.append([Generation(text=text)])
986 return LLMResult(generations=generations)
File ~/localGPT/venv/lib/python3.10/site-packages/langchain/load/serializable.py:74, in Serializable.__init__(self, **kwargs)
73 def __init__(self, **kwargs: Any) -> None:
---> 74 super().__init__(**kwargs)
75 self._lc_kwargs = kwargs
File ~/localGPT/venv/lib/python3.10/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for Generation
text
str type expected (type=type_error.str)
```
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains.summarize import load_summarize_chain
from langchain.prompts import PromptTemplate
from langchain.document_loaders import PDFMinerLoader
from langchain.llms import LlamaCpp
kwargs = {
"model_path": model_path,
"n_ctx": 4096,
"max_tokens": 4096,
"seed": 42,
"n_gpu_layers": -1,
"verbose": True,
}
llm = LlamaCpp(**kwargs)
file = 'test_file.pdf'
loader = PDFMinerLoader(file)
docs = loader.load_and_split()
chain = load_summarize_chain(llm, chain_type='map_reduce')
summary = chain.run(docs)
print(summary)
```
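(One quick check that may help isolate this, added as a hedged suggestion while editing: the `Generation(text=...)` validation error means `LlamaCpp._call` returned something that is not a plain string, so it is worth confirming what the model returns for a trivial prompt before involving the chain.)

```python
# Hedged diagnostic: the summarize chain only fails later, so first confirm the raw LLM output type.
out = llm("Say hello in one short sentence.")
print(type(out))   # expected: <class 'str'>; anything else reproduces the Generation validation error
print(out)
```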
### Expected behavior
The chain should provide a summary of the Documents provided | load_summarize_chain ValidationError, str type expected | https://api.github.com/repos/langchain-ai/langchain/issues/12336/comments | 4 | 2023-10-26T12:47:28Z | 2024-02-10T16:09:42Z | https://github.com/langchain-ai/langchain/issues/12336 | 1,963,490,327 | 12,336 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello Team,
I am trying to build a QA system with text files as its knowledge base. While creating embeddings I have provided different metadata tags to documents.
I want to use this metadata to filter documents before sending them to LLM to generate answer.
I am using as_retriever() method to filter the documents based on 'key':'value' pair. Code below
```
vectordb.as_retriever(
search_type="similarity",
search_kwargs = {
'filter' : {'key1' : 'value1'}
}
)
```
This method works great to filter out the documents when I am using ChromaDB as VectorStore, but does not work when I use Neo4j as VectorStore. It returns the same results with or without filter using Neo4j
To figure out the issue, I checked langchain's source code for implementation of ChromaDB and Neo4j Vectorstore.
In ChomaDB the similarity_search() method is implemented as follows -
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/chroma.py

Here the filter parameter is passed to the next function - self.similarity_search_with_score(query, k, filter=filter).
But when I checked the same method for Neo4j
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/neo4j_vector.py

Here the filter parameter or the **kwargs is not passed to the next function -
self.similarity_search_by_vector(
embedding=embedding,
k=k,
query=query,
)
Please check if this is what is causing the issue.
Also let me know if something is wrong in my implementation.
Thanks in Advance
### Suggestion:
_No response_ | Unable to filter results in Neo4j Vectorstore.as_retriever() method using filter paramter in search_kwargs | https://api.github.com/repos/langchain-ai/langchain/issues/12335/comments | 11 | 2023-10-26T11:08:53Z | 2024-08-06T16:07:32Z | https://github.com/langchain-ai/langchain/issues/12335 | 1,963,275,005 | 12,335 |
[
"hwchase17",
"langchain"
]
| I am trying to use langserve with langchain llamacpp like this (chain.py):
```python
import logging   # used below by logging.info (missing from the original snippet)
import torch     # used below by torch.cuda.is_available() (missing from the original snippet)

from flask import Flask, jsonify, request
from langchain.chains import RetrievalQA, ConversationalRetrievalChain
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from prompt_template_utils import get_prompt_template
from langchain.vectorstores import Chroma, FAISS
from werkzeug.utils import secure_filename
from constants import CHROMA_SETTINGS, EMBEDDING_MODEL_NAME, PERSIST_DIRECTORY, MODEL_ID, MODEL_BASENAME
DEVICE_TYPE = "cuda" if torch.cuda.is_available() else "cpu"
SHOW_SOURCES = True
logging.info(f"Running on: {DEVICE_TYPE}")
logging.info(f"Display Source Documents set to: {SHOW_SOURCES}")
MODEL_PATH = '../llama-2-7b-32k-instruct.Q6_K.gguf'
def load_model():
    callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
    n_gpu_layers = 30  # Change this value based on your model and your GPU VRAM pool.
    n_batch = 512  # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
    llm = LlamaCpp(
        model_path=MODEL_PATH,
        n_ctx=8192,
        n_gpu_layers=n_gpu_layers,
        n_batch=n_batch,
        repeat_penalty=1,
        temperature=0.2,
        max_tokens=100,
        top_p=0.9,
        top_k=50,
        rope_freq_scale=0.125,
        stop=["[INST]"],
        # callback_manager=callback_manager,
        streaming=True,
        verbose=True,  # Verbose is required to pass to the callback manager
    )
    return llm
EMBEDDINGS = HuggingFaceInstructEmbeddings(model_name=EMBEDDING_MODEL_NAME,
model_kwargs={"device": DEVICE_TYPE})
DB = FAISS.load_local(PERSIST_DIRECTORY, EMBEDDINGS)
RETRIEVER = DB.as_retriever(search_kwargs={'k': 16})
template = """\
[INST] <<SYS>>
{context}
<</SYS>>
[/INST]
[INST]{question}[/INST]
"""
LLM = load_model()#load_model(device_type=DEVICE_TYPE, model_id=MODEL_ID, model_basename=MODEL_BASENAME)
QA_CHAIN_PROMPT = PromptTemplate.from_template(template)
QA = ConversationalRetrievalChain.from_llm(LLM, retriever=RETRIEVER,
combine_docs_chain_kwargs={"prompt": QA_CHAIN_PROMPT})
```
The app.py file has this:
```
from fastapi import FastAPI
from langserve import add_routes
from chain import QA
app = FastAPI(title="Retrieval App")
add_routes(app, QA)
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)`
However when running /stream route, I get this warning:
```
INFO:     Started server process [491117]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: 127.0.0.1:48024 - "GET /docs HTTP/1.1" 200 OK
INFO: 127.0.0.1:48024 - "GET /openapi.json HTTP/1.1" 200 OK
INFO: 127.0.0.1:53430 - "POST /stream HTTP/1.1" 200 OK
/home/zeeshan/miniconda3/envs/llama/lib/python3.10/site-packages/langchain/llms/llamacpp.py:352: RuntimeWarning: coroutine 'AsyncCallbackManagerForLLMRun.on_llm_new_token' was never awaited
run_manager.on_llm_new_token(
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
```
And the output is like this:

This seems not to be streaming output tokens. Does langserve even support langchain llamacpp or I am doing something wrong?
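(A way to sanity-check the server side independently of the browser docs page - a hedged sketch added while editing; it assumes the `langserve` client package and the default root path used by `add_routes`, and the exact input keys for `ConversationalRetrievalChain` may need adjusting:)

```python
# Hedged client-side check: stream from the langserve endpoint and print chunks as they arrive.
from langserve import RemoteRunnable

remote_chain = RemoteRunnable("http://localhost:8000/")  # assumes add_routes mounted the chain at the root path
for chunk in remote_chain.stream({"question": "What is in the knowledge base?", "chat_history": []}):
    print(chunk, flush=True)
```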
| langserve with Llamacpp | https://api.github.com/repos/langchain-ai/langchain/issues/12441/comments | 11 | 2023-10-26T10:44:09Z | 2024-05-02T16:04:29Z | https://github.com/langchain-ai/langchain/issues/12441 | 1,966,025,938 | 12,441 |
[
"hwchase17",
"langchain"
]
| ### Need route name: it appears in the console but not in the output
I am using a router chain to route my chains with MultiPromptChain. The routing works and the console shows which route was taken, but the route name does not appear in the output variable - it is only printed to the console. I want it in a variable. I tried every available option, such as return_intermediate_steps=True and return_route_name=True, but it is not working.
Below is the code:
````python
destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = ChatPromptTemplate.from_template(template=prompt_template)
    chain = LLMChain(llm=llm, prompt=prompt)
    destination_chains[name] = chain

destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
default_prompt = ChatPromptTemplate.from_template("{input}")
default_chain = LLMChain(llm=llm, prompt=default_prompt)

MULTI_PROMPT_ROUTER_TEMPLATE = """Given a text input to a \
language model select the model prompt best suited for the input. \
You will be given the names of the available prompts and a \
description of what the prompt is best suited for. \
<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
"destination": string \ name of the prompt to use or "DEFAULT"
"next_inputs": string \ a potentially modified version of the original input
}}}}
```
REMEMBER: "destination" MUST be one of the candidate prompt \
names specified below.
REMEMBER: "next_inputs" can just be the original input \
if you don't think any modifications are needed.
<< CANDIDATE PROMPTS >>
{destinations}
<< INPUT >>
{{input}}
Output should be only in below format:
Destination used: Output of it
"""

router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(
    destinations=destinations_str
)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)

chain = MultiPromptChain(router_chain=router_chain,
                         destination_chains=destination_chains,
                         default_chain=default_chain, verbose=True
                         )

inputQues = "How to turn on TV"
output = chain.run(input=inputQues)
````
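(One way to get the chosen destination into a variable - a hedged sketch added while editing, relying on the router chain's `route()` method, and an assumption rather than something verified against this exact version:)

```python
# Ask the router first, then run the chosen destination chain yourself,
# so the route name is available as a normal Python variable.
route = router_chain.route({"input": inputQues})
route_name = route.destination        # may be None when the router picks the default route
next_inputs = route.next_inputs

if route_name is None or route_name not in destination_chains:
    answer = default_chain.run(**next_inputs)
else:
    answer = destination_chains[route_name].run(**next_inputs)

print(route_name, answer)
```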
### Suggestion:
_No response_ | Issue: Can't get verbose output in a variable in python | https://api.github.com/repos/langchain-ai/langchain/issues/12330/comments | 4 | 2023-10-26T08:31:31Z | 2024-02-10T16:09:47Z | https://github.com/langchain-ai/langchain/issues/12330 | 1,962,991,552 | 12,330 |
[
"hwchase17",
"langchain"
]
| ### System Info
macOS Sonoma
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just use any chain that supports return_source_documents, try setting it to true.
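(For reproduction it may matter how the chain is invoked - a hedged note added while editing, not from the original report: `run()` only supports chains with a single output key, so with `return_source_documents=True` the chain has to be called like a dict. A minimal sketch with `RetrievalQA`; `llm` and `vectordb` are assumed placeholders:)

```python
from langchain.chains import RetrievalQA

qa = RetrievalQA.from_chain_type(
    llm=llm,                            # any chat/LLM instance
    retriever=vectordb.as_retriever(),  # any vector store retriever
    return_source_documents=True,
)

result = qa({"query": "What does the document say about X?"})  # not qa.run(...), which allows only one output key
print(result["result"])
print(result["source_documents"])
```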
### Expected behavior
It will raise a "too many output keys" error. Please fix this; otherwise the metadata on items does not make sense, since you cannot return the source documents anyway. | return_source documents does not work | https://api.github.com/repos/langchain-ai/langchain/issues/12329/comments | 4 | 2023-10-26T08:26:40Z | 2024-02-10T16:09:52Z | https://github.com/langchain-ai/langchain/issues/12329 | 1,962,983,060 | 12,329 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using Langchain + Chroma to do a Q&A program with a csv document as its knowledge base.
The CSV file looks like below:
[![enter image description here][1]][1]
Here is the CSV file:
https://1drv.ms/u/s!Asflam6BEzhjgbkdegCGfZ7FI4O1Og?e=2X6ior
And code for creating embedding:
```python
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.chroma import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter as RCTS

file_path = "Test.csv"
doc_pages = []
csv_loader = CSVLoader(file_path)
doc_pages = csv_loader.load()
print(f"Extracted {file_path} with {len(doc_pages)} pages...")

splitter = RCTS(chunk_size=3000, chunk_overlap=300)
splitted_docs = splitter.split_documents(doc_pages)

embedding = OpenAIEmbeddings()
persist_directory = "docs_t/chroma/"
vectordb = Chroma.from_documents(
    documents=splitted_docs,
    embedding=embedding,
    persist_directory=persist_directory
)
vectordb.persist()
print(vectordb._collection.count())
```
Here is the testing code:
```python
result = vectordb.similarity_search("what is the Support Item Name for 01_003_0107_1_1", k=3)
for r in result:
    print(r.page_content, end="\n\n")
```
And I see that this testing code returns only non-relevant rows.
Which part leads to this issue?
[1]: https://i.stack.imgur.com/Ev3cn.png
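(A quick way to narrow this down - a hedged suggestion added while editing, assuming the Chroma wrapper exposes `where_document` filtering: look at the similarity scores and at what was actually stored for the row in question, to tell an embedding-quality problem apart from an indexing problem.)

```python
# Hedged diagnostic sketch.
# 1) Inspect scores alongside the hits (lower distance / higher relevance = closer match).
for doc, score in vectordb.similarity_search_with_score(
    "what is the Support Item Name for 01_003_0107_1_1", k=3
):
    print(round(score, 4), doc.page_content[:120])

# 2) Confirm the row made it into the store at all (substring match on the stored text).
stored = vectordb.get(where_document={"$contains": "01_003_0107_1_1"})
print(stored["documents"])
```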
### Suggestion:
_No response_ | Issue: Similarity_search on Vector Store does not work. | https://api.github.com/repos/langchain-ai/langchain/issues/12326/comments | 12 | 2023-10-26T07:37:18Z | 2024-02-15T16:08:11Z | https://github.com/langchain-ai/langchain/issues/12326 | 1,962,901,400 | 12,326 |
[
"hwchase17",
"langchain"
]
| ### System Info
GPT-4 can now read images. How can images and text in PDF or Word documents be combined when calling the API?
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
document loader?
### Expected behavior
maybe we can have a document loader that loads text and images at the same time. | GPT-4 reads the images | https://api.github.com/repos/langchain-ai/langchain/issues/12323/comments | 3 | 2023-10-26T06:38:05Z | 2024-02-10T16:10:02Z | https://github.com/langchain-ai/langchain/issues/12323 | 1,962,808,683 | 12,323 |
[
"hwchase17",
"langchain"
]
| I defined a custom LLM and a custom search tool and used them with an agent for testing, but I ran into problems parsing the LLM response - it is prone to output-parsing errors after multiple rounds of dialogue. I suspect something is wrong with the stop logic of the custom LLM; please help me analyze it.
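(For context, a hedged sketch of a common way to implement stop handling in a custom LLM, added while editing and not the author's code: langchain ships an `enforce_stop_tokens` helper that truncates at the first occurrence of any stop sequence, which avoids the `content[:content.find(stop[0])]` pattern silently dropping the last character when the stop string is absent, since `find()` returns -1 in that case. The author's agent setup and custom LLM follow below.)

```python
# Hedged sketch, not the author's code.
from langchain.llms.utils import enforce_stop_tokens

def apply_stop(text: str, stop=None) -> str:
    """Cut the completion at the first stop sequence, if any were given."""
    return enforce_stop_tokens(text, stop) if stop else text
```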
```
if __name__ == '__main__':
    gpt35_llm = gpt35_llm()
    tools = [GoogleSearchTool(), BaiduSearchTool()]
    agent = initialize_agent(tools, gpt35_llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, handle_parsing_errors=True, verbose=True)
    # Query (Chinese): "Which countries are in the top three of the Hangzhou Asian Games medal table?
    # Who won the first gold medal of the Hangzhou Asian Games?"
    print(agent.run('杭州亚运会的奖牌榜前三名是哪些国家?杭州亚运会的第一枚金牌是谁获得的?'))
```
```
class gpt35_llm(LLM):
    model: str = "gpt-3.5"

    @property
    def _llm_type(self) -> str:
        return "gpt-3.5"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        response = gpt_35_chat(prompt, None)
        if stop is not None:
            print(f'stop:{stop}')
            content = response['content']
            output = content[:content.find(stop[0])]
            return output
        if response['code'] != 200:
            raise ValueError(response['content'])
        return response['content']

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"model": self.model}
```
| How do I define the stop logic for a custom llm in agent | https://api.github.com/repos/langchain-ai/langchain/issues/12322/comments | 2 | 2023-10-26T06:35:00Z | 2024-02-06T16:13:26Z | https://github.com/langchain-ai/langchain/issues/12322 | 1,962,804,701 | 12,322 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version: 0.0320
python version: 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I want to clear the history of ConversationBufferMemory. I used ConversationBufferMemory to maintain chat history, but I don't know how to delete that history. If anyone knows, please help me out.
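(For what it's worth - a hedged sketch added while editing, based on the memory classes' shared base rather than verified against this exact version - buffer memories expose a `clear()` method:)

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(return_messages=True)
memory.chat_memory.add_user_message("hi")
memory.chat_memory.add_ai_message("hello!")

memory.clear()                             # wipes the stored conversation
print(memory.load_memory_variables({}))   # -> {'history': []}
```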
### Expected behavior
ConversationBufferMemory history should be clear. | Conversation Buffer Memory clear histroy | https://api.github.com/repos/langchain-ai/langchain/issues/12319/comments | 2 | 2023-10-26T04:56:32Z | 2024-02-06T16:13:31Z | https://github.com/langchain-ai/langchain/issues/12319 | 1,962,704,317 | 12,319 |
[
"hwchase17",
"langchain"
]
| ### System Info
In the latest langchain code, AzureSearch at langchain/libs/langchain/langchain/vectorstores/azuresearch.py:
in the semantic_hybrid_search_with_score function, when returning search results, the score should be @search.rerankerScore, not @search.score, for hybrid semantic search, according to the latest Azure doc here: https://learn.microsoft.com/en-us/azure/search/hybrid-search-ranking
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
the code is not following the latest Azure Search document: https://learn.microsoft.com/en-us/azure/search/hybrid-search-ranking
### Expected behavior
float(result["@search.rerankerScore"]) (instead of float(result["@search.score"])) | should be @search.rerankerScore, not @search.score for hybrid semantic search (Azure Cognitive Search) | https://api.github.com/repos/langchain-ai/langchain/issues/12317/comments | 2 | 2023-10-26T04:00:06Z | 2024-02-06T16:13:36Z | https://github.com/langchain-ai/langchain/issues/12317 | 1,962,658,507 | 12,317 |
[
"hwchase17",
"langchain"
]
| ### System Info
I got different embedding results using OpenAIEmbeddings and the original openai library. Besides, the embeddings from both OpenAIEmbeddings and openai change from time to time: sometimes repeated requests return the same results, but sometimes they differ, especially after I exceed the rate limit.
I am using langchain-0.0.321. I dug a little deeper into the OpenAIEmbeddings source code and found that there is a preprocessing step using the tokenizer. I tried using the same tokenizer with the original openai library and got the same result as without it, so I don't think that is the problem. Can someone explain why the results from different libraries and different requests may change?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import openai
from langchain.embeddings.openai import OpenAIEmbeddings
text = 'where are you going'
vec1 = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY).embed_query(text)
vec2 = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY).embed_query(text)
if not vec1 == vec2:
    print('vec1:', vec1[:3])
    print('vec2:', vec2[:3])

vec3 = openai.Embedding.create(input=[text], model='text-embedding-ada-002')['data'][0]['embedding']
if not (vec1 == vec3 or vec2 == vec3):
    print('vec1:', vec1[:3])
    print('vec2:', vec2[:3])
    print('vec3:', vec3[:3])
```
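(A note that may help interpret the numbers, added while editing as a hedged aside rather than something from langchain's docs: the embeddings endpoint is not bit-for-bit deterministic, so exact equality checks will usually fail even when the vectors are practically identical; comparing with cosine similarity shows how close they really are.)

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vec1, vec2))  # typically ~0.9999 even when the raw floats differ slightly
print(cosine_similarity(vec1, vec3))
```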
### Expected behavior
the results are as below
```
vec1: [0.0004729258755920449, -0.011402048647760918, -0.012062848996486476]
vec2: [0.00046853933082822383, -0.01143268296919534, -0.012086534739296315]
vec1: [0.0004729258755920449, -0.011402048647760918, -0.012062848996486476]
vec2: [0.00046853933082822383, -0.01143268296919534, -0.012086534739296315]
vec3: [0.00040566621464677155, -0.011434691026806831, -0.012049459852278233]
```
| OpenAIEmbeddings vector different from the openai library and changes from time to time | https://api.github.com/repos/langchain-ai/langchain/issues/12314/comments | 4 | 2023-10-26T02:45:52Z | 2024-07-11T01:57:06Z | https://github.com/langchain-ai/langchain/issues/12314 | 1,962,600,617 | 12,314 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The langchain ElevenLabs API integration should accommodate the very important parameter `voice`.
### Motivation
Without the voice parameter, the Elevenlabs langchain integration is unusable because `play()` switches the voice randomly with every execution.
### Your contribution
I tracked the relevant code in the [Elevenlabs-langchain integration API](https://api.python.langchain.com/en/latest/_modules/langchain/tools/eleven_labs/text2speech.html#ElevenLabsText2SpeechTool.stream_speech) down to this section:
```
    def _run(
        self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool."""
        elevenlabs = _import_elevenlabs()
        try:
            speech = elevenlabs.generate(text=query, model=self.model)
            with tempfile.NamedTemporaryFile(
                mode="bx", suffix=".wav", delete=False
            ) as f:
                f.write(speech)
            return f.name
        except Exception as e:
            raise RuntimeError(f"Error while running ElevenLabsText2SpeechTool: {e}")

    def play(self, speech_file: str) -> None:
        """Play the text as speech."""
        elevenlabs = _import_elevenlabs()
        with open(speech_file, mode="rb") as f:
            speech = f.read()
        elevenlabs.play(speech)

    def stream_speech(self, query: str) -> None:
        """Stream the text as speech as it is generated.
        Play the text in your speakers."""
        elevenlabs = _import_elevenlabs()
        speech_stream = elevenlabs.generate(text=query, model=self.model, stream=True)
        elevenlabs.stream(speech_stream)
```
The calls to `elevenlabs.generate` and `elevenlabs.play` must be extended with the `voice` parameter.
After this change, the langchain calls `run`, `play`, and importantly `stream_speech` could be made analogous to the calls from the ElevenLabs API, e.g. as seen [in this repo](https://github.com/langchain-ai/langchain/issues/1076#issuecomment-1552363555):
```
prediction = agent_chain.run(input=user_input.text)
audio = generate(
text=prediction,
voice="Adam",
model="eleven_monolingual_v1"
)
```
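(One possible shape of the requested change - a hedged sketch of a subclass-style workaround added while editing, with assumed names and defaults, not the actual langchain implementation:)

```python
import tempfile

from elevenlabs import generate
from langchain.tools.eleven_labs.text2speech import ElevenLabsText2SpeechTool


class VoiceAwareText2SpeechTool(ElevenLabsText2SpeechTool):
    """Hypothetical wrapper that pins a specific ElevenLabs voice."""

    voice: str = "Adam"  # assumed default; any voice name available on the ElevenLabs account should work

    def _run(self, query: str, run_manager=None) -> str:
        speech = generate(text=query, model=self.model, voice=self.voice)
        with tempfile.NamedTemporaryFile(mode="bx", suffix=".wav", delete=False) as f:
            f.write(speech)
        return f.name
```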
| Elevenlabs-langchain API should integrate very important parameter 'voice' | https://api.github.com/repos/langchain-ai/langchain/issues/12312/comments | 5 | 2023-10-26T02:08:00Z | 2024-05-21T16:41:13Z | https://github.com/langchain-ai/langchain/issues/12312 | 1,962,571,837 | 12,312 |
[
"hwchase17",
"langchain"
]
| ### System Info
python=`3.11`
langchain=`0.0.314`
pydantic=`2.3.0`
pydantic_core=`2.6.3`
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
From the documentation, I can create a fake tool that works as follows:
```python
# Define the tool
class FakeSearchTool(BaseTool):
    name: str | None = "custom_search"
    description: str | None = "useful for when you need to answer questions about current events"

    def _run(
        self, query: str
    ) -> str:
        """Use the tool."""
        print("Query:", query)
        return "Used the tool, and got the answer!"

    async def _arun(
        self, query: str
    ) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("custom_search does not support async")
# Test if the tool is working
search_tool = FakeSearchTool()
print(search_tool("What is my answer?"))
```
> Query: What is my answer?
> Used the tool, and got the answer!
My case however requires me to give an initial configuration object to every tool, something like this:
```python
from langchain.tools.base import BaseTool
# Create the config class
class Config:
    def __init__(self, init_arg1: str, init_arg2: str):
        self.init_arg1 = init_arg1
        self.init_arg2 = init_arg2

# Create the tool
class FakeSearchTool(BaseTool):
    name: str | None = "custom_search"
    description: str | None = "useful for when you need to answer questions about current events"
    config: Config

    def __init__(self, config: Config):
        self.config = config

    def _run(
        self, query: str
    ) -> str:
        """Use the tool."""
        print("Query:", query)
        print("Obtained config:", self.config.init_arg1, self.config.init_arg2)
        return "Used the tool, and got the answer using the config!"

    async def _arun(
        self, query: str
    ) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("custom_search does not support async")

# Test if the tool is working
search_tool = FakeSearchTool(
    config=Config(init_arg1="arg1", init_arg2="arg2")
)
print(search_tool("What is my answer?"))
```
However, this does not work and return the following error:
> AttributeError: 'FakeSearchTool' object has no attribute '__fields_set__'
<details>
<summary>Stack Trace</summary>
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/ai-service/notebooks/agent-with-elasticsearch-tool.ipynb Cell 18 line 2
20 """Use the tool asynchronously."""
21 raise NotImplementedError("custom_search does not support async")
---> 23 search_tool = FakeSearchTool(
24 config=Config(init_arg1="arg1", init_arg2="arg2")
25 )
26 print(search_tool("What is my answer?"))
/ai-service/notebooks/agent-with-elasticsearch-tool.ipynb Cell 18 line 7
6 def __init__(self, config: Config):
----> 7 self.config = config
File ~/.pyenv/versions/3.11.5/envs/ai-service/lib/python3.11/site-packages/pydantic/v1/main.py:405, in BaseModel.__setattr__(self, name, value)
402 else:
403 self.__dict__[name] = value
--> 405 self.__fields_set__.add(name)
AttributeError: 'FakeSearchTool' object has no attribute '__fields_set__'
```
</details>
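For reference, a minimal sketch of a pattern that avoids this error: declare the config as a regular pydantic field and let `BaseTool`'s own `__init__` populate it instead of overriding `__init__`. The class and field names below are illustrative, and the config is assumed to be a pydantic model (imported from langchain's bundled v1 namespace so it matches `BaseTool`):

```python
from langchain.pydantic_v1 import BaseModel
from langchain.tools.base import BaseTool


class ToolConfig(BaseModel):
    init_arg1: str
    init_arg2: str


class FakeSearchTool(BaseTool):
    name: str = "custom_search"
    description: str = "useful for when you need to answer questions about current events"
    config: ToolConfig  # declared as a field, so pydantic's __init__ sets it (and __fields_set__)

    def _run(self, query: str) -> str:
        """Use the tool."""
        print("Query:", query)
        print("Obtained config:", self.config.init_arg1, self.config.init_arg2)
        return "Used the tool, and got the answer using the config!"

    async def _arun(self, query: str) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("custom_search does not support async")


search_tool = FakeSearchTool(config=ToolConfig(init_arg1="arg1", init_arg2="arg2"))
print(search_tool.run("What is my answer?"))
```

The error itself comes from skipping `BaseModel.__init__`: a custom `__init__` that assigns attributes directly never initializes pydantic's internal state, so `__fields_set__` does not exist when `__setattr__` runs.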
### Expected behavior
I expect to get the following output out of the tool:
> Query: What is my answer?
> Obtained config: arg1 arg2
> Used the tool, and got the answer using the config! | AttributeError: 'CustomTool' object has no attribute 'fields_set' | https://api.github.com/repos/langchain-ai/langchain/issues/12304/comments | 6 | 2023-10-25T21:55:37Z | 2023-11-25T18:37:22Z | https://github.com/langchain-ai/langchain/issues/12304 | 1,962,296,260 | 12,304 |
[
"hwchase17",
"langchain"
]
| ### System Info
I start a jupyter notebook with
```python
from langchain.document_loaders import CSVLoader
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.indexes import VectorstoreIndexCreator

file = 'OutdoorClothingCatalog_1000.csv'
loader = CSVLoader(file_path=file)

index = VectorstoreIndexCreator(
    vectorstore_cls=DocArrayInMemorySearch
).from_loaders([loader])

query = "Please list all your shirts with sun protection \
in a table in markdown and summarize each one."

response = index.query(query)
```
Unfortunately this leads to
```log
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
[/workspaces/langchaintutorial/L4-QnA.ipynb](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/workspaces/langchaintutorial/L4-QnA.ipynb) Zelle 17 line 1
----> [1](vscode-notebook-cell://codespaces%2Bautomatic-funicular-qpwv7xqr7j249j6/workspaces/langchaintutorial/L4-QnA.ipynb#X20sdnNjb2RlLXJlbW90ZQ%3D%3D?line=0) response = index.query(query)
File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/indexes/vectorstore.py:45](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/indexes/vectorstore.py:45), in VectorStoreIndexWrapper.query(self, question, llm, retriever_kwargs, **kwargs)
41 retriever_kwargs = retriever_kwargs or {}
42 chain = RetrievalQA.from_chain_type(
43 llm, retriever=self.vectorstore.as_retriever(**retriever_kwargs), **kwargs
44 )
---> 45 return chain.run(question)
File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py:505](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py:505), in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
503 if len(args) != 1:
504 raise ValueError("`run` supports only one positional argument.")
--> 505 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
506 _output_key
507 ]
509 if kwargs and not args:
510 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
511 _output_key
512 ]
File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py:310](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py:310), in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
308 except BaseException as e:
309 run_manager.on_chain_error(e)
--> 310 raise e
311 run_manager.on_chain_end(outputs)
312 final_outputs: Dict[str, Any] = self.prep_outputs(
313 inputs, outputs, return_only_outputs
314 )
File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py:304](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py:304), in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
297 run_manager = callback_manager.on_chain_start(
298 dumpd(self),
299 inputs,
300 name=run_name,
301 )
302 try:
303 outputs = (
--> 304 self._call(inputs, run_manager=run_manager)
305 if new_arg_supported
306 else self._call(inputs)
307 )
308 except BaseException as e:
309 run_manager.on_chain_error(e)
File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py:136](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py:136), in BaseRetrievalQA._call(self, inputs, run_manager)
132 accepts_run_manager = (
133 "run_manager" in inspect.signature(self._get_docs).parameters
134 )
135 if accepts_run_manager:
--> 136 docs = self._get_docs(question, run_manager=_run_manager)
137 else:
138 docs = self._get_docs(question) # type: ignore[call-arg]
File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py:216](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py:216), in RetrievalQA._get_docs(self, question, run_manager)
209 def _get_docs(
210 self,
211 question: str,
212 *,
213 run_manager: CallbackManagerForChainRun,
214 ) -> List[Document]:
215 """Get docs."""
--> 216 return self.retriever.get_relevant_documents(
217 question, callbacks=run_manager.get_child()
218 )
File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/schema/retriever.py:211](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/schema/retriever.py:211), in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
209 except Exception as e:
210 run_manager.on_retriever_error(e)
--> 211 raise e
212 else:
213 run_manager.on_retriever_end(
214 result,
215 **kwargs,
216 )
File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/schema/retriever.py:204](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/schema/retriever.py:204), in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
202 _kwargs = kwargs if self._expects_other_args else {}
203 if self._new_arg_supported:
--> 204 result = self._get_relevant_documents(
205 query, run_manager=run_manager, **_kwargs
206 )
207 else:
208 result = self._get_relevant_documents(query, **_kwargs)
File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/schema/vectorstore.py:585](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/schema/vectorstore.py:585), in VectorStoreRetriever._get_relevant_documents(self, query, run_manager)
581 def _get_relevant_documents(
582 self, query: str, *, run_manager: CallbackManagerForRetrieverRun
583 ) -> List[Document]:
584 if self.search_type == "similarity":
--> 585 docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
586 elif self.search_type == "similarity_score_threshold":
587 docs_and_similarities = (
588 self.vectorstore.similarity_search_with_relevance_scores(
589 query, **self.search_kwargs
590 )
591 )
File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/vectorstores/docarray/base.py:127](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/vectorstores/docarray/base.py:127), in DocArrayIndex.similarity_search(self, query, k, **kwargs)
115 def similarity_search(
116 self, query: str, k: int = 4, **kwargs: Any
117 ) -> List[Document]:
118 """Return docs most similar to query.
119
120 Args:
(...)
125 List of Documents most similar to the query.
126 """
--> 127 results = self.similarity_search_with_score(query, k=k, **kwargs)
128 return [doc for doc, _ in results]
File [/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/vectorstores/docarray/base.py:106](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/vectorstores/docarray/base.py:106), in DocArrayIndex.similarity_search_with_score(self, query, k, **kwargs)
94 """Return docs most similar to query.
95
96 Args:
(...)
103 Lower score represents more similarity.
104 """
105 query_embedding = self.embedding.embed_query(query)
--> 106 query_doc = self.doc_cls(embedding=query_embedding) # type: ignore
107 docs, scores = self.doc_index.find(query_doc, search_field="embedding", limit=k)
109 result = [
110 (Document(page_content=doc.text, metadata=doc.metadata), score)
111 for doc, score in zip(docs, scores)
112 ]
File [/usr/local/python/3.10.8/lib/python3.10/site-packages/pydantic/main.py:164](https://vscode-remote+codespaces-002bautomatic-002dfunicular-002dqpwv7xqr7j249j6.vscode-resource.vscode-cdn.net/usr/local/python/3.10.8/lib/python3.10/site-packages/pydantic/main.py:164), in BaseModel.__init__(__pydantic_self__, **data)
162 # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
163 __tracebackhide__ = True
--> 164 __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
ValidationError: 2 validation errors for DocArrayDoc
text
Field required [type=missing, input_value={'embedding': [0.00328089... -0.021201754016760964]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.4/v/missing
metadata
Field required [type=missing, input_value={'embedding': [0.00328089... -0.021201754016760964]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.4/v/missing
```
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import CSVLoader
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.indexes import VectorstoreIndexCreator

file = 'OutdoorClothingCatalog_1000.csv'
loader = CSVLoader(file_path=file)

index = VectorstoreIndexCreator(
    vectorstore_cls=DocArrayInMemorySearch
).from_loaders([loader])

query = "Please list all your shirts with sun protection \
in a table in markdown and summarize each one."

response = index.query(query)
```
### Expected behavior
List of articles with sun protection. The script is an example from deeplearning.ai
| Error querying DocArrayInMemorySearch | https://api.github.com/repos/langchain-ai/langchain/issues/12302/comments | 8 | 2023-10-25T20:49:36Z | 2024-05-05T06:11:00Z | https://github.com/langchain-ai/langchain/issues/12302 | 1,962,205,964 | 12,302 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version == 0.0.323
python == 3.10
I have a simple piece of code to query a Postgres DB:

```python
from langchain.llms import Bedrock
from langchain.utilities import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain

db = SQLDatabase.from_uri(database_uri="postgresql://xxx:yyy:zzzzz")

llm = Bedrock(
    credentials_profile_name="langchain",
    model_id="anthropic.claude-v2"
)

db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
db_chain.run("how many lab reports are there?")
```
Get the following error:
```
Entering new SQLDatabaseChain chain...
how many lab reports are there?
SQLQuery:Here is the query and response for your question:
Question: how many lab reports are there?
SQLQuery: SELECT COUNT(*) FROM lab_reportsTraceback (most recent call last):
File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.SyntaxError: syntax error at or near "Here"
LINE 1: Here is the query and response for your question:
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ec2-user/environment/Langchain/langchain-sql.py", line 12, in <module>
db_chain.run("how many lab reports are there?")
File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 505, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/langchain_experimental/sql/base.py", line 198, in _call
raise exc
File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/langchain_experimental/sql/base.py", line 143, in _call
result = self.database.run(sql_cmd)
File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/langchain/utilities/sql_database.py", line 429, in run
result = self._execute(command, fetch)
File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/langchain/utilities/sql_database.py", line 407, in _execute
cursor = connection.execute(text(command))
File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1416, in execute
return meth(
File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 516, in _execute_on_connection
return connection._execute_clauseelement(
File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1639, in _execute_clauseelement
ret = self._execute_context(
File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1848, in _execute_context
return self._exec_single_context(
File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1988, in _exec_single_context
self._handle_dbapi_exception(
File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2343, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "/home/ec2-user/miniconda3/envs/langchain3.10/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.SyntaxError) syntax error at or near "Here"
LINE 1: Here is the query and response for your question:
^
[SQL: Here is the query and response for your question:
Question: how many lab reports are there?
SQLQuery: SELECT COUNT(*) FROM lab_reports]
(Background on this error at: https://sqlalche.me/e/20/f405)
```
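The traceback shows the model prefixing the SQL with conversational text ("Here is the query and response for your question:"), which then gets sent to Postgres verbatim. One possible mitigation — a sketch, not a verified fix for Claude on Bedrock — is to pass a custom prompt that tells the model to emit only the SQL statement; `from_llm` accepts a `prompt` argument for this:

```python
from langchain.prompts import PromptTemplate
from langchain_experimental.sql import SQLDatabaseChain

# Hypothetical prompt; the input variables match the ones the chain fills in by default.
PROMPT = PromptTemplate(
    input_variables=["input", "table_info", "top_k", "dialect"],
    template=(
        "You are a {dialect} expert. Given the question below, return ONLY one SQL statement, "
        "with no explanation or preamble.\n\n"
        "Only use these tables:\n{table_info}\n\n"
        "Return at most {top_k} rows unless the question asks otherwise.\n\n"
        "Question: {input}\n"
        "SQLQuery:"
    ),
)

db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True)
```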
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Execute the code with a postgres DB
### Expected behavior
Expect the query to execute and get results. | SQLDatabaseChain Error | https://api.github.com/repos/langchain-ai/langchain/issues/12295/comments | 2 | 2023-10-25T18:56:26Z | 2024-02-28T16:08:30Z | https://github.com/langchain-ai/langchain/issues/12295 | 1,962,036,029 | 12,295 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Terminal Output:
> Entering new SQLDatabaseChain chain...
how many tracks have pop genre?
SQLQuery:SELECT COUNT(*) FROM tracks WHERE "GenreId" = 3 LIMIT 5;
SQLResult: [(374,)]
Answer:374 tracks have pop genre.
> Finished chain.
How do I access and print the SQLQuery and SQLResult in my UI?
I'm only able to print the final answer here in `response`:
```python
with st.chat_message("bot"):
    st.markdown(response)
```
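For reference, a hedged sketch of one way to surface those pieces, assuming the chain is an `SQLDatabaseChain` built via `from_llm`: enabling `return_intermediate_steps` and calling the chain with its dict interface returns the generated SQL and its raw result alongside the final answer.
```python
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, return_intermediate_steps=True)

output = db_chain("how many tracks have pop genre?")
answer = output["result"]                      # the natural-language answer
steps = output["intermediate_steps"]           # contains the generated SQL and the SQL result

with st.chat_message("bot"):
    st.markdown(answer)
    st.code(str(steps))                        # or pick out just the SQL/result entries you need
```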
### Suggestion:
_No response_ | Not able to print verbose result in interface. | https://api.github.com/repos/langchain-ai/langchain/issues/12290/comments | 4 | 2023-10-25T18:17:54Z | 2024-02-10T16:10:12Z | https://github.com/langchain-ai/langchain/issues/12290 | 1,961,978,458 | 12,290 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
How can I use csv_agent with langchain-experimental, given that importing csv_agent from langchain.agents now produces this error:
On 2023-10-27 this module will be be deprecated from langchain, and will be available from the langchain-experimental package.This code is already available in langchain-experimental.See https://github.com/langchain-ai/langchain/discussions/11680.
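For reference, a minimal sketch of the experimental import path — it assumes `langchain-experimental` is installed, an OpenAI key is configured, and the CSV path is hypothetical:

```python
from langchain.chat_models import ChatOpenAI
from langchain_experimental.agents.agent_toolkits import create_csv_agent

agent = create_csv_agent(
    ChatOpenAI(temperature=0),
    "titanic.csv",      # hypothetical file path
    verbose=True,
)
agent.run("How many rows are in the file?")
```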
### Idea or request for content:
_No response_ | langchain-experimental csv_agent how to? | https://api.github.com/repos/langchain-ai/langchain/issues/12287/comments | 2 | 2023-10-25T17:41:30Z | 2024-02-14T16:08:58Z | https://github.com/langchain-ai/langchain/issues/12287 | 1,961,919,942 | 12,287 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
There is an error in the code. [link](https://python.langchain.com/docs/use_cases/summarization#splitting-and-summarizing-in-a-single-chain)
```
from langchain.chains import AnalyzeDocumentChain
summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=chain, text_splitter=text_splitter)
summarize_document_chain.run(docs[0])
```
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[17], line 4
1 from langchain.chains import AnalyzeDocumentChain
3 summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=chain, text_splitter=text_splitter)
----> 4 summarize_document_chain.run(docs[0])
File ~/langchain/libs/langchain/langchain/chains/base.py:496, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
459 """Convenience method for executing chain.
460
461 The main difference between this method and `Chain.__call__` is that this
(...)
493 # -> "The temperature in Boise is..."
494 """
495 # Run at start to make sure this is possible/defined
--> 496 _output_key = self._run_output_key
498 if args and not kwargs:
499 if len(args) != 1:
File ~/langchain/libs/langchain/langchain/chains/base.py:445, in Chain._run_output_key(self)
442 @property
443 def _run_output_key(self) -> str:
444 if len(self.output_keys) != 1:
--> 445 raise ValueError(
446 f"`run` not supported when there is not exactly "
447 f"one output key. Got {self.output_keys}."
448 )
449 return self.output_keys[0]
ValueError: `run` not supported when there is not exactly one output key. Got ['output_text', 'intermediate_steps'].
```
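A hedged observation: the combine chain here apparently exposes two output keys (`output_text` and `intermediate_steps`), so `.run()` refuses to pick one. Calling the chain directly and selecting the summary key avoids the error — a sketch, assuming the default `input_document` input key:

```python
result = summarize_document_chain({"input_document": docs[0].page_content})
print(result["output_text"])
```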
### Idea or request for content:
There is nothing missing but the code needs to be accurate so the code returns the expected output. | DOC: Error in code for splitting and summarizing in a single chain | https://api.github.com/repos/langchain-ai/langchain/issues/12280/comments | 3 | 2023-10-25T15:56:28Z | 2024-02-10T16:10:22Z | https://github.com/langchain-ai/langchain/issues/12280 | 1,961,752,705 | 12,280 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain JS 0.0.172 on Debian Linux 12 using Node.js 18.15.0
With the following code I don't get the tokenUsage back from the chain call. I checked the docs, asked the AI, and did a Google search. Am I doing anything wrong?
```javascript
const chain = RetrievalQAChain.fromLLM(
  model,
  vectorStore.asRetriever(),
  {
    verbose: true,
  }
)
const res = await chain.call({
  context: "You assist a Systems Engineer. Be as concise as possible.",
  query: prompt,
})
console.log('usage:', res.tokenUsage);
```
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```javascript
import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { JSONLoader } from "langchain/document_loaders/fs/json";
import { CSVLoader } from "langchain/document_loaders/fs/csv";
import { OpenAI } from "langchain/llms/openai";
import { RetrievalQAChain, loadQARefineChain } from "langchain/chains";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import * as dotenv from 'dotenv';
dotenv.config()
const loader = new DirectoryLoader("./docs", {
".json": (path) => new JSONLoader(path),
".csv": (path) => new CSVLoader(path),
".txt": (path) => new TextLoader(path)
})
const normalizeDocuments = (docs) => {
return docs.map((doc) => {
if (typeof doc.pageContent === "string") {
return doc.pageContent;
} else if (Array.isArray(doc.pageContent)) {
return doc.pageContent.join("\n");
}
});
}
const VECTOR_STORE_PATH = "Documents.index";
export const run = async (params) => {
console.log("Loading docs...")
const docs = await loader.load();
console.log('Processing...')
const model = new OpenAI({
openAIApiKey: process.env.OPENAI_API_KEY,
modelName: "gpt-3.5-turbo",
maxTokens: 2500,
temperature: 0.1,
maxConcurrency: 1,
});
let vectorStore;
try{
console.log("Checking for local vectorStore");
const embeddings = new OpenAIEmbeddings(process.env.OPENAI_API_KEY);
vectorStore = await HNSWLib.load(VECTOR_STORE_PATH, embeddings);
}catch(e){
console.log('Creating new vector store...')
const textSplitter = new RecursiveCharacterTextSplitter({
chunkSize: 3000,
});
const normalizedDocs = normalizeDocuments(docs);
const splitDocs = await textSplitter.createDocuments(normalizedDocs);
console.log('splitdos : ',splitDocs)
// 8. Generate the vector store from the documents
vectorStore = await HNSWLib.fromDocuments(
splitDocs,
new OpenAIEmbeddings()
);
await vectorStore.save(VECTOR_STORE_PATH);
console.log("Vector store created.")
}
console.log("Creating retrieval chain...")
const chain = RetrievalQAChain.fromLLM(
model,
vectorStore.asRetriever(),
{
verbose: true,
}
)
//const promptStart = `rewrite all requirements using the following guidelines:`;
const promptStart = ``;
//const reWrite = `Rewrite the Requirement Text so that is does comply.`;
const reWrite = ``;
const prompts = [
`R3: Requirements should be consistent with each other and should not contain contradictions or conflicts. Report the Id\'s for those that do not comply. Rewrite the Requirement Text so that is does comply with R3 `,
];
prompts.forEach(async prompt => {
//console.log("Querying chain...")
const res = await chain.call({
context: "You assist a Systems Engineer. Be as concise as possible.",
query: prompt,
})
console.log('usage:', res.tokenUsage);
console.log(prompt);
console.log('Reply: ',res, '\n')
});
}
run(process.argv.slice(2))
```
### Expected behavior
I expect the res object to return the token usage | Langchain js - can't get token usage | https://api.github.com/repos/langchain-ai/langchain/issues/12279/comments | 4 | 2023-10-25T15:54:36Z | 2024-02-10T16:10:27Z | https://github.com/langchain-ai/langchain/issues/12279 | 1,961,749,463 | 12,279 |
[
"hwchase17",
"langchain"
]
| ### System Info
System:
LangChain 0.0.321
Python 3.10
I'm trying to build a MultiRetrievalQAChain using only Llama2 chat models served by vLLM (no OpenAI). For that end I have created a ConversationChain that acts as the default chain for the MultiRetrievalQAChain. I have customized the prompts for both chains to meet LLama 2 Chat format requirements. It looks like the routing chain works properly but I'm getting the following exception:
```
[chain/error] [1:chain:MultiRetrievalQAChain > 2:chain:LLMRouterChain] [7.00s] Chain run errored with error:
"OutputParserException(\"Parsing text ... raised following error:\\nGot invalid JSON object. Error: Expecting value: line 1 column 1 (char 0)\")"
```
Here the routing and generation trace:
```
[32;1m[1;3m[chain/start][0m [1m[1:chain:MultiRetrievalQAChain] Entering Chain run with input:
[0m{
"input": "What is prompt injection?"
}
[32;1m[1;3m[chain/start][0m [1m[1:chain:MultiRetrievalQAChain > 2:chain:LLMRouterChain] Entering Chain run with input:
[0m{
"input": "What is prompt injection?"
}
[32;1m[1;3m[chain/start][0m [1m[1:chain:MultiRetrievalQAChain > 2:chain:LLMRouterChain > 3:chain:LLMChain] Entering Chain run with input:
[0m{
"input": "What is prompt injection?"
}
[32;1m[1;3m[llm/start][0m [1m[1:chain:MultiRetrievalQAChain > 2:chain:LLMRouterChain > 3:chain:LLMChain > 4:llm:VLLMOpenAI] Entering LLM run with input:
[0m{
"prompts": [
"Given a query to a question answering system select the system best suited for the input. You will be given the names of the available systems and a description of what questions the system is best suited for. You may also revise the original input if you think that revising it will ultimately lead to a better response.\n\n<< FORMATTING >>\nReturn a markdown code snippet with a JSON object formatted to look like:\n```json\n{\n \"destination\": string \\ name of the question answering system to use or \"DEFAULT\"\n \"next_inputs\": string \\ a potentially modified version of the original input\n}\n```\n\nREMEMBER: \"destination\" MUST be one of the candidate prompt names specified below OR it can be \"DEFAULT\" if the input is not well suited for any of the candidate prompts.\nREMEMBER: \"next_inputs\" can just be the original input if you don't think any modifications are needed.\n\n<< CANDIDATE PROMPTS >>\nNIST AI Risk Management Framework: Guidelines provided by the NIST for organizations and people to manage risks associated with the use of AI. \n The NIST risk management framework consists of four cyclical tasks: Govern, Map, Measure and Manage.\nOWASP Top 10 for LLM Applications: Provides practical security guidance to navigate the complex and evolving terrain of LLM security focusing on the top 10 vulnerabilities of LLM applications. These are 1) Prompt Injection, 2) Insecure Output Handling, 3) Training Data Poisoning, 4) Model Denial of Service, 5) Supply Chain Vulnerabilities, 6) Sensitive Information Disclosure, 7) Insecure Plugin Design\n 8) Excessive Agency, 9) Overreliance, and 10) Model Theft\n \nThreat Modeling LLM Applications: A high-level example from Gavin Klondike on how to build a threat model for LLM applications utilizing the STRIDE modeling framework based on trust boundaries.\n\n<< INPUT >>\nWhat is prompt injection?\n\n<< OUTPUT >>"
]
}
/home/vmuser/miniconda3/envs/llm-env2/lib/python3.10/site-packages/langchain/chains/llm.py:280: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
warnings.warn(
[36;1m[1;3m[llm/end][0m [1m[1:chain:MultiRetrievalQAChain > 2:chain:LLMRouterChain > 3:chain:LLMChain > 4:llm:VLLMOpenAI] [7.00s] Exiting LLM run with output:
[0m{
"generations": [
[
{
"text": "Prompt injection is a security vulnerability in LLM applications where an attacker can manipulate the input prompts to an LLM model to elicit a specific response from the model. This can be done by exploiting the lack of proper input validation and sanitization in the model's architecture. \n",
"generation_info": {
"finish_reason": "length",
"logprobs": null
}
}
]
],
"llm_output": {
"token_usage": {
"prompt_tokens": 488,
"completion_tokens": 512,
"total_tokens": 1000
},
"model_name": "meta-llama/Llama-2-7b-chat-hf"
},
"run": null
}
[36;1m[1;3m[chain/end][0m [1m[1:chain:MultiRetrievalQAChain > 2:chain:LLMRouterChain > 3:chain:LLMChain] [7.00s] Exiting Chain run with output:
[0m{
"text": "Prompt injection is a security vulnerability in LLM applications where an attacker can manipulate the input prompts to an LLM model to elicit a specific response from the model. This can be done by exploiting the lack of proper input validation and sanitization in the model's architecture.\n"
}
[31;1m[1;3m[chain/error][0m [1m[1:chain:MultiRetrievalQAChain > 2:chain:LLMRouterChain] [7.00s] Chain run errored with error:
[0m"OutputParserException(\"Parsing text\\nPrompt injection is a security vulnerability in LLM applications where an attacker can manipulate the input prompts to an LLM model to elicit a specific response from the model. This can be done by exploiting the lack of proper input validation and sanitization in the model's architecture.\\\n\\n raised following error:\\nGot invalid JSON object. Error: Expecting value: line 1 column 1 (char 0)\")"
[0;31m---------------------------------------------------------------------------[0m
[0;31mJSONDecodeError[0m Traceback (most recent call last)
File [0;32m~/miniconda3/envs/llm-env2/lib/python3.10/site-packages/langchain/output_parsers/json.py:163[0m, in [0;36mparse_and_check_json_markdown[0;34m(text, expected_keys)[0m
[1;32m 162[0m [38;5;28;01mtry[39;00m:
[0;32m--> 163[0m json_obj [38;5;241m=[39m [43mparse_json_markdown[49m[43m([49m[43mtext[49m[43m)[49m
[1;32m 164[0m [38;5;28;01mexcept[39;00m json[38;5;241m.[39mJSONDecodeError [38;5;28;01mas[39;00m e:
raised following error:
Got invalid JSON object. Error: Expecting value: line 1 column 1 (char 0)
```
The issue seems to be related to a warning that I'm also getting: `llm.py:280: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.`
Unfortunately, it is unclear how one is supposed to implement an output parser for the LLM (ConversationChain) chain that meets the expectations of the MultiRetrievalQAChain. The documentation for these chains relies heavily on OpenAI models to do the formatting, but there's not much guidance on how to do it with other LLMs.
Any guidance on how to move forward would be appreciated.
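For comparison, the documented router wiring attaches a `RouterOutputParser` to the routing prompt and builds an `LLMRouterChain` from it — roughly as below, reusing variable names from the script that follows (this does not by itself force Llama 2 to emit valid JSON; it only shows where a custom parser would plug in):

```python
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.prompts import PromptTemplate

destinations_str = "\n".join(
    f"{r['name']}: {r['description']}" for r in retrievers_info
)

router_prompt = PromptTemplate(
    template=routing_prompt_template,
    input_variables=["input"],
    partial_variables={"destinations": destinations_str},
    output_parser=RouterOutputParser(next_inputs_inner_key="query"),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)
```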
Here is my code:
```
import torch
import langchain
langchain.debug = True
from langchain.llms import VLLMOpenAI
from langchain.document_loaders import PyPDFLoader
from langchain.prompts import PromptTemplate
# Import for retrieval-augmented generation RAG
from langchain import hub
from langchain.chains import ConversationChain, MultiRetrievalQAChain
from langchain.vectorstores import Chroma
from langchain.text_splitter import SentenceTransformersTokenTextSplitter
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
#%%
# URL for the vLLM service
INFERENCE_SRV_URL = "http://localhost:8000/v1"
def setup_chat_llm(vllm_url, max_tokens=512, temperature=0):
"""
    Initializes the vLLM service object.
:param vllm_url: vLLM service URL
:param max_tokens: Max number of tokens to get generated by the LLM
:param temperature: Temperature of the generation process
:return: The vLLM service object
"""
chat = VLLMOpenAI(
model_name="meta-llama/Llama-2-7b-chat-hf",
openai_api_key="EMPTY",
openai_api_base=vllm_url,
temperature=temperature,
max_tokens=max_tokens,
)
return chat
#%%
# Initialize LLM service
llm = setup_chat_llm(vllm_url=INFERENCE_SRV_URL)
#%%
%%time
# Set up the embedding encoder (Sentence Transformers) and vector store
model_name = "all-mpnet-base-v2"
model_kwargs = {'device': 'cuda' if torch.cuda.is_available() else 'cpu'}
encode_kwargs = {'normalize_embeddings': False}
embeddings = SentenceTransformerEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
# Set up the document splitter
text_splitter = SentenceTransformersTokenTextSplitter(chunk_size=500, chunk_overlap=0)
# Load PDF documents
loader = PyPDFLoader(file_path="../data/AI_RMF_Playbook.pdf")
rmf_splits = loader.load_and_split()
rmf_retriever = Chroma.from_documents(documents=rmf_splits, embedding=embeddings)
loader = PyPDFLoader(file_path="../data/OWASP-Top-10-for-LLM-Applications-v101.pdf")
owasp_splits = loader.load_and_split()
owasp_retriever = Chroma.from_documents(documents=owasp_splits, embedding=embeddings)
loader = PyPDFLoader(file_path="../data/Threat Modeling LLM Applications - AI Village.pdf")
ai_village_splits = loader.load_and_split()
ai_village_retriever = Chroma.from_documents(documents=ai_village_splits, embedding=embeddings)
#%%
retrievers_info = [
{
"name": "NIST AI Risk Management Framework",
"description": """Guidelines provided by the NIST for organizations and people to manage risks associated with the use of AI.
The NIST risk management framework consists of four cyclical tasks: Govern, Map, Measure and Manage.""",
"retriever": rmf_retriever.as_retriever()
},
{
"name": "OWASP Top 10 for LLM Applications",
"description": """Provides practical security guidance to navigate the complex and evolving terrain of LLM security focusing on the top 10 vulnerabilities of LLM applications. These are 1) Prompt Injection, 2) Insecure Output Handling, 3) Training Data Poisoning, 4) Model Denial of Service, 5) Supply Chain Vulnerabilities, 6) Sensitive Information Disclosure, 7) Insecure Plugin Design
8) Excessive Agency, 9) Overreliance, and 10) Model Theft
""",
"retriever": owasp_retriever.as_retriever()
},
{
"name": "Threat Modeling LLM Applications",
"description": "A high-level example from Gavin Klondike on how to build a threat model for LLM applications utilizing the STRIDE modeling framework based on trust boundaries.",
"retriever": ai_village_retriever.as_retriever()
}
]
#%%
prompt_template = (
""" [INST]<<SYS>> You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.<</SYS>>
Question: {query}
Context: {history}
Answer: [/INST]
""")
prompt = PromptTemplate(template=prompt_template, input_variables=['history', 'query'])
routing_prompt_template = (
"""
[INST]<<SYS>> Given a query to a question answering system select the system best suited for the input. You will be given the names of the available systems and a description of what questions the system is best suited for. You could also revise the original input if you think that revising it will ultimately lead to a better response.
Return a markdown code snippet with a JSON object formatted as follows:
\```json
\{
"destination": "destination_key_value"
"next_inputs": "revised_or_original_input"
\}
\```
WHERE:
- The "destination" key value MUST be a text string matching one of the candidate prompt names specified below OR it can be "DEFAULT" if the input is not well suited for any of the candidate prompts.
- The "next_inputs" key value can just be the original input string if you don't think any modifications are needed.
<</SYS>>
<< CANDIDATE PROMPTS >>
{destinations}
<< INPUT >>
{input}
<< OUTPUT >>
[/INST]
""")
routing_prompt = PromptTemplate(template=routing_prompt_template, input_variables=['destinations', 'input'])
#%%
default_chain = ConversationChain(
llm=llm, # Your own LLM
prompt=prompt, # Your own prompt
input_key="query",
output_key="result",
verbose=True,
)
multi_retriever_chain = MultiRetrievalQAChain.from_retrievers(
llm=llm,
retriever_infos=retrievers_info,
default_chain=default_chain,
default_prompt=routing_prompt,
default_retriever=rmf_retriever.as_retriever(),
verbose=True
)
#%%
question = "What is prompt injection?"
result = multi_retriever_chain.run(question)
#%%
result
#%%
#%%
from langchain.chains.router.multi_retrieval_prompt import (
MULTI_RETRIEVAL_ROUTER_TEMPLATE,
)
MULTI_RETRIEVAL_ROUTER_TEMPLATE
#%%
print(MULTI_RETRIEVAL_ROUTER_TEMPLATE)
#%%
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I included the entire script I'm using
### Expected behavior
Proper query/question routing to the retriever better suited to provide the content required by the LLM to answer a question. | LLM (chain) output parsing in MultiRetrievalQAChain: OutputParserException Got invalid JSON object | https://api.github.com/repos/langchain-ai/langchain/issues/12276/comments | 2 | 2023-10-25T15:07:03Z | 2024-02-06T16:14:01Z | https://github.com/langchain-ai/langchain/issues/12276 | 1,961,655,309 | 12,276 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Unlike the other PDF loaders, I've found that `PDFMinerLoader` doesn't provide a `page` entry in the document metadata when loading. Digging deeper, `PDFMinerParser` only emits per-page metadata when `extract_images=True`. I wonder if this was intentional.
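A small illustration of the observed difference (the file path is hypothetical):

```python
from langchain.document_loaders import PDFMinerLoader, PyPDFLoader

docs = PDFMinerLoader("example.pdf").load()   # extract_images defaults to False
print(docs[0].metadata)                        # only {'source': ...} — no 'page' key

docs = PyPDFLoader("example.pdf").load()
print(docs[0].metadata)                        # {'source': ..., 'page': 0}
```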
### Suggestion:
#12277 | Issue: `PDFMinerLoader` not gives `page` metadata when loading with `extract_images=False` - is it intended? | https://api.github.com/repos/langchain-ai/langchain/issues/12273/comments | 2 | 2023-10-25T14:48:30Z | 2023-11-02T05:03:37Z | https://github.com/langchain-ai/langchain/issues/12273 | 1,961,611,644 | 12,273 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version: 0.0.320
python version: 3.10
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
User-based memorization with ConversationBufferMemory and CombinedMemory. Attaching my code below; please let me know where I am going wrong:

```python
session_id = "user2"

vect_memory = VectorStoreRetrieverMemory(retriever=retriever)

_DEFAULT_TEMPLATE = """
You are having a conversation with a human. Please interact like a human being and give an accurate answer.
Relevant pieces of the previous conversation:
{history}
(You do not need to use these pieces of information if not relevant)
Current conversation:
{session_id}
User: {input}
Chatbot:
"""

PROMPT = PromptTemplate(
    input_variables=["history", "session_id", "input"],
    template=_DEFAULT_TEMPLATE
)
print(PROMPT)

message_history = RedisChatMessageHistory(url=redis_url, session_id=session_id)

conv_memory = ConversationBufferMemory(
    memory_key="session_id",
    chat_memory=message_history,
    input_key="input",
    prompt=PROMPT,
    return_messages=True
)

memory = CombinedMemory(memories=[conv_memory, vect_memory])

llm = OpenAI(temperature=0, model_name='gpt-3.5-turbo')  # Can be any valid LLM

conversation_with_summary = ConversationChain(
    llm=llm,
    prompt=PROMPT,
    memory=memory,
    verbose=True,
)
conversation_with_summary(input="hELLO")
```

It is not working on a per-session basis.

================= I am getting this error =================

```
Traceback (most recent call last):
  File "C:\Users\ankit\PycharmProjects\spendidchatbot\chatbot.py", line 306, in <module>
    conversation_with_summary = ConversationChain(
  File "C:\Users\ankit\PycharmProjects\spendidchatbot\env_spen\lib\site-packages\langchain\load\serializable.py", line 97, in __init__
    super().__init__(**kwargs)
  File "C:\Users\ankit\PycharmProjects\spendidchatbot\env_spen\lib\site-packages\pydantic\v1\main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ConversationChain
__root__
  Got unexpected prompt input variables. The prompt expects ['history', 'input', 'session_id'], but got ['user2', 'history'] as inputs from memory, and input as the normal input key. (type=value_error)
```
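For reference, a minimal sketch of how the keys have to line up (names are illustrative): each memory's `memory_key` must match one of the prompt's `input_variables`, and the remaining variable must be the chain's input key. The per-user scoping then comes from the `session_id` passed to `RedisChatMessageHistory`, not from a prompt variable.

```python
conv_memory = ConversationBufferMemory(
    memory_key="session_history",       # must appear in the prompt's input_variables
    chat_memory=message_history,        # RedisChatMessageHistory(..., session_id=user_id)
    input_key="input",
)
vect_memory = VectorStoreRetrieverMemory(retriever=retriever, memory_key="history")
memory = CombinedMemory(memories=[conv_memory, vect_memory])

PROMPT = PromptTemplate(
    input_variables=["history", "session_history", "input"],
    template=_DEFAULT_TEMPLATE.replace("{session_id}", "{session_history}"),
)
```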
### Expected behavior
The same script as under Reproduction above fails with:
Got unexpected prompt input variables. The prompt expects ['history', 'input', 'session_id'], but got ['user2', 'history'] as inputs from memory, and input as the normal input key. (type=value_error) | User based memorization with ConversationBufferMemory and CombinedMemory | https://api.github.com/repos/langchain-ai/langchain/issues/12272/comments | 2 | 2023-10-25T14:45:46Z | 2024-02-06T16:14:06Z | https://github.com/langchain-ai/langchain/issues/12272 | 1,961,606,028 | 12,272 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain == 0.0.320
Following step by step procedures used in actual codebase followed by error message
```
llm = ChatOpenAI(openai_api_key=openai_api_key, model_name=engine)
```
```
db = Chroma(
collection_name=f'collection_1233',
embedding_function=OpenAIEmbeddings(openai_api_key=openai_api_key),
persist_directory=os.path.abspath('storage/vectors/')
)
```
```
chain = ConversationalRetrievalChain.from_llm(
    retriever=db.as_retriever(),
    llm=self.llm,
    memory=SQLiteEntityStore(session_id=f'session_1233', db_file='storage/entities.db'),
    condense_question_prompt=PromptTemplate.from_template(template=template),
    verbose=True
)
```
```
response = chain({"question": prompt})
```
The above step throws error as below
```
CRITICAL django Missing some input keys: {'chat_history'}
```
After providing ```chat_history```, the error is as follwed
```
CRITICAL django One input key expected got ['question', 'chat_history']
```
So, what to do now? I need the following entities to be used strictly:
- ChatOpenAI
- ConversationalRetrievalChain + Chroma
- ConversationEntityMemory + SQLiteEntityStore
Please help asap...
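For context, `SQLiteEntityStore` is an entity *store*, not a memory class; it is normally wrapped by `ConversationEntityMemory`, roughly like this (a sketch only — not a verified combination with `ConversationalRetrievalChain`):

```python
from langchain.memory import ConversationEntityMemory
from langchain.memory.entity import SQLiteEntityStore

entity_store = SQLiteEntityStore(session_id='session_1233', db_file='storage/entities.db')
entity_memory = ConversationEntityMemory(
    llm=llm,
    entity_store=entity_store,
    chat_history_key="chat_history",   # key the retrieval chain expects
)
```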
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Same step-by-step code and error messages as listed under System Info above.
### Expected behavior
Conversations should be working with all chat history saved in mysqli db as well as returned to the user in the response. | ConversationalRetrievalChain doesn't work with ConversationEntityMemory + SQLiteEntityStore | https://api.github.com/repos/langchain-ai/langchain/issues/12266/comments | 3 | 2023-10-25T12:34:48Z | 2023-10-25T14:50:34Z | https://github.com/langchain-ai/langchain/issues/12266 | 1,961,319,260 | 12,266 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Langchain is currently using Pydantic V1, and I believe it would be good to upgrade this to V2
### Motivation
V2 is far more performant than V1 because of the python-rust binding
### Your contribution
I can create a PR here. | Upgrade to Pydantic V2 | https://api.github.com/repos/langchain-ai/langchain/issues/12265/comments | 11 | 2023-10-25T12:28:55Z | 2024-02-15T16:08:15Z | https://github.com/langchain-ai/langchain/issues/12265 | 1,961,308,651 | 12,265 |
[
"hwchase17",
"langchain"
]
| ### System Info
I've been using the conversation chain to retrieve answers from GPT-3.5 and Vertex AI chat-bison. For the user's memory, I've been passing the session memory appended to the conversation chain. But once the token limit is exceeded, I get the following error from the LLM:

```
[chain/error] [1:chain:LLMChain > 2:llm:ChatOpenAI] [1.69s] Llm run errored with error:
"InvalidRequestError(message=\"This model's maximum context length is 4097 tokens. However, your messages resulted in 9961 tokens. Please reduce the length of the messages.\", param='messages', code='context_length_exceeded', http_status=400, request_id=None)"
[chain/error] [1:chain:LLMChain] [1.72s] Chain run errored with error:
"InvalidRequestError(message=\"This model's maximum context length is 4097 tokens. However, your messages resulted in 9961 tokens. Please reduce the length of the messages.\", param='messages', code='context_length_exceeded', http_status=400, request_id=None)"
```
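For reference, one hedged way to keep the prompt under the model limit is to cap or summarize the history before it reaches the LLM, e.g. with `ConversationSummaryBufferMemory` (the token limit below is an arbitrary example):

```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationSummaryBufferMemory

memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=2000)  # summarizes older turns
chain = ConversationChain(llm=llm, memory=memory)
```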
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Once the appended history pushes the prompt past 4097 tokens, this error appears.
### Expected behavior
The chat should work normally without any internal exception. Is there any langchain support for this issue? | Token limitation due to model's maximum context length | https://api.github.com/repos/langchain-ai/langchain/issues/12264/comments | 2 | 2023-10-25T11:34:43Z | 2024-02-06T16:14:11Z | https://github.com/langchain-ai/langchain/issues/12264 | 1,961,177,348 | 12,264 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
`ConversationalRetrievalChain` returns sources to questions without relevant documents. The returned sources are not related to the question or the answer.
Example with relevant document:
```
chain = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(),
question_generator=question_generator,
combine_docs_chain=doc_chain,
)
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
"""
The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, who is one of the nation's top legal minds ...
SOURCES: /content/state_of_the_union.txt
"""
```
Example without relevant document:
```
query = "How are you doing?"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
"""
I'm doing well, thank you.
SOURCES: /content/state_of_the_union.txt
"""
```
Another example without relevant document:
```
query = "Who is Elon Musk?"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
"""
I don't know.
SOURCES: /content/state_of_the_union.txt
"""
```
### Suggestion:
Sources should not be returned for questions without relevant documents. | ConversationalRetrievalChain returns sources to questions without context | https://api.github.com/repos/langchain-ai/langchain/issues/12263/comments | 1 | 2023-10-25T11:25:40Z | 2024-07-04T12:48:23Z | https://github.com/langchain-ai/langchain/issues/12263 | 1,961,161,745 | 12,263 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am getting the below error while trying to use Azure Cosmos DB as a vector store.
**_No module named 'langchain.vectorstores.azure_cosmos_db_vector_search'_**
### Suggestion:
_No response_ | Issue: No module named 'langchain.vectorstores.azure_cosmos_db_vector_search' | https://api.github.com/repos/langchain-ai/langchain/issues/12262/comments | 2 | 2023-10-25T10:51:32Z | 2024-02-08T16:14:20Z | https://github.com/langchain-ai/langchain/issues/12262 | 1,961,103,330 | 12,262 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.11.4
LangChain 0.0.321
Platform info (WSL2):
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.6 LTS"
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is my code (based on https://python.langchain.com/docs/use_cases/question_answering/conversational_retrieval_agents and modified to use AzureOpenAI and OpenSearch):
```python
import os
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chat_models.azure_openai import AzureChatOpenAI
from langchain.vectorstores.opensearch_vector_search import OpenSearchVectorSearch
from langchain.memory import ConversationTokenBufferMemory
from langchain.prompts import (
ChatPromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.chains import ConversationalRetrievalChain
# Load environment variables
host = os.environ['HOST']
auth = os.environ['AUTH_PASS']
index_uk = os.environ['INDEX_NAME_UK']
opensearch_url = os.environ['OPENSEARCH_URL']
embedding_model = os.environ['EMBEDDING_MODEL']
model_name = os.environ['MODEL_NAME']
openai_api_base = os.environ['OPENAI_API_BASE']
openai_api_key = os.environ['OPENAI_API_KEY']
openai_api_type = os.environ['OPENAI_API_TYPE']
openai_api_version = os.environ['OPENAI_API_VERSION']
# Define Azure OpenAI component
llm = AzureChatOpenAI(
openai_api_key = openai_api_key,
openai_api_base = openai_api_base,
openai_api_type = openai_api_type,
openai_api_version = openai_api_version,
deployment_name = model_name,
temperature=0
)
# Define Memory component for chat history
memory = ConversationTokenBufferMemory(llm=llm,memory_key="chat_history",return_messages=True, max_token_limit=1000)
# Build a Retriever
embeddings = OpenAIEmbeddings(deployment=embedding_model, chunk_size=1)
docsearch = OpenSearchVectorSearch(index_name=index_uk, embedding_function=embeddings,opensearch_url=opensearch_url, http_auth=('admin', auth))
doc_retriever = docsearch.as_retriever()
# Build a retrieval tool
from langchain.agents.agent_toolkits import create_retriever_tool
tool = create_retriever_tool(
doc_retriever,
"search_hr_documents",
"Searches and returns documents regarding HR questions."
)
tools = [tool]
# Build an Agent Constructor
from langchain.agents.agent_toolkits import create_conversational_retrieval_agent
agent_executor = create_conversational_retrieval_agent(llm, tools, verbose=True)
result = agent_executor({"input": "hi, im bob"})
```
When I execute it, it generates the log error below:
```
InvalidRequestError Traceback (most recent call last)
[c:\k8s-developer\git\lambda-hr-docQA\conv_retrieval_tool.ipynb](file:///C:/k8s-developer/git/lambda-hr-docQA/conv_retrieval_tool.ipynb) Cell 3 line 6
[58](vscode-notebook-cell:/c%3A/k8s-developer/git/lambda-hr-docQA/conv_retrieval_tool.ipynb#W2sZmlsZQ%3D%3D?line=57) from langchain.agents.agent_toolkits import create_conversational_retrieval_agent
[59](vscode-notebook-cell:/c%3A/k8s-developer/git/lambda-hr-docQA/conv_retrieval_tool.ipynb#W2sZmlsZQ%3D%3D?line=58) agent_executor = create_conversational_retrieval_agent(llm, tools, verbose=True)
---> [61](vscode-notebook-cell:/c%3A/k8s-developer/git/lambda-hr-docQA/conv_retrieval_tool.ipynb#W2sZmlsZQ%3D%3D?line=60) result = agent_executor({"input": "hi, im bob"})
File [c:\Users\fezzef\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:310](file:///C:/Users/fezzef/AppData/Local/Programs/Python/Python311/Lib/site-packages/langchain/chains/base.py:310), in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
308 except BaseException as e:
309 run_manager.on_chain_error(e)
--> 310 raise e
311 run_manager.on_chain_end(outputs)
312 final_outputs: Dict[str, Any] = self.prep_outputs(
313 inputs, outputs, return_only_outputs
314 )
File [c:\Users\fezzef\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\openai_functions_agent\base.py:104](file:///C:/Users/fezzef/AppData/Local/Programs/Python/Python311/Lib/site-packages/langchain/agents/openai_functions_agent/base.py:104), in OpenAIFunctionsAgent.plan(self, intermediate_steps, callbacks, with_functions, **kwargs)
102 messages = prompt.to_messages()
103 if with_functions:
--> 104 predicted_message = self.llm.predict_messages(
105 messages,
106 functions=self.functions,
107 callbacks=callbacks,
108 )
109 else:
110 predicted_message = self.llm.predict_messages(
111 messages,
112 callbacks=callbacks,
113 )
InvalidRequestError: Unrecognized request argument supplied: functions
```
I suspect the `OpenAIFunctionsAgent` is not compatible with AzureOpenAI. I'm not sure. Please check and let me know.
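One hedged thing to check: function calling on Azure OpenAI requires both a sufficiently recent `openai_api_version` and a model deployment that supports functions (e.g. a 0613 gpt-35-turbo/gpt-4 deployment); older API versions reject the `functions` argument in exactly this way. A sketch of the relevant settings (the version string is an assumption to verify against your Azure resource):

```python
llm = AzureChatOpenAI(
    openai_api_base=openai_api_base,
    openai_api_key=openai_api_key,
    openai_api_type="azure",
    openai_api_version="2023-07-01-preview",   # assumed minimum for function calling
    deployment_name=model_name,
    temperature=0,
)
```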
### Expected behavior
It should be getting a response but LangChain throws an error | InvalidRequestError: Unrecognized request argument supplied: functions | https://api.github.com/repos/langchain-ai/langchain/issues/12260/comments | 2 | 2023-10-25T09:45:12Z | 2023-10-25T14:30:45Z | https://github.com/langchain-ai/langchain/issues/12260 | 1,960,983,120 | 12,260 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am currently using a chain with the following components:
FewshotPrompt (with SemanticSimilarityExampleSelector) -> LLM -> OutputParser
By default, this will give me the parsed text output of my LLM.
For my purposes, I would need the output of this chain to include all intermediate steps (preferably in a dict).
For this chain that would be:
1. The samples selected by the Example Selector
2. The generated Fewshot Prompt
3. The raw LLM output
4. The parsed LLM output
From the documentation, it is not apparent to me how this could be achieved with the current state of the library.
(The documentation also does not outline details about the usage of Callbacks in the LCEL language)
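A minimal sketch of one workaround, assuming `few_shot_prompt`, `llm`, and `parser` are the components above and the chain is invoked with a dict of input variables:
```python
from langchain.schema.runnable import RunnableLambda

def run_with_intermediates(inputs: dict) -> dict:
    # expose every intermediate step instead of only the parsed output
    examples = few_shot_prompt.example_selector.select_examples(inputs)
    prompt_value = few_shot_prompt.format_prompt(**inputs)
    raw = llm.invoke(prompt_value)
    text = raw.content if hasattr(raw, "content") else raw
    return {
        "selected_examples": examples,
        "prompt": prompt_value.to_string(),
        "raw_output": text,
        "parsed_output": parser.parse(text),
    }

chain = RunnableLambda(run_with_intermediates)
result = chain.invoke({"question": "..."})
```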
### Suggestion:
If this is currently implemented then add a simple example to the documentation. If this is not the case then some type of callback that allows collecting this data from intermediate steps should be added. | Returning detailed results of intermediate components | https://api.github.com/repos/langchain-ai/langchain/issues/12257/comments | 4 | 2023-10-25T09:11:34Z | 2024-05-13T16:08:37Z | https://github.com/langchain-ai/langchain/issues/12257 | 1,960,921,158 | 12,257 |
[
"hwchase17",
"langchain"
]
| ### System Info
How can I implement user-based memory (a separate conversation memory per user)? Could you please provide a link for reference?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
How can I implement user-based memory (a separate conversation memory per user)? Could you please provide a link for reference?
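A minimal sketch of one way to do this, assuming the application knows the current user's id and an `llm` is already configured: keep one memory object per user and build the chain with that user's memory on each request.
```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

memories = {}  # user_id -> ConversationBufferMemory

def get_chain_for_user(user_id: str, llm) -> ConversationChain:
    # each user gets (and keeps) their own conversation history
    memory = memories.setdefault(user_id, ConversationBufferMemory())
    return ConversationChain(llm=llm, memory=memory)

# usage: get_chain_for_user("alice", llm).run("Hello!")
```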
### Expected behavior
How can I implement user-based memory (a separate conversation memory per user)? Could you please provide a link for reference? | User based memorization | https://api.github.com/repos/langchain-ai/langchain/issues/12254/comments | 4 | 2023-10-25T07:15:18Z | 2024-02-10T16:10:37Z | https://github.com/langchain-ai/langchain/issues/12254 | 1,960,713,093 | 12,254 |
[
"hwchase17",
"langchain"
]
| ### System Info
1
I have been working with LangChain for some time, and after updating my LangChain module to version 0.0.295 or higher, my Vertex AI models throw an error in my application. The error is
File "E:\Office\2023\AI-Platform\kaya-aia-general-bot\venv\Lib\site-packages\langchain\llms\vertexai.py", line 301, in _generate
generations.append([_response_to_generation(r) for r in res.candidates])
^^^^^^^^^^^^^^
AttributeError: 'TextGenerationResponse' object has no attribute 'candidates'
I was expecting my previous implementation to work seamlessly. This is how I initialized the chain with the vertex model.
case 'public_general_palm':
llm = ChatVertexAI(model_name="chat-bison", temperature=0)
chain = self.__initialize_basic_chain(llm, self.chain_memory)
return chain(request.msg, return_only_outputs=True)
This is the __initialize_basic_chain method.
def __initialize_basic_chain(self, llm, memory):
return ConversationChain(llm=llm, memory=memory)
However, I have been unable to find a solution for this error. Kindly assist in resolving this issue to ensure the smooth operation of the application.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Update langchain module 0.315 and higher.
1. Initialize llm with the ChatVertexAI(model_name="chat-bison", temperature=0).
2. Pass into a basic chain or an agent with the input messages
### Expected behavior
Throws an internal module error called
File "E:\Office\2023\AI-Platform\kaya-aia-general-bot\venv\Lib\site-packages\langchain\llms\vertexai.py", line 301, in _generate
generations.append([_response_to_generation(r) for r in res.candidates])
^^^^^^^^^^^^^^
AttributeError: 'TextGenerationResponse' object has no attribute 'candidates' | Vertex AI Models Throws AttributeError: After Langchain 0.0295 Version | https://api.github.com/repos/langchain-ai/langchain/issues/12250/comments | 2 | 2023-10-25T04:55:22Z | 2024-02-09T16:12:48Z | https://github.com/langchain-ai/langchain/issues/12250 | 1,960,535,933 | 12,250 |
[
"hwchase17",
"langchain"
]
| ### System Info
Under doctran_text_extract.py, in the DoctranPropertyExtractor class, the methods contain errors.
The first method, transform_documents(), does not perform any operations and simply raises a NotImplementedError. This method should be working and implemented, with the following code:
```
def transform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Extracts properties from text documents using doctran."""
try:
from doctran import Doctran, ExtractProperty
doctran = Doctran(
openai_api_key=self.openai_api_key, openai_model=self.openai_api_model
)
except ImportError:
raise ImportError(
"Install doctran to use this parser. (pip install doctran)"
)
properties = [ExtractProperty(**property) for property in self.properties]
for d in documents:
doctran_doc = (
doctran.parse(content=d.page_content)
.extract(properties=properties)
.execute()
)
d.metadata["extracted_properties"] = doctran_doc.extracted_properties
return documents
```
For the second method atransform_documents(), doctran currently does not support async so this should raise a NotImplementedError instead.
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
https://python.langchain.com/docs/integrations/document_transformers/doctran_extract_properties
The official example code does not reproduce the behavior shown in the documentation.
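For reference, a minimal usage sketch (the property definition is made up, and `OPENAI_API_KEY` is assumed to be set) that exercises transform_documents() once the fix above is applied:
```python
from langchain.document_transformers import DoctranPropertyExtractor
from langchain.schema import Document

properties = [
    {"name": "category", "description": "The document's topic category.",
     "type": "string", "required": True}
]
extractor = DoctranPropertyExtractor(properties=properties)
docs = extractor.transform_documents([Document(page_content="Quarterly sales grew 12%.")])
print(docs[0].metadata["extracted_properties"])
```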
### Expected behavior
The official code does not reproduce the displayed behavior. After modifying the source code as described above, calling the transform_documents() class method produces the displayed outcome. | Integration Doctran: DoctranPropertyExtractor class methods transform_documents() and atransform_documents() error | https://api.github.com/repos/langchain-ai/langchain/issues/12249/comments | 2 | 2023-10-25T03:58:43Z | 2024-02-06T16:14:31Z | https://github.com/langchain-ai/langchain/issues/12249 | 1,960,484,085 | 12,249 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Currently the `TelegramChatAPILoader` returns the Date, Sender, and Text of each message. Adding the URL of each message would be very helpful for a Natural Language Search Engine over a large Channel.
SOURCE: `langchain.document_loaders.telegram`
### Motivation
We are building a Natural Language assistant for a large channel on Telegram. The users currently search by key words and hashtags. The Language model can only return a URL if it's contained in the text of the message. With the URL's we would be able to easily direct the user to the relevant information with a Natural Language query instead of the archaic methods of the days before Ai.
### Your contribution
I do not know Telethon; however, we can see that the URL could be pulled in the `fetch_data_from_telegram` function of the [langchain.document_loaders.telegram](https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/telegram.html#TelegramChatApiLoader.fetch_data_from_telegram) class.
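A rough sketch of what that could look like (untested against Telethon; `api_id`, `api_hash`, and `entity` are placeholders): for public channels the message URL can usually be rebuilt from the channel username and the message id.
```python
from telethon import TelegramClient

async def iter_message_urls(api_id, api_hash, entity):
    async with TelegramClient("session", api_id, api_hash) as client:
        async for message in client.iter_messages(entity):
            chat = await message.get_chat()
            username = getattr(chat, "username", None)
            # private chats/channels have no public t.me link
            yield f"https://t.me/{username}/{message.id}" if username else None
```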
We thank you for LangChain and wish we could spare the time to contribute more. We may have a pull request in the future regarding Telegram Loaders for media messages. It's a feature that we will have to incorporate as we get closer to a public release of what we're working on. | Add URL's to the output of TelegramChatAPILoader 🙏 | https://api.github.com/repos/langchain-ai/langchain/issues/12246/comments | 1 | 2023-10-25T02:22:45Z | 2024-02-06T16:14:36Z | https://github.com/langchain-ai/langchain/issues/12246 | 1,960,397,303 | 12,246 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain in /opt/conda/lib/python3.11/site-packages (0.0.322)
Python 3.11.6
installed on
image: jupyter/datascience-notebook
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
As per the documentation
```python
retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)
retriever.get_relevant_documents("What are some movies about dinosaurs")
```
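A minimal sketch of an alternative way to surface the generated structured query, using LangChain's global debug flag rather than the retriever's verbose flag:
```python
import langchain

langchain.debug = True  # prints every chain's inputs/outputs, including the
                        # structured query built by the internal query constructor
retriever.get_relevant_documents("What are some movies about dinosaurs")
```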
### Expected behavior
We would see verbose output such as
```
query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)
```
however no verbose output is printed | retriever.get_relevant_documents | https://api.github.com/repos/langchain-ai/langchain/issues/12244/comments | 4 | 2023-10-25T00:47:28Z | 2023-10-25T23:49:14Z | https://github.com/langchain-ai/langchain/issues/12244 | 1,960,309,220 | 12,244 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using LangChain in an AWS Lambda function to call an AWS OpenSearch Serverless vector index. I did not file this as a bug, as I suspect it is a usage problem, but it might be a bug.
I have built an OpenSearch vector index, using AWS OpenSearch Serverless, and am using it to perform vector queries against document chunks. This is all working well. I would now like to add the ability to filter these searches by document (there may be hundreds or thousands of vector entries per document). There is metadata included with each entry, which includes a DocumentId field, which is a string-ized guid. I thought it would be simple to filter against this metadata attribute, but I cannot get it working.
The call I am making is:
```
docs = vsearch.similarity_search_with_score(
theQuery,
index_name="vector-index2",
vector_field = "vector_field",
search_type="script_scoring",
pre_filter=filter_dict,
space_type="cosinesimil",
engine="faiss",
k = 3
)
```
Typical unfiltered output:
{"doc": {"page_content": "seat is installed ... facing", "metadata": {"DocumentId": "{8B5D5890-0000-C1E8-B0B6-678C8509665D}", "DocumentTitle": "2023-Toyota-RAV4-OM.pdf", "DocClassName": "Document"}}, "score": 1.8776696}]
For filter_dict, I have tried:
filter_dict = {'DocumentId':theDocId} <-- Gets an error
filter_dict = {'bool': {'must': {'term': {'DocumentId': theDocId} } } } <-- doesn’t find anything
filter_dict = {'bool': {'must': {'match': {'DocumentId': theDocId} } } } <-- doesn’t find anything
filter_dict = {'must': {'match': {'DocumentId': theDocId} } } <-- Gets an error
filter_dict = {'match': {'DocumentId': theDocId} } <-- doesn’t find anything
theDocId is a string set to an input parameter and looks like this: {8B5D5890-0000-C1E8-B0B6-678C8509665D}. Results are the same if I use a string constant (i.e. '{8B5D5890-0000-C1E8-B0B6-678C8509665D}')
Same results if I use search_type="approximate_search" in conjunction with boolean_filter=filter_dict
Any suggestions on how to get this to work?
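A sketch of a filter that may work here, assuming the documents were indexed by LangChain's OpenSearchVectorSearch, which nests metadata under a top-level `metadata` object in each document (so the field to filter on is `metadata.DocumentId`; whether a `.keyword` sub-field exists depends on the index mapping):
```python
filter_dict = {
    "bool": {
        "must": [
            {"match": {"metadata.DocumentId": theDocId}}
        ]
    }
}

docs = vsearch.similarity_search_with_score(
    theQuery,
    search_type="script_scoring",
    pre_filter=filter_dict,
    space_type="cosinesimil",
    k=3,
)
```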
### Suggestion:
_No response_ | Issue: Problems getting filtering to work with langchain, OpenSearchVectorSearch, similarity_search_with_score | https://api.github.com/repos/langchain-ai/langchain/issues/12240/comments | 16 | 2023-10-24T23:46:34Z | 2024-03-14T00:47:50Z | https://github.com/langchain-ai/langchain/issues/12240 | 1,960,264,037 | 12,240 |
[
"hwchase17",
"langchain"
]
| ### System Info
n/a
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://github.com/langchain-ai/langchain/blob/v0.0.322/libs/langchain/langchain/output_parsers/structured.py#L86
This line doesn't append commas in the expected JSON schema.
```none
# What it does
{
"foo": List[string] // a list of strings
"bar": string // a string
}
# What it should do I think
{
"foo": List[string], // a list of strings
"bar": string // a string
}
```
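For reference, a minimal reproduction sketch (field names are made up; the `type` argument may not exist on older releases):
```python
from langchain.output_parsers import ResponseSchema, StructuredOutputParser

schemas = [
    ResponseSchema(name="foo", description="a list of strings", type="List[string]"),
    ResponseSchema(name="bar", description="a string"),
]
parser = StructuredOutputParser.from_response_schemas(schemas)
print(parser.get_format_instructions())  # note the missing comma after the "foo" line
```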
### Expected behavior
I think the format instructions suggested to the LLM should be as close to valid JSON as possible (even though they contain JSON comments). | `StructuredOutputParser.get_format_instructions` missing commas | https://api.github.com/repos/langchain-ai/langchain/issues/12239/comments | 3 | 2023-10-24T23:31:54Z | 2024-05-07T16:06:18Z | https://github.com/langchain-ai/langchain/issues/12239 | 1,960,253,141 | 12,239 |
[
"hwchase17",
"langchain"
]
| ### System Info
https://github.com/langchain-ai/langchain/tree/d2cb95c39d5569019ab3c6aa368aa937d8dcc465
(just above `v0.0.322`)
MacBook Pro, M1 chip, macOS Ventura 13.5.2
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
With Python 3.10.11:
```bash
python -m venv venv
source venv/bin/activate
pip install poetry
poetry install --with test
make test
```
This gets:
```
poetry run pytest --disable-socket --allow-unix-socket tests/unit_tests/
Traceback (most recent call last):
File "/Users/james.braza/code/langchain/venv/bin/poetry", line 5, in <module>
from poetry.console.application import main
File "/Users/james.braza/code/langchain/venv/lib/python3.10/site-packages/poetry/console/application.py", line 11, in <module>
from cleo.application import Application as BaseApplication
File "/Users/james.braza/code/langchain/venv/lib/python3.10/site-packages/cleo/application.py", line 12, in <module>
from cleo.commands.completions_command import CompletionsCommand
File "/Users/james.braza/code/langchain/venv/lib/python3.10/site-packages/cleo/commands/completions_command.py", line 10, in <module>
from cleo import helpers
File "/Users/james.braza/code/langchain/venv/lib/python3.10/site-packages/cleo/helpers.py", line 5, in <module>
from cleo.io.inputs.argument import Argument
File "/Users/james.braza/code/langchain/venv/lib/python3.10/site-packages/cleo/io/inputs/argument.py", line 5, in <module>
from cleo.exceptions import CleoLogicError
File "/Users/james.braza/code/langchain/venv/lib/python3.10/site-packages/cleo/exceptions/__init__.py", line 3, in <module>
from cleo._utils import find_similar_names
File "/Users/james.braza/code/langchain/venv/lib/python3.10/site-packages/cleo/_utils.py", line 8, in <module>
from rapidfuzz.distance import Levenshtein
ModuleNotFoundError: No module named 'rapidfuzz'
```
### Expected behavior
I expected installation to install everything necessary to run tests | ModuleNotFoundError: No module named 'rapidfuzz' | https://api.github.com/repos/langchain-ai/langchain/issues/12237/comments | 12 | 2023-10-24T23:05:16Z | 2024-07-18T16:07:14Z | https://github.com/langchain-ai/langchain/issues/12237 | 1,960,229,561 | 12,237 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi, I was wondering whether there is a mechanism to use ConversationSummaryMemory in a chain only as context for the conversation, so that after the chain completes I can retain the old memory without the new exchange being added to it.
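A minimal sketch of one way to do this, assuming `llm` is configured and `prompt` has `{history}` and `{question}` variables: read the existing summary yourself and pass it in as a plain input, so the chain never calls save_context() on the memory.
```python
from langchain.chains import LLMChain
from langchain.memory import ConversationSummaryMemory

memory = ConversationSummaryMemory(llm=llm, memory_key="history")
context = memory.load_memory_variables({})["history"]

chain = LLMChain(llm=llm, prompt=prompt)            # note: no memory attached
answer = chain.run(history=context, question="...")
# memory stays unchanged unless you call memory.save_context(...) yourself
```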
### Suggestion:
_No response_ | Issue: can I only use ConversationSummaryMemory in a chain as context without renew it | https://api.github.com/repos/langchain-ai/langchain/issues/12227/comments | 2 | 2023-10-24T20:43:18Z | 2024-02-08T16:14:36Z | https://github.com/langchain-ai/langchain/issues/12227 | 1,960,043,531 | 12,227 |
[
"hwchase17",
"langchain"
]
| ### System Info
aws
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.prompts.chat import ChatPromptTemplate
updated_prompt = ChatPromptTemplate.from_messages(
[
("system",
"""
You are identifying information from the 'customer' table in the SQL Database. You should query the schema of the 'customer' table only once.
Here's the list of relevant column names and their column description from schema 'customer' and their descriptions:
- column name: "job"
column description: "Job Number or Job Identifier.
- column name: "job_type"
- column description: "This column provides information about the status of a job. you can check this column for any of the following job status values:
'Progress'
'Complete'
Use the following context to create the SQL query. Context:
1.Comprehend the column names and their respective descriptions provided for the purpose of selecting columns when crafting an SQL query.
2.If the user inquires about the type of job
select only the columns mapped to job_status and respond to the user:
job_type= 'Progress, then fetch 'name' and 'start time'
job_status = 'Complete' , then fetch 'name', 'completed', and 'date'
"""
),
("user", "{question}\n ai: "),
]
)
os.environ["OPENAI_CHAT_MODEL"] = 'gpt-3.5-turbo-16k'
llm = ChatOpenAI(model=os.getenv("OPENAI_CHAT_MODEL"), temperature=0)
def run_sql_query_prompt(query, db):
greeting_response = detect_greeting_or_gratitude(query)
if greeting_response:
return greeting_response
try:
llm = ChatOpenAI(model=os.getenv("OPENAI_CHAT_MODEL"), temperature=0)
sql_toolkit = SQLDatabaseToolkit(db=db, llm=llm)
sql_toolkit.get_tools()
sqldb_agent = create_sql_agent(
llm=llm,
toolkit=sql_toolkit,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
#verbose=True,
handle_parsing_errors=True,
use_query_checker=True,
)
formatted_prompt = updated_prompt.format(question=query)
result = sqldb_agent.run(formatted_prompt)
return result
except Exception as e:
        return f"Error executing SQL query: {str(e)}"
```
### Expected behavior
It should first fetch the job_type to determine whether it is 'Progress' or 'Complete', and then:
job_type = 'Progress', then fetch the 'name' and 'start time' columns
job_status = 'Complete' , then fetch 'name', 'completed', and 'date' columns | I encountered an issue with SQLDatabaseToolkit where I couldn't configure it to execute multiple SQL queries based on the previously retrieved value from a column in the initial query | https://api.github.com/repos/langchain-ai/langchain/issues/12222/comments | 2 | 2023-10-24T19:13:22Z | 2024-02-06T16:14:46Z | https://github.com/langchain-ai/langchain/issues/12222 | 1,959,912,928 | 12,222 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain HEAD
### Who can help?
@hwchase17 @ag
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
OpenAI uses a standard `OPENAI_API_KEY_PATH` environment variable that can be used to provide a key located in a file. This is useful for instance for generating short-lived, automatically renewed keys. Currently, Langchain complains that an API key must be provided even when this environment variable is set.
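A sketch of a possible workaround in the meantime (it reads the file once, so it loses the automatic-renewal benefit of short-lived keys):
```python
import os

# load the key from the file that the openai package would read via
# OPENAI_API_KEY_PATH, then hand it to LangChain explicitly
with open(os.environ["OPENAI_API_KEY_PATH"]) as f:
    os.environ["OPENAI_API_KEY"] = f.read().strip()
```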
### Expected behavior
LangChain should use the value of OPENAI_API_KEY_PATH with a higher priority than OPENAI_API_KEY when it is set. | OPENAI_API_KEY_PATH is not supported | https://api.github.com/repos/langchain-ai/langchain/issues/12218/comments | 2 | 2023-10-24T17:34:43Z | 2024-02-08T16:14:40Z | https://github.com/langchain-ai/langchain/issues/12218 | 1,959,747,839 | 12,218 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi!
I have been trying to create a router chain which will route to StuffDocumentsChains.
It seems that default output_key for BaseCombineDocumentsChain is output_text instead of 'text' like in typical chains.
```py
class BaseCombineDocumentsChain(Chain, ABC):
"""Base interface for chains combining documents.
Subclasses of this chain deal with combining documents in a variety of
ways. This base class exists to add some uniformity in the interface these types
of chains should expose. Namely, they expect an input key related to the documents
to use (default `input_documents`), and then also expose a method to calculate
the length of a prompt from documents (useful for outside callers to use to
determine whether it's safe to pass a list of documents into this chain or whether
that will longer than the context length).
"""
input_key: str = "input_documents" #: :meta private:
output_key: str = "output_text" #: :meta private:
```
### Question: Is it done on purpose, or is it just an inconsistency?
---
On the other hand
MultiPromptChain has hardcoded output_keys as 'text'.
```py
class MultiPromptChain(MultiRouteChain):
"""A multi-route chain that uses an LLM router chain to choose amongst prompts."""
@property
def output_keys(self) -> List[str]:
return ["text"]
```
So it is always necessary to change the output_key in any CombineDocuments-like chain.
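A minimal sketch of the current workaround (`llm_chain` is assumed to be an existing LLMChain): rename the destination chain's output key so it matches the hardcoded "text".
```python
from langchain.chains.combine_documents.stuff import StuffDocumentsChain

stuff_chain = StuffDocumentsChain(
    llm_chain=llm_chain,
    document_variable_name="context",
    output_key="text",  # align with MultiPromptChain's hardcoded output key
)
```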
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a router chain that routes to a StuffDocumentsChain, following the documentation.
### Expected behavior
BaseCombineDocumentsChain should have default output keys consistent with other chains,
or
MultiPromptChain/MultiRouteChain should not have hardcoded output keys.
| Router + BaseCombineDocumentsChain output_key | https://api.github.com/repos/langchain-ai/langchain/issues/12206/comments | 2 | 2023-10-24T14:38:00Z | 2024-02-06T16:14:56Z | https://github.com/langchain-ai/langchain/issues/12206 | 1,959,412,927 | 12,206 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Is it possible to support scenarios where `RetryWithErrorOutputParser` is used while the prompt is injected by a chain or a summary memory, so that the prompt is not formatted directly by the caller?
### Motivation
I am trying to support complex prompts, multi-chain, and combined memory scenarios, which regularly cause errors when the LLM generates a response in a different structure than the one established in the format instructions.
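For context, a minimal sketch of how the parser is used today (`base_parser`, `llm`, `bad_completion`, and `prompt_value` are placeholders); the retry call needs the original prompt value, which is exactly what is not available when a chain or a summary memory renders the prompt internally:
```python
from langchain.output_parsers import RetryWithErrorOutputParser

retry_parser = RetryWithErrorOutputParser.from_llm(llm=llm, parser=base_parser)
fixed = retry_parser.parse_with_prompt(bad_completion, prompt_value)
```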
### Your contribution
I could make the PR with some basic implementation guidelines. | RetryWithErrorOutputParser for chains and summary memory | https://api.github.com/repos/langchain-ai/langchain/issues/12205/comments | 2 | 2023-10-24T14:24:02Z | 2024-02-08T16:14:46Z | https://github.com/langchain-ai/langchain/issues/12205 | 1,959,382,324 | 12,205 |
[
"hwchase17",
"langchain"
]
| ### System Info
Here is a native python llama cpp example
```
"""
Ask to generate 4 questions related to a given content
"""
from llama_cpp import Llama
mpath="/home/mac/llama-2-7b-32k-instruct.Q5_K_S.gguf"
# define n_ctx manually to permit larger contexts
LLM = Llama(model_path=mpath, n_ctx=512,verbose=False)
# create a text prompt
context = """
Nous sommes dans les Yvelines
Il fait beau
Nous sommes a quelques km d'un joli lac
le boulanger est sympatique
Il y a également une boucherie
"""
prompt = """[INST]\nYou are a Teacher/ Professor. Your task is to setup
4 questions for an upcoming
quiz/examination. The questions should be diverse in nature
across the document. Restrict the questions to the
context provided.
Context is provided below :
%s
[/INST]\n\n
""" % context
print(prompt)
# set max_tokens to 0 to remove the response size limit
output = LLM(prompt, max_tokens=512,stop=["INST"])
# display the response
print(output["choices"][0]["text"])
print("---------------")
print(output)
```
and here is the "langchain version"
```
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
prompt_template ="""[INST]\nYou are a Teacher/ Professor. Your task is to setup
4 questions for an upcoming
quiz/examination. The questions should be diverse in nature
across the document. Restrict the questions to the
context provided.
Context is provided below :
{context}
[/INST]\n\n
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context"])
path_to_ggml_model = "/home/mac/llama-2-7b-32k-instruct.Q5_K_S.gguf"
context = """
Nous sommes dans les Yvelines
Il fait beau
Nous sommes a quelques km d'un joli lac
le boulanger est sympatique
Il y a également une boucherie
"""
llm = LlamaCpp(model_path=path_to_ggml_model,temperature=0.8,verbose=True,echo=True,n_ctx=512)
chain = LLMChain(llm=llm,prompt=PROMPT)
answer = chain.run(context=context,stop="[/INST]",max_tokens=512)
print(answer)
```
Unfortunately, the LangChain version does not return anything.
I suspect mismatched or default-overridden parameters are being sent to Llama.
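A sketch of what may fix it, assuming the problem is that generation parameters passed to `chain.run()` are not forwarded the same way as in the native example (`max_tokens` defaults to 256 on `LlamaCpp`, and the stop sequences can be set as a list on the model itself):
```python
from langchain.llms import LlamaCpp
from langchain.chains import LLMChain

llm = LlamaCpp(
    model_path=path_to_ggml_model,
    temperature=0.8,
    max_tokens=512,
    n_ctx=512,
    stop=["[INST]"],
    verbose=True,
)
chain = LLMChain(llm=llm, prompt=PROMPT)
answer = chain.run(context=context)
```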
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
prompt_template ="""[INST]\nYou are a Teacher/ Professor. Your task is to setup
4 questions for an upcoming
quiz/examination. The questions should be diverse in nature
across the document. Restrict the questions to the
context provided.
Context is provided below :
{context}
[/INST]\n\n
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context"])
path_to_ggml_model = "/home/mac/llama-2-7b-32k-instruct.Q5_K_S.gguf"
context = """
Nous sommes dans les Yvelines
Il fait beau
Nous sommes a quelques km d'un joli lac
le boulanger est sympatique
Il y a également une boucherie
"""
# echo : echo prompt
llm = LlamaCpp(model_path=path_to_ggml_model,temperature=0.8,verbose=True,echo=True,n_ctx=512)
chain = LLMChain(llm=llm,prompt=PROMPT)
answer = chain.run(context=context,stop="[/INST]",max_tokens=512)
print(answer)
```
### Expected behavior
It should answer with a quiz similar to this:
Sure, here are four questions that can be used for an upcoming quiz or examination related to the context provided:
1. What is the name of the lake located nearby?
2. Describe the type of bread sold at the boulanger?
3. How would you describe the personality of the baker?
4. What kind of meat do they sell at the boucherie?
| Unable to use LlamCpp from langchain | https://api.github.com/repos/langchain-ai/langchain/issues/12203/comments | 3 | 2023-10-24T14:21:08Z | 2024-02-09T16:12:53Z | https://github.com/langchain-ai/langchain/issues/12203 | 1,959,375,762 | 12,203 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The `publish_date` field returned in the metadata seems unreliable and is very often empty for many news sites. The publish date can easily be parsed directly from the RSS feed from the `pubDate` tag which, although optional, is pretty standard and is much more reliable than trying to pull it from the article itself which is currently being done using the `NewsURLLoader`.
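A rough sketch of reading the entry's own publish date from the feed (placeholder feed URL; `published` is feedparser's parsed form of the RSS `pubDate` tag):
```python
import feedparser
from langchain.document_loaders import NewsURLLoader

feed = feedparser.parse("https://example.com/rss")
docs = []
for entry in feed.entries:
    for doc in NewsURLLoader(urls=[entry.link]).load():
        doc.metadata["publish_date"] = entry.get("published")  # prefer the feed's pubDate
        docs.append(doc)
```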
### Suggestion:
We can override the `publish_date` coming from the `NewsURLLoader` metadata with the `published` field from the feed entry in the `RSSFeedLoader` | Issue: RSSFeedLoader unreliable publish date | https://api.github.com/repos/langchain-ai/langchain/issues/12202/comments | 2 | 2023-10-24T14:16:40Z | 2024-02-06T16:15:11Z | https://github.com/langchain-ai/langchain/issues/12202 | 1,959,365,429 | 12,202 |
[
"hwchase17",
"langchain"
]
| @dosu-bot
I have 3 issues I need your help with and to fix the code:
1) My code is not remembering previous questions despite using ConversationBufferMemory
2) Not Returning Source Documents
3) When I ask it a question the first time, it writes the answer only but if I ask it any other time afterwards, it always rewrites the question then the answer. I just want it to always write the answer only.
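A sketch of two changes that usually address points 2 and 3 with the code shown below, assuming the rest of the setup stays the same: ask the chain to return its sources explicitly, and give the question-condensing step its own non-streaming LLM so the rewritten question is not printed by the streaming callback.
```python
condense_llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")

conversation = ConversationalRetrievalChain.from_llm(
    llm=llm,
    condense_question_llm=condense_llm,
    retriever=new_knowledge_base.as_retriever(),
    memory=memory,
    return_source_documents=True,
    combine_docs_chain_kwargs={"prompt": PROMPT},
)
response = conversation({"question": customer_input})
print(response["answer"])
print(response["source_documents"])
```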
Below is my code:
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts.prompt import PromptTemplate
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Qdrant
from langchain.callbacks import StreamingStdOutCallbackHandler
from langchain.embeddings import OpenAIEmbeddings
from langchain.callbacks import get_openai_callback
from langchain.vectorstores import Chroma
from langchain.vectorstores import DeepLake
from PyPDF2 import PdfReader
import PyPDF2
from langchain.document_loaders import TextLoader, DirectoryLoader, UnstructuredWordDocumentLoader
from langchain.document_loaders import PyPDFDirectoryLoader, PyPDFLoader
from dotenv import load_dotenv
import time
# Change DirectoryLoader since it has its own chunk size and overlap
load_dotenv()
directory_path = "C:\\Users\\Asus\\Documents\\Vendolista\\MasterFormat"
pdf_loader = DirectoryLoader(directory_path,
glob="**/*.pdf",
show_progress=True,
use_multithreading=True,
silent_errors=True,
loader_cls = PyPDFLoader)
documents = pdf_loader.load()
print(str(len(documents))+ " documents loaded")
def print_letter_by_letter(text):
for char in text:
print(char, end='', flush=True)
time.sleep(0.02)
llm = ChatOpenAI(temperature = 0, model_name='gpt-3.5-turbo', callbacks=[StreamingStdOutCallbackHandler()], streaming = True)
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=800,
chunk_overlap=80,
)
chunks = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
persist_directory = "C:\\Users\\Asus\\OneDrive\\Documents\\Vendolista"
knowledge_base = Chroma.from_documents(chunks, embeddings, persist_directory = persist_directory)
#save to disk
knowledge_base.persist()
#To delete the DB we created at first so that we can be sure that we will load from disk as fresh db
knowledge_base = None
new_knowledge_base = Chroma(persist_directory = persist_directory, embedding_function = embeddings)
prompt_template = """
You are a knowledgeable and ethical assistant with expertise in the construction field, including its
guidelines and operational procedures. You are always available to provide assistance and uphold the highest
ethical and moral standards in your work.
You will always and always and always only follow these set of rules and nothing else no matter what:
1) Answer the Question that is answering only based on the documents.
2) If the question is unrelated then say "sorry this question is completely not related.
If you think it is, email the staff and they will get back to you: [email protected]."
3) Do not ever answer with "I don't know" to any question no matter what kind of a question it is.
4) Everyime you answer a question, write on a new line "is there anything else you would like me to help you with?"
5) Remember to always take a deep breath and take this step by step.
6) Do not ever and under no circumstance do you rewrite the question asked by the customer when you are answering them.
7) If the question starts with the "Yazan," then you can ignore the steps above.
Text: {context}
Question: {question}
Answer :
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
memory = ConversationBufferMemory(llm=llm, memory_key='chat_history', input_key='question', output_key='answer', return_messages=True)
conversation = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=new_knowledge_base.as_retriever(),
memory=memory,
chain_type="stuff",
verbose=False,
combine_docs_chain_kwargs={"prompt":PROMPT}
)
def main():
chat_history = []
while True:
customer_input = input("Ask me anything about the files (type 'exit' to quit): ")
if customer_input.lower() in ["exit"] and len(customer_input) == 4:
end_chat = "Thank you for visiting us! Have a nice day"
print_letter_by_letter(end_chat)
break
#if the customer_input is not equal to an empty string meaning they have input
if customer_input != "":
with get_openai_callback() as cb:
response = conversation({"question": customer_input}, return_only_outputs = True)
# print(response['answer'])
print()
print(cb)
print()
if __name__ == "__main__":
    main()
```
| LangChain memory, not returning source doc, repeating question | https://api.github.com/repos/langchain-ai/langchain/issues/12199/comments | 6 | 2023-10-24T11:06:32Z | 2024-05-15T16:06:17Z | https://github.com/langchain-ai/langchain/issues/12199 | 1,959,011,802 | 12,199 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.320
Ubuntu 20.04
Python 3.11.5
### Who can help?
@SuperJokerayo @baskaryan
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When I try to parse a PDF and extract an image I get the following error:
```
File "<.....>/lib/python3.11/site-packages/langchain/document_loaders/parsers/pdf.py", line 106, in _extract_images_from_page
np.frombuffer(xObject[obj].get_data(), dtype=np.uint8).reshape(
ValueError: cannot reshape array of size 45532 into shape (4644,3282,newaxis)
```
The code I have to run the parser is:
```
loader = PyPDFLoader(doc["file"], extract_images=True)
pages = loader.load_and_split()
```
I can't give you a copy of the document since it's confidential. It's a scanned 11-page document.
### Expected behavior
I expected `load_and_split()` to return a list of 11 pages with the text parsed from the image in each page.
If I change to `extract_images=False` then it returns an empty list of 0 pages as expected. | Error reshaping array in _extract_images_from_page() in pdf.py | https://api.github.com/repos/langchain-ai/langchain/issues/12196/comments | 5 | 2023-10-24T10:05:32Z | 2024-04-27T16:34:18Z | https://github.com/langchain-ai/langchain/issues/12196 | 1,958,920,258 | 12,196 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I have only seen this "[Zero-shot ReAct](https://python.langchain.com/docs/modules/agents/agent_types/react)"
### Motivation
To be able to provide examples to the agent, so it understands how these tools are used and in what order.
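A minimal sketch of one way to approximate this today, by folding worked examples into a ReAct agent's prompt prefix (`tools` and `llm` are assumed to exist; the example text is made up):
```python
from langchain.agents import AgentType, initialize_agent

FEW_SHOT_PREFIX = """Answer the question using the tools below.

Example:
Question: What is the population of the capital of France?
Thought: I should look up the capital first, then its population.
...
"""

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    agent_kwargs={"prefix": FEW_SHOT_PREFIX},
)
```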
### Your contribution
nothing yet | Is there a few shots agent exists? | https://api.github.com/repos/langchain-ai/langchain/issues/12194/comments | 2 | 2023-10-24T09:32:52Z | 2024-02-06T16:15:21Z | https://github.com/langchain-ai/langchain/issues/12194 | 1,958,865,532 | 12,194 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Let's say you have a PDF file, an Excel file, and a PowerPoint file all indexed into a Pinecone database. They are similar in that a query can have relevant information in all sources, but now we want the user to indicate which source should be considered first when they ask a question. Is there a way to do that?
For the conversational retrieval chain, you have this retriever there
```python
retriever = vectorstore.as_retriever(search_kwargs={
    "filter": {
        "file_types": {"$in": file_types},
        "genre": {"$in": genre},
        "keys": {"$in": keys},
    },
    "k": 3
}),
```
How do I make sure I can prioritize sources so that the chosen source comes first and, if nothing relevant is there, we check the others?
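A minimal sketch of one way to approximate this (assuming a Pinecone-backed `vectorstore` with the metadata fields above): query the preferred source first and fall back to the other sources only if it returns nothing.
```python
def retrieve_with_priority(query, preferred_type, other_types, k=3):
    for types in ([preferred_type], other_types):
        docs = vectorstore.similarity_search(
            query, k=k, filter={"file_types": {"$in": types}}
        )
        if docs:
            return docs
    return []
```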
### Suggestion:
_No response_ | Issue: Prioritze sources in the vector store to consider certain sources first | https://api.github.com/repos/langchain-ai/langchain/issues/12192/comments | 2 | 2023-10-24T09:09:10Z | 2024-02-06T16:15:26Z | https://github.com/langchain-ai/langchain/issues/12192 | 1,958,826,438 | 12,192 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I want to add a config chain, prior to the ConversationalRetrievalChain, that processes the query and sets the retriever's search kwargs (k, fetch_k, lambda_mult) depending on the question. How can I do that and pass the parameters from the config chain's output?
### Motivation
The motivation is to have a dynamic ConversationalRetrievalChain that retrieves the documents the right way for each question.
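A minimal sketch of one way to wire this up (all names are placeholders: `config_chain` returns a dict of search kwargs, `retriever` is a vector-store retriever, and `qa_chain` is the ConversationalRetrievalChain):
```python
def answer(question: str):
    cfg = config_chain.invoke({"question": question})  # e.g. {"k": 6, "fetch_k": 30, "lambda_mult": 0.5}
    retriever.search_kwargs.update(cfg)                 # applied before retrieval
    return qa_chain({"question": question})
```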
### Your contribution
I have none for now, I don't know where to start | Add a config chain before the ConversationalRetrievalChain : search Kwargs | https://api.github.com/repos/langchain-ai/langchain/issues/12189/comments | 5 | 2023-10-24T08:36:14Z | 2024-03-27T13:44:00Z | https://github.com/langchain-ai/langchain/issues/12189 | 1,958,773,387 | 12,189 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.11.4
LangChain 0.0.321
Platform info (WSL2):
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.6 LTS"
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is the (partial) code snippet that I run:
```python
from langchain.chat_models import ChatOpenAI
prompt = hub.pull("hwchase17/react-chat-json")
llm = ChatOpenAI(max_retries=10, temperature=0,
max_tokens=400, **model_kwargs)
llm_with_stop = llm.bind(stop=["\nObservation"])
from langchain.agents.output_parsers import JSONAgentOutputParser
from langchain.agents.format_scratchpad import format_log_to_messages
# Connecting to Opensearch
docsearch = OpenSearchVectorSearch(index_name=index_uk, embedding_function=embeddings, opensearch_url=opensearch_url, http_auth=('admin', auth))
# We need some extra steering, or the chat model forgets how to respond sometimes
TEMPLATE_TOOL_RESPONSE = """TOOL RESPONSE:
---------------------
{observation}
USER'S INPUT
--------------------
Okay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else - even if you just want to respond to the user. Do NOT respond with anything except a JSON snippet no matter what!"""
from langchain.agents import AgentExecutor
from langchain.agents.agent_toolkits import create_retriever_tool
tool = create_retriever_tool(
docsearch.as_retriever(),
"search_hr_documents",
"Searches and returns documents regarding HR-related questions."
)
tool
print("tool", tool)
from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
agent = OpenAIFunctionsAgent(llm=llm, tools=[tool], prompt=prompt)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent_executor = AgentExecutor(agent=agent, tools=[tool,tool], verbose=True, memory=memory, return_intermediate_steps=True)
print("full_question ===>", full_question)
result = agent_executor({"input": full_question})
```
When I run it, it logs the below error:
```
tool name='search_hr_documents' description='Searches and returns documents regarding HR-related questions.' func=<bound method BaseRetriever.get_relevant_documents of VectorStoreRetriever(tags=['OpenSearchVectorSearch', 'OpenAIEmbeddings'], vectorstore=<langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch object at 0x00000247DC4ABA50>)> coroutine=<bound method BaseRetriever.aget_relevant_documents of VectorStoreRetriever(tags=['OpenSearchVectorSearch', 'OpenAIEmbeddings'], vectorstore=<langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch object at 0x00000247DC4ABA50>)>
full_question ===> Am I allowed to get a company car ?
c:\project\transformed_notebook.ipynb Cell 8 line 1
190 agent_executor = AgentExecutor(agent=agent, tools=[tool,tool], verbose=True, memory=memory, return_intermediate_steps=True)
192 print("full_question ===>", full_question)
--> 194 result = agent_executor({"input": full_question})
196 # print("result ===>", result)
197
198 # chain_type_kwargs = {"prompt": prompt,
(...)
211 # out = chain(full_question)
212 # Cleaning the answer
File c:\Users\raaz\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:310, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
308 except BaseException as e:
309 run_manager.on_chain_error(e)
--> 310 raise e
311 run_manager.on_chain_end(outputs)
...
99 }
100 full_inputs = dict(**selected_inputs, agent_scratchpad=agent_scratchpad)
101 prompt = self.prompt.format_prompt(**full_inputs)
KeyError: 'tool_names'
```
How come this error is generated even though the `tools` argument is provided and is well-defined ?
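A sketch of the likely mismatch, assuming the KeyError comes from pairing the `hwchase17/react-chat-json` prompt (which expects `{tools}` and `{tool_names}` variables) with `OpenAIFunctionsAgent`; the functions agent can build its own prompt instead:
```python
from langchain.agents import AgentExecutor
from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent

# reuse the llm, tool and memory defined above
agent = OpenAIFunctionsAgent.from_llm_and_tools(llm=llm, tools=[tool])
agent_executor = AgentExecutor(agent=agent, tools=[tool], memory=memory, verbose=True)
result = agent_executor({"input": full_question})
```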
### Expected behavior
The error should not be raised since tools are defined the correct way (as per the official LangChain docs) | KeyError: 'tool_names' | https://api.github.com/repos/langchain-ai/langchain/issues/12186/comments | 6 | 2023-10-24T06:44:44Z | 2024-02-12T16:10:25Z | https://github.com/langchain-ai/langchain/issues/12186 | 1,958,617,422 | 12,186 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am trying to add a retriever tool to the SQL Database Toolkit and run the SQL Agent with the Vertex AI Text Bison LLM; however, I am facing an error.
I am following the documentation below for the implementation:
[https://python.langchain.com/docs/use_cases/qa_structured/sql?ref=blog.langchain.dev#case-3-sql-agents](url)
My Python Code
```python
llm = VertexAI(
model_name='text-bison@001',
max_output_tokens=256,
temperature=0,
top_p=0.8,
top_k=40,
verbose=True,
)
few_shots = {'a': sql a query;',
"b '.": "sql b query",
"c.": "sql c query;",
"d": "sql d query; "
}
from langchain.embeddings import VertexAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.schema import Document
embeddings = VertexAIEmbeddings()
few_shot_docs = [Document(page_content=question, metadata={'sql_query': few_shots[question]}) for question in few_shots.keys()]
vector_db = FAISS.from_documents(few_shot_docs, embeddings)
#retriever = vector_db.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": .5})
retriever = vector_db.as_retriever()
from langchain.agents import initialize_agent, Tool
from langchain.agents import load_tools
from langchain.agents import AgentType
from langchain.tools import BaseTool
from langchain.agents.agent_toolkits import create_retriever_tool
tool_description = """
This tool will help you understand similar examples to adapt them to the user question.
Input to this tool should be the user question.
"""
retriever_tool = create_retriever_tool(
retriever,
name='sql_get_similar_examples',
description=tool_description
)
custom_tool_list = [retriever_tool]
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.agents import AgentExecutor
from langchain.agents.agent_types import AgentType
from sqlalchemy import *
from sqlalchemy.engine import create_engine
from sqlalchemy.schema import *
import pandas as pd
table_uri = f"bigquery://{project}/{dataset_id}"
engine = create_engine(f"bigquery://{project}/{dataset_id}")
db = SQLDatabase(engine=engine,metadata=MetaData(bind=engine),include_tables=table_name,view_support=True)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
custom_suffix = """Begin!
Question: {input}
Thought: I should first get the similar examples I know.
If the examples are enough to construct the query, I can build it.
Otherwise, I can then look at the tables in the database to see what I can query.
Then I should query the schema of the most relevant tables.
{agent_scratchpad}"""
agent = create_sql_agent(llm=llm,
toolkit=toolkit,
verbose=True,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
extra_tools=custom_tool_list,
suffix=custom_suffix
)
langchain.debug = True
```
```
Thought:The table does not have the column 'alloted_terminal'.
Action: sql_get_similar_examples
Action Input: {Question}
Observation: {Document(fewshots}
Thought:I don't have enough information to construct the query.
Action: sql_get_similar_examples
Action Input: {Question}
Observation: {Document(fewshots}
Thought:I don't have enough information to construct the query.
Action: sql_get_similar_examples
Action Input: {Question}
Observation: {Document(fewshots}
Thought:I don't have enough information to construct the query.
Action: sql_get_similar_examples
Action Input: {Question}
Observation: {Document(fewshots}
Thought:I don't have enough information to construct the query.
Action: sql_get_similar_examples
Action Input: {Question}
Observation: {Document(fewshots}
Thought:I don't have enough information to construct the query.
Action: sql_get_similar_examples
Action Input: {Question}
Observation: {Document(fewshots}
Thought:I don't have enough information to construct the query.
Action: sql_get_similar_examples
Action Input: {Question}
Observation: {Document(fewshots}
Thought:I don't have enough information to construct the query.
Action: sql_get_similar_examples
Action Input: {Question}
Observation: {Document(fewshots}
Thought:I don't have enough information to construct the query.
Action: sql_get_similar_examples
Action Input: {Question}
Observation: {Document(fewshots}
Thought:I don't have enough information to construct the query.
Action: sql_get_similar_examples
Action Input: {Question}
Observation: {Document(fewshots}
Thought:I don't have enough information to construct the query.
Action: sql_get_similar_examples
Action Input: {Question}
Observation: {Document(fewshots}
Thought:I don't have enough information to construct the query.
Action: sql_get_similar_examples
Action Input: {Question}
Observation: {Document(fewshots}
Thought:
> Finished chain.
Agent stopped due to iteration limit or time limit.
{
"output": "Agent stopped due to iteration limit or time limit."
}
```
Agent stopped due to iteration limit or time limit.
### Suggestion:
_No response_ | Retriever Tool for Vertex AI Model | https://api.github.com/repos/langchain-ai/langchain/issues/12185/comments | 2 | 2023-10-24T05:35:24Z | 2024-02-06T16:15:41Z | https://github.com/langchain-ai/langchain/issues/12185 | 1,958,553,638 | 12,185 |
[
"hwchase17",
"langchain"
]
| ### System Info
System:
langchain 0.0.321
Python 3.10
Hi,
I get an error related to a missing OpenAI token when trying to reuse the code to [dynamically select from multiple retrievers](https://python.langchain.com/docs/use_cases/question_answering/multi_retrieval_qa_router) at the MultiRetrievalQAChain initialization step,
```
multi_retriever_chain = MultiRetrievalQAChain.from_retrievers(
llm=llm, # Llama2 served by vLLM via VLLMOpenAI
retriever_infos=retrievers_info,
verbose=True)
```
I get an error because I'm not using OpenAI LLMs and the LangChain code (multi_retrieval_qa.py) hard wires ChatOpenAI() as the LLM for the _default_chain.
```
_default_chain = ConversationChain(
llm=ChatOpenAI(), prompt=prompt, input_key="query", output_key="result"
)
```
I think you need to assign the llm variable to the llm provided when initializing the class.
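A sketch of the proposed change inside `MultiRetrievalQAChain.from_retrievers` (not the current code): build the default chain with the caller's `llm` instead of a hardwired `ChatOpenAI()`.
```python
from langchain.chains import ConversationChain

_default_chain = ConversationChain(
    llm=llm, prompt=prompt, input_key="query", output_key="result"
)
```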
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Including the Python code and the PDF files that get loaded by the retrievers.
```
#%%
import torch
from langchain.llms import VLLMOpenAI
from langchain.document_loaders import PyPDFLoader
# Import for retrieval-augmented generation RAG
from langchain import hub
from langchain.chains import RetrievalQA, MultiRetrievalQAChain
from langchain.vectorstores import Chroma
from langchain.text_splitter import SentenceTransformersTokenTextSplitter
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
#%%
# URL for the vLLM service
INFERENCE_SRV_URL = "http://localhost:8000/v1"
def setup_chat_llm(vllm_url, max_tokens=512, temperature=0.2):
"""
Intializes the vLLM service object.
:param vllm_url: vLLM service URL
:param max_tokens: Max number of tokens to get generated by the LLM
:param temperature: Temperature of the generation process
:return: The vLLM service object
"""
chat = VLLMOpenAI(
openai_api_key="EMPTY",
openai_api_base=vllm_url,
model_name="meta-llama/Llama-2-7b-chat-hf",
temperature=temperature,
max_tokens=max_tokens,
)
return chat
#%%
# Initialize LLM service
llm = setup_chat_llm(vllm_url=INFERENCE_SRV_URL)
#%%
%%time
# Set up the embedding encoder (Sentence Transformers) and vector store
model_name = "all-mpnet-base-v2"
model_kwargs = {'device': 'cuda' if torch.cuda.is_available() else 'cpu'}
encode_kwargs = {'normalize_embeddings': False}
embeddings = SentenceTransformerEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
# Set up the document splitter
text_splitter = SentenceTransformersTokenTextSplitter(chunk_size=500, chunk_overlap=0)
# Load PDF documents
loader = PyPDFLoader(file_path="../data/AI_RMF_Playbook.pdf")
rmf_doc = loader.load()
rmf_splits = text_splitter.split_documents(rmf_doc)
rmf_retriever = Chroma.from_documents(documents=rmf_splits, embedding=embeddings)
loader = PyPDFLoader(file_path="../data/OWASP-Top-10-for-LLM-Applications-v101.pdf")
owasp_doc = loader.load()
owasp_splits = text_splitter.split_documents(owasp_doc)
owasp_retriever = Chroma.from_documents(documents=owasp_splits, embedding=embeddings)
loader = PyPDFLoader(file_path="../data/Threat Modeling LLM Applications - AI Village.pdf")
ai_village_doc = loader.load()
ai_village_splits = text_splitter.split_documents(ai_village_doc)
ai_village_retriever = Chroma.from_documents(documents=ai_village_splits, embedding=embeddings)
#%%
retrievers_info = [
{
"name": "NIST AI Risk Management Framework",
"description": "Guidelines for organizations and people to manage risks associated with the use of AI ",
"retriever": rmf_retriever.as_retriever()
},
{
"name": "OWASP Top 10 for LLM Applications",
"description": "Provides practical, actionable, and concise security guidance to navigate the complex and evolving terrain of LLM security",
"retriever": owasp_retriever.as_retriever()
},
{
"name": "Threat Modeling LLM Applications",
"description": "A high-level example from Gavin Klondike on how to build a threat model for LLM applications",
"retriever": ai_village_retriever.as_retriever()
}
]
#%%
# Import default LLama RAG prompt
prompt = hub.pull("rlm/rag-prompt-llama")
print(prompt.dict()['messages'][0]['prompt']['template'])
#%%
multi_retriever_chain = MultiRetrievalQAChain.from_retrievers(
llm=llm,
retriever_infos=retrievers_info,
#default_retriever=owasp_retriever.as_retriever(),
#default_prompt=prompt,
#chain_type_kwargs={"prompt": prompt},
verbose=True)
#%%
question = "What is prompt injection?"
result = multi_retriever
```
[AI_RMF_Playbook.pdf](https://github.com/langchain-ai/langchain/files/13110652/AI_RMF_Playbook.pdf)
[OWASP-Top-10-for-LLM-Applications-v101.pdf](https://github.com/langchain-ai/langchain/files/13110653/OWASP-Top-10-for-LLM-Applications-v101.pdf)
[Threat Modeling LLM Applications - AI Village.pdf](https://github.com/langchain-ai/langchain/files/13110654/Threat.Modeling.LLM.Applications.-.AI.Village.pdf)
### Expected behavior
I was expecting to get results equivalent to what is shown at LangChain's documentation [dynamically select from multiple retrievers](https://python.langchain.com/docs/use_cases/question_answering/multi_retrieval_qa_router) | multi_retrieval_qa.py hardwires ChatOpenAI() at the _default_chain | https://api.github.com/repos/langchain-ai/langchain/issues/12184/comments | 6 | 2023-10-24T04:17:28Z | 2023-10-26T09:09:59Z | https://github.com/langchain-ai/langchain/issues/12184 | 1,958,479,132 | 12,184 |
[
"hwchase17",
"langchain"
]
| ### System Info
python=3.10
langchain=0.0.320
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Bug
https://python.langchain.com/docs/modules/agents/how_to/custom-functions-with-openai-functions-agent
I am trying to reproduce this example, but I got the error below.
The error happens in the following function: https://api.python.langchain.com/en/latest/_modules/langchain/chat_models/openai.html#ChatOpenAI:~:text=%3Dchunk)-,def%20_generate(,-self%2C%0A%20%20%20%20%20%20%20%20messages
After I printed the "message_dicts" in the function I found that one item in the list has a None "content" value.
``` python
[{'role': 'system', 'content': 'You are a helpful AI assistant.'}, {'role': 'user', 'content': 'What is the current price of Microsoft stock? How it has performed over past 6 months?'}, {'role': 'assistant', 'content': None, 'function_call': {'name': 'get_current_stock_price', 'arguments': '{\n "ticker": "MSFT"\n}'}}, {'role': 'function', 'content': '{"price": 326.6700134277344, "currency": "USD"}', 'name': 'get_current_stock_price'}]
```
The **None** value will lead to an error comlainning that:
```
{
"error": {
"code": null,
"param": null,
"message": "'content' is a required property - 'messages.2'",
"type": "invalid_request_error"
}
}
```
# My findings:
If we use the function_call feature of OpenAI, it will always return a None value in the content. But it seems that the LangChain implementation does not handle this situation.
# My fix:
After I add one line of code in the _generate function to replace the `None` value with an empty string `''`:
```python
message_dicts = [{**item, 'content': ('' if item['content'] is None else item['content'])} for item in message_dicts] # this line is added
response = self.completion_with_retry(
messages=message_dicts, run_manager=run_manager, **params
)
print(response)
return self._create_chat_result(response)
```
Then the function works normally.
Is this a bug in using the langchain OPENAI_FUNCTIONS agent?
### Expected behavior
# My fix:
After I add one line of code in the _generate function to replace the `None` value with an empty string `''`:
```python
message_dicts = [{**item, 'content': ('' if item['content'] is None else item['content'])} for item in message_dicts] # this line is added
response = self.completion_with_retry(
messages=message_dicts, run_manager=run_manager, **params
)
print(response)
return self._create_chat_result(response)
```
Then the function works normally.
The output (with some printing) is like this:
```
[{'role': 'system', 'content': 'You are a helpful AI assistant.'}, {'role': 'user', 'content': 'What is the current price of Microsoft stock? How it has performed over past 6 months?'}]
{
"created": 1698074688,
"usage": {
"completion_tokens": 18,
"prompt_tokens": 199,
"total_tokens": 217
},
"model": "gpt-35-turbo",
"id": "chatcmpl-8Cr5M7Zi8wv40g9dEg4ZbkNV0KBSc",
"choices": [
{
"finish_reason": "function_call",
"index": 0,
"message": {
"role": "assistant",
"function_call": {
"name": "get_current_stock_price",
"arguments": "{\n \"ticker\": \"MSFT\"\n}"
},
"content": null
}
}
],
"object": "chat.completion"
}
Invoking: `get_current_stock_price` with `{'ticker': 'MSFT'}`
{'price': 328.9849853515625, 'currency': 'USD'}[{'role': 'system', 'content': 'You are a helpful AI assistant.'}, {'role': 'user', 'content': 'What is the current price of Microsoft stock? How it has performed over past 6 months?'}, {'role': 'assistant', 'content': None, 'function_call': {'name': 'get_current_stock_price', 'arguments': '{\n "ticker": "MSFT"\n}'}}, {'role': 'function', 'content': '{"price": 328.9849853515625, "currency": "USD"}', 'name': 'get_current_stock_price'}]
{
"created": 1698074692,
"usage": {
"completion_tokens": 24,
"prompt_tokens": 228,
"total_tokens": 252
},
"model": "gpt-35-turbo",
"id": "chatcmpl-8Cr5QoQfx5gkIS9k7mGn2fs7lMkPc",
"choices": [
{
"finish_reason": "function_call",
"index": 0,
"message": {
"role": "assistant",
"function_call": {
"name": "get_stock_performance",
"arguments": "{\n \"ticker\": \"MSFT\",\n \"days\": 180\n}"
},
"content": null
}
}
],
"object": "chat.completion"
}
Invoking: `get_stock_performance` with `{'ticker': 'MSFT', 'days': 180}`
{'percent_change': 7.626317098142607}[{'role': 'system', 'content': 'You are a helpful AI assistant.'}, {'role': 'user', 'content': 'What is the current price of Microsoft stock? How it has performed over past 6 months?'}, {'role': 'assistant', 'content': None, 'function_call': {'name': 'get_current_stock_price', 'arguments': '{\n "ticker": "MSFT"\n}'}}, {'role': 'function', 'content': '{"price": 328.9849853515625, "currency": "USD"}', 'name': 'get_current_stock_price'}, {'role': 'assistant', 'content': None, 'function_call': {'name': 'get_stock_performance', 'arguments': '{\n "ticker": "MSFT",\n "days": 180\n}'}}, {'role': 'function', 'content': '{"percent_change": 7.626317098142607}', 'name': 'get_stock_performance'}]
{
"created": 1698074694,
"usage": {
"completion_tokens": 36,
"prompt_tokens": 251,
"total_tokens": 287
},
"model": "gpt-35-turbo",
"id": "chatcmpl-8Cr5SRASyFLFbp80CJhx5hgdVXDLM",
"choices": [
{
"finish_reason": "stop",
"index": 0,
"message": {
"role": "assistant",
"content": "The current price of Microsoft stock is $328.98 USD. Over the past 6 months, the stock has performed with a 7.63% increase in price."
}
}
],
"object": "chat.completion"
}
The current price of Microsoft stock is $328.98 USD. Over the past 6 months, the stock has performed with a 7.63% increase in price.
```
Is this a bug in using the langchain OPENAI_FUNCTIONS agent?
| Bug when using OPENAI_FUNCTIONS agent | https://api.github.com/repos/langchain-ai/langchain/issues/12183/comments | 2 | 2023-10-24T03:10:22Z | 2024-04-01T16:05:25Z | https://github.com/langchain-ai/langchain/issues/12183 | 1,958,394,412 | 12,183 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python Version: Python 3.10.4
Langchain Version: 0.0.320
OS: Ubuntu 18.04
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When running the official cookbook code...
```python
from pydantic import BaseModel
from typing import Any, Optional
from unstructured.partition.pdf import partition_pdf
# Path to save images
path = "./Papers/LLaVA/"
# Get elements
raw_pdf_elements = partition_pdf(filename=path+"LLaVA.pdf",
# Using pdf format to find embedded image blocks
extract_images_in_pdf=True,
# Use layout model (YOLOX) to get bounding boxes (for tables) and find titles
# Titles are any sub-section of the document
infer_table_structure=True,
# Post processing to aggregate text once we have the title
chunking_strategy="by_title",
# Chunking params to aggregate text blocks
# Attempt to create a new chunk 3800 chars
# Attempt to keep chunks > 2000 chars
# Hard max on chunks
max_characters=4000,
new_after_n_chars=3800,
combine_text_under_n_chars=2000,
image_output_dir_path=path
)
# Create a dictionary to store counts of each type
category_counts = {}
for element in raw_pdf_elements:
category = str(type(element))
if category in category_counts:
category_counts[category] += 1
else:
category_counts[category] = 1
# Unique_categories will have unique elements
# TableChunk if Table > max chars set above
unique_categories = set(category_counts.keys())
category_counts
class Element(BaseModel):
type: str
text: Any
# Categorize by type
categorized_elements = []
for element in raw_pdf_elements:
if "unstructured.documents.elements.Table" in str(type(element)):
categorized_elements.append(Element(type="table", text=str(element)))
elif "unstructured.documents.elements.CompositeElement" in str(type(element)):
categorized_elements.append(Element(type="text", text=str(element)))
# Tables
table_elements = [e for e in categorized_elements if e.type == "table"]
print(len(table_elements))
# Text
text_elements = [e for e in categorized_elements if e.type == "text"]
print(len(text_elements))
from langchain.chat_models import ChatOllama
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
# Prompt
prompt_text="""You are an assistant tasked with summarizing tables and text. \
Give a concise summary of the table or text. Table or text chunk: {element} """
prompt = ChatPromptTemplate.from_template(prompt_text)
# Summary chain
model = ChatOllama(model="llama2:13b-chat")
summarize_chain = {"element": lambda x:x} | prompt | model | StrOutputParser()
# Apply to text
texts = [i.text for i in text_elements if i.text != ""]
text_summaries = summarize_chain.batch(texts, {"max_concurrency": 5})
```
The following error was returned:
```bash
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
File ~/anaconda3/envs/langchain/lib/python3.8/site-packages/requests/models.py:971, in Response.json(self, **kwargs)
970 try:
--> 971 return complexjson.loads(self.text, **kwargs)
972 except JSONDecodeError as e:
973 # Catch JSON-related errors and raise as requests.JSONDecodeError
974 # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
File ~/anaconda3/envs/langchain/lib/python3.8/json/__init__.py:357, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
354 if (cls is None and object_hook is None and
355 parse_int is None and parse_float is None and
356 parse_constant is None and object_pairs_hook is None and not kw):
--> 357 return _default_decoder.decode(s)
358 if cls is None:
File ~/anaconda3/envs/langchain/lib/python3.8/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
333 """Return the Python representation of ``s`` (a ``str`` instance
334 containing a JSON document).
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
File ~/anaconda3/envs/langchain/lib/python3.8/json/decoder.py:355, in JSONDecoder.raw_decode(self, s, idx)
354 except StopIteration as err:
--> 355 raise JSONDecodeError("Expecting value", s, err.value) from None
356 return obj, end
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
JSONDecodeError Traceback (most recent call last)
Cell In[6], line 3
1 # Apply to text
2 texts = [i.text for i in text_elements if i.text != ""]
----> 3 text_summaries = summarize_chain.batch(texts, {"max_concurrency": 5})
File ~/chatpdf-langchain/langchain/libs/langchain/langchain/schema/runnable/base.py:1287, in RunnableSequence.batch(self, inputs, config, return_exceptions, **kwargs)
1285 else:
1286 for i, step in enumerate(self.steps):
-> 1287 inputs = step.batch(
1288 inputs,
1289 [
1290 # each step a child run of the corresponding root run
1291 patch_config(
1292 config, callbacks=rm.get_child(f"seq:step:{i+1}")
1293 )
1294 for rm, config in zip(run_managers, configs)
1295 ],
1296 )
1298 # finish the root runs
1299 except BaseException as e:
File ~/chatpdf-langchain/langchain/libs/langchain/langchain/schema/runnable/base.py:323, in Runnable.batch(self, inputs, config, return_exceptions, **kwargs)
320 return cast(List[Output], [invoke(inputs[0], configs[0])])
322 with get_executor_for_config(configs[0]) as executor:
--> 323 return cast(List[Output], list(executor.map(invoke, inputs, configs)))
File ~/anaconda3/envs/langchain/lib/python3.8/concurrent/futures/_base.py:619, in Executor.map.<locals>.result_iterator()
616 while fs:
617 # Careful not to keep a reference to the popped future
618 if timeout is None:
--> 619 yield fs.pop().result()
620 else:
621 yield fs.pop().result(end_time - time.monotonic())
File ~/anaconda3/envs/langchain/lib/python3.8/concurrent/futures/_base.py:444, in Future.result(self, timeout)
442 raise CancelledError()
443 elif self._state == FINISHED:
--> 444 return self.__get_result()
445 else:
446 raise TimeoutError()
File ~/anaconda3/envs/langchain/lib/python3.8/concurrent/futures/_base.py:389, in Future.__get_result(self)
387 if self._exception:
388 try:
--> 389 raise self._exception
390 finally:
391 # Break a reference cycle with the exception in self._exception
392 self = None
File ~/anaconda3/envs/langchain/lib/python3.8/concurrent/futures/thread.py:57, in _WorkItem.run(self)
54 return
56 try:
---> 57 result = self.fn(*self.args, **self.kwargs)
58 except BaseException as exc:
59 self.future.set_exception(exc)
File ~/chatpdf-langchain/langchain/libs/langchain/langchain/schema/runnable/base.py:316, in Runnable.batch.<locals>.invoke(input, config)
314 return e
315 else:
--> 316 return self.invoke(input, config, **kwargs)
File ~/chatpdf-langchain/langchain/libs/langchain/langchain/chat_models/base.py:142, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
131 def invoke(
132 self,
133 input: LanguageModelInput,
(...)
137 **kwargs: Any,
138 ) -> BaseMessage:
139 config = config or {}
140 return cast(
141 ChatGeneration,
--> 142 self.generate_prompt(
143 [self._convert_input(input)],
144 stop=stop,
145 callbacks=config.get("callbacks"),
146 tags=config.get("tags"),
147 metadata=config.get("metadata"),
148 run_name=config.get("run_name"),
149 **kwargs,
150 ).generations[0][0],
151 ).message
File ~/chatpdf-langchain/langchain/libs/langchain/langchain/chat_models/base.py:459, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
451 def generate_prompt(
452 self,
453 prompts: List[PromptValue],
(...)
456 **kwargs: Any,
457 ) -> LLMResult:
458 prompt_messages = [p.to_messages() for p in prompts]
--> 459 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File ~/chatpdf-langchain/langchain/libs/langchain/langchain/chat_models/base.py:349, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
347 if run_managers:
348 run_managers[i].on_llm_error(e)
--> 349 raise e
350 flattened_outputs = [
351 LLMResult(generations=[res.generations], llm_output=res.llm_output)
352 for res in results
353 ]
354 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File ~/chatpdf-langchain/langchain/libs/langchain/langchain/chat_models/base.py:339, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
336 for i, m in enumerate(messages):
337 try:
338 results.append(
--> 339 self._generate_with_cache(
340 m,
341 stop=stop,
342 run_manager=run_managers[i] if run_managers else None,
343 **kwargs,
344 )
345 )
346 except BaseException as e:
347 if run_managers:
File ~/chatpdf-langchain/langchain/libs/langchain/langchain/chat_models/base.py:492, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
488 raise ValueError(
489 "Asked to cache, but no cache found at `langchain.cache`."
490 )
491 if new_arg_supported:
--> 492 return self._generate(
493 messages, stop=stop, run_manager=run_manager, **kwargs
494 )
495 else:
496 return self._generate(messages, stop=stop, **kwargs)
File ~/chatpdf-langchain/langchain/libs/langchain/langchain/chat_models/ollama.py:98, in ChatOllama._generate(self, messages, stop, run_manager, **kwargs)
80 """Call out to Ollama's generate endpoint.
81
82 Args:
(...)
94 ])
95 """
97 prompt = self._format_messages_as_text(messages)
---> 98 final_chunk = super()._stream_with_aggregation(
99 prompt, stop=stop, run_manager=run_manager, verbose=self.verbose, **kwargs
100 )
101 chat_generation = ChatGeneration(
102 message=AIMessage(content=final_chunk.text),
103 generation_info=final_chunk.generation_info,
104 )
105 return ChatResult(generations=[chat_generation])
File ~/chatpdf-langchain/langchain/libs/langchain/langchain/llms/ollama.py:156, in _OllamaCommon._stream_with_aggregation(self, prompt, stop, run_manager, verbose, **kwargs)
147 def _stream_with_aggregation(
148 self,
149 prompt: str,
(...)
153 **kwargs: Any,
154 ) -> GenerationChunk:
155 final_chunk: Optional[GenerationChunk] = None
--> 156 for stream_resp in self._create_stream(prompt, stop, **kwargs):
157 if stream_resp:
158 chunk = _stream_response_to_generation_chunk(stream_resp)
File ~/chatpdf-langchain/langchain/libs/langchain/langchain/llms/ollama.py:140, in _OllamaCommon._create_stream(self, prompt, stop, **kwargs)
138 response.encoding = "utf-8"
139 if response.status_code != 200:
--> 140 optional_detail = response.json().get("error")
141 raise ValueError(
142 f"Ollama call failed with status code {response.status_code}."
143 f" Details: {optional_detail}"
144 )
145 return response.iter_lines(decode_unicode=True)
File ~/anaconda3/envs/langchain/lib/python3.8/site-packages/requests/models.py:975, in Response.json(self, **kwargs)
971 return complexjson.loads(self.text, **kwargs)
972 except JSONDecodeError as e:
973 # Catch JSON-related errors and raise as requests.JSONDecodeError
974 # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
--> 975 raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
After attempting:
```python
# Summary chain
model = ChatOllama(model="llama2:13b-chat")
summarize_chain = {"element": lambda x:x} | prompt | model
texts = [i.text for i in text_elements if i.text != ""]
text_summaries = summarize_chain.batch(texts)
```
Returned the same error as before.
It seems that the error is occurring in the `ChatOllama` model.
### Expected behavior
I expect to be able to reproduce the results shown in Cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb without this error. | JSONDecodeError on Cookbook/Semi_structured_multi_mudal_RAG_LLaMA2.ipynb | https://api.github.com/repos/langchain-ai/langchain/issues/12180/comments | 7 | 2023-10-24T02:23:12Z | 2024-01-22T19:18:52Z | https://github.com/langchain-ai/langchain/issues/12180 | 1,958,354,853 | 12,180 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello Team,
We have a metadata field in our OpenSearch index that I would like to use to filter the passages before sending them to the LLM to generate the answer. I have tried many options suggested on chat.langchain.com, but none of them seems to work.
Here is an example:
I have a user with associated groups `['user_group1', 'user_group2']`, and the OpenSearch index contains data like:
```
{
    'text': 'Page Content......',
    'metadata': {
        'source_acl': [ {'group_name': 'user_group1', 'group_type': 'type1'}, {}, {}]
    }
}
```
So far I have been trying this:
```
chain = ConversationalRetrievalChain.from_llm(
llm=SagemakerEndpoint(
endpoint_name=constants.EMBEDDING_ENDPOINT_ARN_ALPHA,
region_name=constants.REGION,
content_handler=self.content_handler_llm,
client=self.sg_client
),
# for experimentation, you can change number of documents to retrieve here
retriever=opensearch_vectorstore.as_retriever(search_kwargs={'k': 3, 'filter': 'source_acl.group_name:user_group1'}),
memory=memory,
condense_question_prompt=custom_question_prompt,
return_source_documents=True
)
```
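For clarity, the raw OpenSearch filter I am effectively trying to express is a term filter on the nested `source_acl.group_name` metadata field. A sketch of what I mean is below; which `search_kwargs` key (if any) the vectorstore expects for an OpenSearch DSL dict is exactly what I am unsure about, and the field path / `.keyword` suffix may need adjusting for my mapping.
```python
# Illustrative only: the OpenSearch DSL form of the intended filter.
group_filter = {
    "bool": {
        "filter": [
            {"term": {"metadata.source_acl.group_name": "user_group1"}}
        ]
    }
}

retriever = opensearch_vectorstore.as_retriever(
    search_kwargs={"k": 3, "filter": group_filter}  # the kwarg name here is a guess
)
```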
Am I missing anything here?
Thanks in Advance.
### Suggestion:
_No response_ | Issue: <Not able to filter OpenSearch vectorstore using Filter in search_kwargs> | https://api.github.com/repos/langchain-ai/langchain/issues/12179/comments | 2 | 2023-10-24T01:09:55Z | 2024-02-07T16:15:18Z | https://github.com/langchain-ai/langchain/issues/12179 | 1,958,283,958 | 12,179 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
When I try to run this script:
```
class WebsiteSummary:
def _SummorizeWebsite(objective, websiteContent):
my_gpt_instance = MyGPT4ALL()
textSplitter = RecursiveCharacterTextSplitter(
separators=["\n\n", "\n"], chunk_size=10000, chunk_overlap=500
)
docs = textSplitter.create_documents([websiteContent])
print(docs)
mapPrompt = """"
write a summary of the following {objective}:
{text}
SUMMARY:
"""
map_prompt_template = PromptTemplate(
template=mapPrompt, input_variables=["text", "objective"]
)
chain = load_summarize_chain(llm=my_gpt_instance,
chain_type="map_reduce",
verbose=True,
map_prompt=objective,
combine_prompt=map_prompt_template)
output = summary_Chain.run(input_documents=docs, objective=objective)
return output
```
I get this issue:
```
Exception has occurred: ValidationError
1 validation error for LLMChain
prompt
value is not a valid dict (type=type_error.dict)
  File "E:\StormboxBrain\langchain\websiteSummary.py", line 27, in _SummorizeWebsite
    chain = load_summarize_chain(llm=my_gpt_instance,
  File "E:\StormboxBrain\langchain\app.py", line 12, in <module>
    summary = websiteSummary._SummorizeWebsite(objective, data)
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
```
My model does load and can generate responses to prompts.
mistral-7b-openorca.Q4_0.gguf
This is my `MyGPT4ALL` class:
```
import os
from pydantic import Field
from typing import List, Mapping, Optional, Any
from langchain.llms.base import LLM
from gpt4all import GPT4All, pyllmodel
class MyGPT4ALL(LLM):
"""
A custom LLM class that integrates gpt4all models
Arguments:
model_folder_path: (str) Folder path where the model lies
model_name: (str) The name of the model to use (<model name>.gguf)
backend: (str) The backend of the model (Supported backends: llama/gptj)
n_threads: (str) The number of threads to use
n_predict: (str) The maximum numbers of tokens to generate
temp: (str) Temperature to use for sampling
top_p: (float) The top-p value to use for sampling
top_k: (float) The top k values use for sampling
n_batch: (int) Batch size for prompt processing
repeat_last_n: (int) Last n number of tokens to penalize
repeat_penalty: (float) The penalty to apply repeated tokens
"""
model_folder_path: str = "./models/"
model_name: str = "mistral-7b-openorca.Q4_0.gguf"
backend: Optional[str] = "llama"
temp: Optional[float] = 0.7
top_p: Optional[float] = 0.1
top_k: Optional[int] = 40
n_batch: Optional[int] = 8
n_threads: Optional[int] = 4
n_predict: Optional[int] = 256
max_tokens: Optional[int] = 200
repeat_last_n: Optional[int] = 64
repeat_penalty: Optional[float] = 1.18
# initialize the model
gpt4_model_instance: Any = None
def __init__(self, **kwargs):
super(MyGPT4ALL, self).__init__()
print("init " + self.model_folder_path + self.model_name)
self.gpt4_model_instance = GPT4All(
model_name=self.model_name,
model_path=self.model_folder_path,
device="cpu",
verbose=True,
)
print("initiolized " + self.model_folder_path + self.model_name)
@property
def _get_model_default_parameters(self):
print("get default params")
return {
"max_tokens": self.max_tokens,
"n_predict": self.n_predict,
"top_k": self.top_k,
"top_p": self.top_p,
"temp": self.temp,
"n_batch": self.n_batch,
"repeat_penalty": self.repeat_penalty,
"repeat_last_n": self.repeat_last_n,
}
@property
def _identifying_params(self) -> Mapping[str, Any]:
print(" Get all the identifying parameters")
return {
"model_name": self.model_name,
"model_path": self.model_folder_path,
"model_parameters": self._get_model_default_parameters,
}
@property
def _llm_type(self) -> str:
return "llama"
def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs) -> str:
"""
Args:
prompt: The prompt to pass into the model.
stop: A list of strings to stop generation when encountered
Returns:
The string generated by the model
"""
params = {**self._get_model_default_parameters, **kwargs}
print("params:")
for key, value in params.items():
print(f"{key}: {value}")
print("params " + params.values())
return "initiolized"
```
### Suggestion:
Is there any documentation on how to use another LLM as the base model instead of GPT? | value is not a valid dict (type=type_error.dict) | https://api.github.com/repos/langchain-ai/langchain/issues/12176/comments | 1 | 2023-10-23T23:15:43Z | 2023-10-29T13:51:16Z | https://github.com/langchain-ai/langchain/issues/12176 | 1,958,182,263 | 12,176 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Is there a way to integrate `RetryOutputParser` with `ConversationalRetrievalChain`? Specifically, when I use OpenAI's function calling to generate the response in a specific format, the LLM does not always follow the instructions, and the output then fails Pydantic model validation.
Therefore, it's important to build the retry mechanism for the `LLMChain` (aka, `doc_chain` argument).
### Motivation
Specifically, when I use OpenAI's function calling to generate the response in a specific format, the LLM does not always follow the instructions, and the output then fails Pydantic model validation.
Therefore, it's important to build the retry mechanism for the `LLMChain` (aka, `doc_chain` argument).
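For reference, the standalone retry pattern I would like the chain to apply internally looks roughly like the sketch below (`AnswerSchema`, `bad_completion`, and `prompt_value` are placeholders; where `ConversationalRetrievalChain` would invoke this is exactly the open question):
```python
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser, RetryOutputParser

# AnswerSchema is a placeholder pydantic model describing the desired output format.
base_parser = PydanticOutputParser(pydantic_object=AnswerSchema)
retry_parser = RetryOutputParser.from_llm(parser=base_parser, llm=ChatOpenAI(temperature=0))

# When validation of the first completion fails, re-ask the model with the original prompt:
fixed_output = retry_parser.parse_with_prompt(bad_completion, prompt_value)
```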
### Your contribution
TBD | Integrate RetryOutputParser with ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/12175/comments | 8 | 2023-10-23T21:40:52Z | 2024-02-14T16:09:08Z | https://github.com/langchain-ai/langchain/issues/12175 | 1,958,069,230 | 12,175 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The idea is simple: when using various models, users have rate limits, both in requests per minute/hour and in tokens. When using the raw OpenAI SDK I used the code below; it just uses the "usage" metrics provided in the response. (As of now I don't know how to retrieve a 'raw response' when utilizing langchain queries.)
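(A possible starting point on the langchain side, which I have not verified against my use case, might be the OpenAI callback context manager that aggregates token usage per block; noting it here in case it helps frame the request:)
```python
from langchain.callbacks import get_openai_callback
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
with get_openai_callback() as cb:
    llm("Tell me a joke")
    # Token usage accumulated for every call made inside the block:
    print(cb.prompt_tokens, cb.completion_tokens, cb.total_tokens)
```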
### Motivation
This was inspired by the experience of trying to use the GPT API to generate documentation for a long list of functions of varying length. While trying to run these requests en masse, I repeatedly had my process error out because of rate-limit errors. I don't think the need to run many queries against the OpenAI API like this is an uncommon use case (though, strangely, handling it is not natively supported by the Python SDK).
### Your contribution
I would be honored to contribute this to the project! But I honestly would need some help figuring out how to integrate it. This is the base idea, though. Feel free to reach out to collaborate: [email protected] or @codeblackwell anywhere. (Apologies for the formatting; it just wouldn't cooperate.)
```python
class TokenManager:
    def __init__(self, max_tokens_per_minute=10000):
        self.max_tokens_per_minute = max_tokens_per_minute
        self.tokens_used_in_last_minute = 0
        self.last_request_time = time.time()

    def tokens_available(self):
        current_time = time.time()
        time_since_last_request = current_time - self.last_request_time
        if time_since_last_request >= 60:
            self.tokens_used_in_last_minute = 0
            self.last_request_time = current_time
        return self.max_tokens_per_minute - self.tokens_used_in_last_minute

    def update_tokens_used(self, tokens_used):
        if not isinstance(tokens_used, int) or tokens_used < 0:
            raise ValueError("Invalid token count")
        self.tokens_used_in_last_minute += tokens_used

    def get_usage_stats(self):
        current_time = time.time()
        time_since_last_request = current_time - self.last_request_time
        if time_since_last_request >= 60:
            self.tokens_used_in_last_minute = 0
            self.last_request_time = current_time
        absolute_usage = self.tokens_used_in_last_minute
        percent_usage = (self.tokens_used_in_last_minute / self.max_tokens_per_minute) * 100
        return absolute_usage, percent_usage


token_manager = TokenManager()


def fetch_response(code_snippet, system_prompt, token_manager):
    """
    Fetches a response using the OpenAI API.
    Parameters:
        code_snippet (str): The code snippet to be processed.
        system_prompt (str): The system prompt.
    Returns:
        dict: The response from the API.
    """
    while token_manager.tokens_available() < 2500:  # Assume a minimum of 10 tokens per request
        time.sleep(1)
    try:
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": code_snippet}
            ]
        )
        tokens_used = response['usage']['total_tokens']
        token_manager.update_tokens_used(tokens_used)
        return response
    except Exception as e:
        print(f"An error occurred: {str(e)}")


def main(df, system_prompt):
    """
    Processes a DataFrame of code snippets.
    Parameters:
        df (pd.DataFrame): The DataFrame containing code snippets.
        system_prompt (str): The system prompt.
    Returns:
        pd.DataFrame: The DataFrame with the added output column.
    """
    responses = [fetch_response(row['code'], system_prompt) for _, row in df.iterrows()]
    df['output'] = responses
    return df
```
Run the main function:
`new_df = asyncio.run(main(filtered_df, system_prompt_1))` | Token Usage Management System | https://api.github.com/repos/langchain-ai/langchain/issues/12172/comments | 3 | 2023-10-23T20:36:34Z | 2024-03-11T11:55:29Z | https://github.com/langchain-ai/langchain/issues/12172 | 1,957,982,299 | 12,172 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am trying to set up the local development environment with `poetry install --with test`, but I am running into the error `Group(s) not found: test (via --with)`.
```
> origin/master:master via 🐍 v3.11.6
> 57% langchain ❯ poetry install --with test
Group(s) not found: test (via --with)
```
Could you advise on what might be happening here?
### Who can help?
@eyurtsev, I think you might be able to advise here? 🙏🏼
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
- Clone repository
- With poetry installed, run `poetry install --with test`
### Expected behavior
Local development setup works without an error. | `poetry install --with test`: Group(s) not found: test (via --with) | https://api.github.com/repos/langchain-ai/langchain/issues/12170/comments | 3 | 2023-10-23T19:28:02Z | 2023-10-23T19:46:02Z | https://github.com/langchain-ai/langchain/issues/12170 | 1,957,880,716 | 12,170 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
How can I see the context that was used to answer a specific question?
```python
from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(k=3, memory_key="chat_history", return_messages=True)
qa = ConversationalRetrievalChain.from_llm(llm_model, vector_store.as_retriever(search_type='similarity', search_kwargs={'k': 2}), memory=memory)

query = "Can I as a manager fly business class?"

## predict with the model
text = f"""[INST] <<SYS>> You are a HR-assistant that answers questions:
<</SYS>>
{query} [/INST]
"""
result = qa({"question": text})
result
```
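One approach I am considering but have not verified: asking the chain to return its source documents. A sketch, assuming `return_source_documents=True` exposes them under `result["source_documents"]` and that the memory then needs an explicit `output_key`:
```python
memory = ConversationBufferWindowMemory(
    k=3,
    memory_key="chat_history",
    return_messages=True,
    output_key="answer",  # assumption: tells the memory which output to store
)
qa = ConversationalRetrievalChain.from_llm(
    llm_model,
    vector_store.as_retriever(search_type="similarity", search_kwargs={"k": 2}),
    memory=memory,
    return_source_documents=True,
)
result = qa({"question": text})
for doc in result["source_documents"]:  # the context chunks used for the answer
    print(doc.page_content, doc.metadata)
```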
### Suggestion:
_No response_ | Issue: View context used in query in ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/12167/comments | 3 | 2023-10-23T18:10:39Z | 2024-02-10T16:10:57Z | https://github.com/langchain-ai/langchain/issues/12167 | 1,957,733,689 | 12,167 |
[
"hwchase17",
"langchain"
]
| ### System Info
make lint errors
```
make lint
./scripts/check_pydantic.sh .
./scripts/check_imports.sh
poetry run ruff .
[ "." = "" ] || poetry run black . --check
All done! ✨ 🍰 ✨
1866 files would be left unchanged.
[ "." = "" ] || poetry run mypy .
langchain/document_loaders/parsers/pdf.py:95: error: "PdfObject" has no attribute "keys" [attr-defined]
langchain/document_loaders/parsers/pdf.py:98: error: Value of type "PdfObject" is not indexable [index]
Found 2 errors in 1 file (checked 1863 source files)
make: *** [lint] Error 1
```
for line 95:
```
if not self.extract_images or "/XObject" not in page["/Resources"].keys():
```
and line 98:
```
xObject = page["/Resources"]["/XObject"].get_object()
```
on master 7c4f340cc, v0.320
For my PR I commented the two lines with `# type: ignore`, but I am keeping this issue here as a reference so the underlying typing problem can be fixed properly.
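For reference, the suppression in my PR looks roughly like this (whether to scope the ignores to the specific error codes is an open choice):
```python
# langchain/document_loaders/parsers/pdf.py
if not self.extract_images or "/XObject" not in page["/Resources"].keys():  # type: ignore[attr-defined]
    ...

xObject = page["/Resources"]["/XObject"].get_object()  # type: ignore[index]
```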
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```make lint```
### Expected behavior
no mypy error | mypy linting error in PyPDFParser with pypdf 3.16.4 | https://api.github.com/repos/langchain-ai/langchain/issues/12166/comments | 4 | 2023-10-23T17:34:36Z | 2024-02-01T13:46:22Z | https://github.com/langchain-ai/langchain/issues/12166 | 1,957,670,819 | 12,166 |
[
"hwchase17",
"langchain"
]
| Updated: 2023-12-06
Hello everyone! Thank you all for your contributions! We've made a lot of progress with SecretStrs in the code base.
First time contributors -- hope you had fun learning how to work in the code base and thanks for putting in the time. All contributors -- thanks for all your efforts in improving LangChain.
We'll create a new first time issue in a few months.
------
Hello LangChain community,
We're always happy to see more folks getting involved in contributing to the LangChain codebase.
This is a good first issue if you want to learn more about how to set up
for development in the LangChain codebase.
## Goal
Your contribution will make it safer to print out a LangChain object
without having any secrets included in raw format in the string representation.
## Set up for development
Prior to making any changes in the code:
https://github.com/langchain-ai/langchain/blob/master/.github/CONTRIBUTING.md
Make sure you're able to test, format, lint from `langchain/libs/langchain`
```sh
make test
```
```sh
make format
```
```sh
make lint
```
## Should you accept
Should you accept this challenge, please claim one (and only one) of the modules from the list
below as the one you will be working on, and respond to this issue.
Once you've made the required code changes, open a PR and link to this issue.
## Acceptance Criteria
- invoking str or repr on the object does not show the secret key
Integration tests for the code are updated to include tests that:
- confirm the object can be initialized with an API key provided via the initializer
- confirm the object can be initialized with an API key provided via an env variable
Confirm that it works:
- either re-run the notebook for the given object or add an appropriate test
that confirms that the actual secret is used appropriately (i.e., `.get_secret_value()`)
**If your code does not use `get_secret_value()` somewhere, then it probably contains a bug!**
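For illustration, a minimal sketch of the behaviour you are aiming for (the class and field names below are made up; follow the conventions of the module you claim):
```python
from langchain.pydantic_v1 import BaseModel, SecretStr

class FakeIntegration(BaseModel):  # stand-in for the LLM / chat model class you edit
    api_key: SecretStr

obj = FakeIntegration(api_key="super-secret")
print(obj)                             # api_key=SecretStr('**********') -- nothing leaks
print(obj.api_key.get_secret_value())  # "super-secret" -- only revealed at the point of use
```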
## Modules
- [ ] langchain/chat_models/anyscale.py @aidoskanapyanov
- [ ] langchain/chat_models/azure_openai.py @onesolpark
- [x] langchain/chat_models/azureml_endpoint.py @fyasla
- [ ] langchain/chat_models/baichuan.py
- [ ] langchain/chat_models/everlyai.py @sfreisthler
- [x] langchain/chat_models/fireworks.py @nepalprabin
- [ ] langchain/chat_models/google_palm.py @faisalt14
- [ ] langchain/chat_models/javelin_ai_gateway.py
- [ ] langchain/chat_models/jinachat.py
- [ ] langchain/chat_models/konko.py
- [ ] langchain/chat_models/litellm.py
- [ ] langchain/chat_models/openai.py @AnasKhan0607
- [ ] langchain/chat_models/tongyi.py
- [ ] langchain/llms/ai21.py
- [x] langchain/llms/aleph_alpha.py @slangenbach
- [ ] langchain/llms/anthropic.py
- [ ] langchain/llms/anyscale.py @aidoskanapyanov
- [ ] langchain/llms/arcee.py
- [x] langchain/llms/azureml_endpoint.py
- [ ] langchain/llms/bananadev.py
- [ ] langchain/llms/cerebriumai.py
- [ ] langchain/llms/cohere.py @arunsathiya
- [ ] langchain/llms/edenai.py @kristinspenc
- [ ] langchain/llms/fireworks.py
- [ ] langchain/llms/forefrontai.py
- [ ] langchain/llms/google_palm.py @Harshil-Patel28
- [ ] langchain/llms/gooseai.py
- [ ] langchain/llms/javelin_ai_gateway.py
- [ ] langchain/llms/minimax.py
- [ ] langchain/llms/nlpcloud.py
- [ ] langchain/llms/openai.py @HassanA01
- [ ] langchain/llms/petals.py @akshatvishu
- [ ] langchain/llms/pipelineai.py
- [ ] langchain/llms/predibase.py
- [ ] langchain/llms/stochasticai.py
- [x] langchain/llms/symblai_nebula.py @praveenv
- [ ] langchain/llms/together.py
- [ ] langchain/llms/tongyi.py
- [ ] langchain/llms/writer.py @ommirzaei
- [ ] langchain/llms/yandex.py
### Motivation
Prevent secrets from being printed out when printing the given langchain object.
### Your contribution
Please sign up by responding to this issue and including the name of the module. | For New Contributors: Use SecretStr for api_keys | https://api.github.com/repos/langchain-ai/langchain/issues/12165/comments | 56 | 2023-10-23T17:29:51Z | 2024-01-07T21:13:38Z | https://github.com/langchain-ai/langchain/issues/12165 | 1,957,660,279 | 12,165 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello team!
I am new to the OpenAPI Specs Toolkit. I am trying to build an OpenAPI Agent Planner for the Microsoft Graph API. The OpenAPI specs can be found here: https://github.com/microsoftgraph/msgraph-metadata/blob/master/openapi/v1.0/openapi.yaml . I downloaded the YAML file and followed this notebook: https://python.langchain.com/docs/integrations/toolkits/openapi#1st-example-hierarchical-planning-agent .
```python
import os
import yaml
from langchain.agents.agent_toolkits.openapi.spec import reduce_openapi_spec
current_directory = os.path.abspath('')
data_directory = os.path.join(current_directory, "data")
msgraph_api_file = os.path.join(data_directory, "msgraph-openapi.yaml")
raw_msgraph_api_spec = yaml.load(open(msgraph_api_file,encoding='utf-8').read(), Loader=yaml.Loader)
msgraph_api_spec = reduce_openapi_spec(raw_msgraph_api_spec)
```
Does anyone know how to handle large OpenAPI specs? I ran the code above to read my OpenAPI YAML spec, and when calling `reduce_openapi_spec` I get the following error:
`RecursionError: maximum recursion depth exceeded while calling a Python object`
Is there a setting I need to change in LangChain? Thank you for your help in advance. I believe this issue occurs before I would even hit the token limit described here, right? https://github.com/langchain-ai/langchain/issues/2786
### Suggestion:
_No response_ | OpenAPI Spec - reduce_openapi_spec - maximum recursion depth exceeded | https://api.github.com/repos/langchain-ai/langchain/issues/12163/comments | 6 | 2023-10-23T16:23:35Z | 2024-06-12T16:07:06Z | https://github.com/langchain-ai/langchain/issues/12163 | 1,957,544,509 | 12,163 |
[
"hwchase17",
"langchain"
]
| There's an issue when using JsonSpec: it can't handle the case where it indexes a list with `0`, because in [line 52](https://github.com/langchain-ai/langchain/blob/d0505c0d4755d8cbe62a8ddee68f53a5cb330892/libs/langchain/langchain/tools/json/tool.py#L52) it checks `if i`, and since `i == 0` that evaluates to `False`.
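For illustration, the truthiness pitfall in question (a minimal sketch, not the actual library code):
```python
i = 0  # a perfectly valid list index
if i:              # falsy for 0, so index 0 is wrongly treated as missing
    print("found")
if i is not None:  # an existence-style check that still accepts 0
    print("found")
```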
Instead, I believe this should check purely for the existence of `i`, whether it is a string or an int. | JsonSpec `keys` method doesn't handle indexing lists properly | https://api.github.com/repos/langchain-ai/langchain/issues/12160/comments | 1 | 2023-10-23T15:41:45Z | 2024-02-06T16:16:01Z | https://github.com/langchain-ai/langchain/issues/12160 | 1,957,467,852 | 12,160 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.320
Python 3.9.6
Issue:
For a StructuredChatAgent using StructuredChatOutputParser, the LLM output looks like ` ```json \n{\n \"action\": \"...``` ` (note the space after `json`).
The regex in the parser `re.compile(r"```(?:json)?\n(.*?)```", re.DOTALL)` does not allow for a space after `json`.
LLM: Claude v2
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create an agent using StructuredChatAgent and a tool that is called by the agent.
If the output from the LLM contains a space after `json`, the parser returns an `AgentFinish` instead of the intended action.
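For illustration, the kind of loosened pattern I have in mind (a sketch only, not necessarily the final fix):
```python
import re

# Tolerate trailing spaces/tabs after the optional "json" language tag.
pattern = re.compile(r"```(?:json)?[ \t]*\n(.*?)```", re.DOTALL)
```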
### Expected behavior
The expected behaviour is that the output should match the regex. | StructuredChatOutputParser regex not accounting for space | https://api.github.com/repos/langchain-ai/langchain/issues/12158/comments | 1 | 2023-10-23T14:15:02Z | 2023-10-27T09:37:53Z | https://github.com/langchain-ai/langchain/issues/12158 | 1,957,280,320 | 12,158 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi,
I am working on a student-teacher model. I am running into a problem where the agent does not generate a final answer after the expected observations. Below are my code and output.
Code
```
assignment_instruction_2 = """
1. Ask his name and then welcome him
2. Ask the topic he wants to understand
3. Ask about his understanding about the topic
3. Let him answer
4. Then take out the relevant information from the get_next_content tool
5. If it is incomplete then tell him complete answer
"""
FORMAT_INSTRUCTIONS = f"""Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{'get_topics', 'get_next_content', 'get_topic_tool'}]
Action Input: the input to the action
Observation: the detailed,at most comprehensive result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer based on my observation, i do not need anything more
Final Answer: the final answer to the original input question is the full detailed explanation from the Observation provided as bullet points.
You have to provide the answer maximum after 2 Thoughts.
"""
content_assignment = f"""You are the author of the book, You will be asked an assigenment by the user, which are typically students only.
help them to understand the topic according to the instructions delimited by /* */. follow them sequestially
/*{assignment_instruction_2} */
memory_assignment = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm_assignment = ChatOpenAI(openai_api_key=os.getenv("OPENAI_API_KEY"), temperature=0, model = "gpt-4")
agent_chain_assignment = initialize_agent(
tools,
llm_assignment,
agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
verbose=True,
memory=memory_assignment,
max_iterations=3,
# early_stopping_method="generate",
format_instructions=FORMAT_INSTRUCTIONS,
agent_kwargs={"system_message": content_assignment},
handle_parsing_errors=_handle_error
)
print("<================= TOPIC Understand AGENT ====================>")
while(True):
user_message = input("User: ")
if user_message == 'exit':
break
response = agent_chain_assignment.run(user_message)
print("LLM : ",response)
print("===============================")
```
Output:
````
<================= TOPIC Understand AGENT ====================>
User: hi, sagar
> Entering new AgentExecutor chain...
```json
{
"action": "Final Answer",
"action_input": "Hello Sagar, welcome! What topic would you like to understand better today?"
}
```
> Finished chain.
LLM : Hello Sagar, welcome! What topic would you like to understand better today?
===============================
User: farming
> Entering new AgentExecutor chain...
Could not parse LLM output: That's a broad topic, Sagar. Could you tell me what you already understand about farming? This will help me provide you with the most relevant information.
Observation: Could not parse LLM output: That's a broad topic,
Thought:Could not parse LLM output: I'm sorry for the confusion, Sagar. You mentioned that you want to understand more about farming. Could you please specify what exactly you know about farming? This will help me to provide you with the most relevant information.
Observation: Could not parse LLM output: I'm sorry for the conf
Thought:Could not parse LLM output: I'm sorry for the confusion, Sagar. You mentioned that you want to understand more about farming. Could you please specify what exactly you know about farming? This will help me to provide you with the most relevant information.
Observation: Could not parse LLM output: I'm sorry for the conf
Thought:
> Finished chain.
LLM : Agent stopped due to iteration limit or time limit.
===============================
````
It was supposed to output the final answer after the first exchange (right after the user said "farming"). But it is running in a loop, and sometimes it also assumes that the user has already answered.
Can I make some corrections to the code so that the agent produces the final answer right where it needs to?
Thanks,
Sagar
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Provided in the question itself
### Expected behavior
Provided in the explanation above. | Agent is not stopping after each answer | https://api.github.com/repos/langchain-ai/langchain/issues/12157/comments | 2 | 2023-10-23T13:32:40Z | 2024-02-08T16:15:15Z | https://github.com/langchain-ai/langchain/issues/12157 | 1,957,182,305 | 12,157 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hey folks, I think I stumbled on a bug (or I'm using langchain wrong).
Langchain version : 0.0.320
Platform Ubuntu 23.04
Python: 3.11.4
### Who can help?
@hwchase17 , @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the following:
```python
llm = llms.VertexAI(model_name="code-bison@001", max_output_tokens=1000, temperature=0.0)
prediction = llm.predict(""" write a fibonacci sequence in python""")
from pprint import pprint
pprint(prediction)
```
### Expected behavior
We get a prediction
(Adding more info here since the form ran out of fields.)
I think the bug is in [llms/vertexai.py:301](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/vertexai.py#L301C23-L301C23). The variable `res` is a `TextGenerationResponse` as opposed to a `MultiCandidateTextGenerationResponse`, hence there are no `candidates` as you would expect from a chat model.
This happens because, in Google's SDK (`vertexai/language_models/_language_models.py`), the method `_ChatSessionBase._parse_chat_prediction_response` returns a `MultiCandidateTextGenerationResponse`, but both `CodeChatSession` and `CodeGenerationModel` return a `TextGenerationResponse`.
I think the fix might be replacing
`generations.append([_response_to_generation(r) for r in res.candidates])`
with
```
if self.is_codey_model:
generations.append([_response_to_generation(res)])
else:
generations.append([_response_to_generation(r) for r in res.candidates])
```
Happy to send a PR if it helps. | Langchain crashes when retrieving results from vertexai codey models | https://api.github.com/repos/langchain-ai/langchain/issues/12156/comments | 2 | 2023-10-23T13:02:18Z | 2024-02-08T16:15:20Z | https://github.com/langchain-ai/langchain/issues/12156 | 1,957,115,754 | 12,156 |
[
"hwchase17",
"langchain"
]
| ### System Info
`langchain==0.0.320`
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
from langchain.vectorstores import MatchingEngine
texts = [
"The cat sat on",
"the mat.",
"I like to",
"eat pizza for",
"dinner.",
"The sun sets",
"in the west.",
]
vector_store = MatchingEngine.from_components(
texts=texts,
project_id="<my_project_id>",
region="<my_region>",
gcs_bucket_uri="<my_gcs_bucket>",
index_id="<my_matching_engine_index_id>",
endpoint_id="<my_matching_engine_endpoint_id>",
)
vector_store.add_texts(texts=texts)
```
### Expected behavior
I think line 136 in matching_engine.py should be
```py
jsons.append(json_)
``` | MatchingEngine.add_texts() doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/12154/comments | 1 | 2023-10-23T12:28:00Z | 2023-11-19T18:10:42Z | https://github.com/langchain-ai/langchain/issues/12154 | 1,957,048,261 | 12,154 |
[
"hwchase17",
"langchain"
]
| ### System Info
`langchain==0.0.320`
Example file: https://drive.google.com/file/d/1zDj3VXohUO7x4udu9RY9KxWbzBUIrsvb/view?usp=sharing
```
from PyPDF2 import PdfReader
from langchain.text_splitter import CharacterTextSplitter
# Specify the PDF file to be processed
pdf = "Deep Learning.pdf"
# Initialize a PdfReader object to read the PDF file
pdf_reader = PdfReader(pdf)
# Initialize an empty string to store the extracted text from the PDF
text = ""
for i, page in enumerate(pdf_reader.pages):
text += f" ### Page {i}:\n\n" + page.extract_text()
assert text.count(" ### ") == 100
text_splitter = CharacterTextSplitter(
separator=" ### ",
)
chunks = text_splitter.split_text(text)
len(chunks) ## = 76
```
`len(chunks)` is 76, while there are 100 occurrences of the separator ` ### `. All the splits by ` ### ` look decent, so I am not sure why they are being merged strangely. Please advise! Thanks!
Thank you so much!
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from PyPDF2 import PdfReader
from langchain.text_splitter import CharacterTextSplitter
# Specify the PDF file to be processed
pdf = "Deep Learning.pdf"
# Initialize a PdfReader object to read the PDF file
pdf_reader = PdfReader(pdf)
# Initialize an empty string to store the extracted text from the PDF
text = ""
for i, page in enumerate(pdf_reader.pages):
text += f" ### Page {i}:\n\n" + page.extract_text()
assert text.count(" ### ") == 100
text_splitter = CharacterTextSplitter(
separator=" ### ",
)
chunks = text_splitter.split_text(text)
len(chunks) ## = 76
```
### Expected behavior
`len(chunks) == 100` | CharacterTextSplitter return incorrect chunks | https://api.github.com/repos/langchain-ai/langchain/issues/12151/comments | 2 | 2023-10-23T09:37:40Z | 2024-02-08T16:15:26Z | https://github.com/langchain-ai/langchain/issues/12151 | 1,956,756,560 | 12,151 |
[
"hwchase17",
"langchain"
]
| ### System Info
Ubuntu: 22.04.3 LTS
python: 3.10
pip: 22.0.2
langchain: 0.0.319
vector stores: faiss
llm model: llama-2-13b-chat.Q4_K_M.gguf and mistral-7b-openorca.Q4_K_M.gguf
embeddings model: thenlper/gte-large
### Who can help?
anyone
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Prompt is created via PromptTemplate.
In Prompt:
```
You are a financial consultant. Try to answer the question based on the information below.
If you cannot answer the question based on the information, say that you cannot find the answer or are unable to find the answer.
Therefore, try to understand the context of the question and answer only on the basis of the information given. Do not write answers that are irrelevant.
If you do not have enough information to end a sentence with a period, then do not write that sentence.
Context: {context}
Question: {question}
Helpful answer:
```
2. Load the vector store using HuggingFaceEmbeddings and FAISS:
```
embeddings = HuggingFaceEmbeddings(model_name="thenlper/gte-large", model_kwargs={'device': 'cpu'})
db = FAISS.load_local(persist_directory, embeddings)
retriever = db.as_retriever(search_kwargs={"k": 2})
```
3. Load the LLM via LlamaCpp or CTransformers
LlamaCpp code:
```
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(
model_path=model_path,
n_gpu_layers=-1,
use_mlock=True,
n_batch=512,
n_ctx=2048,
temperature=0.8,
f16_kv=True,
callback_manager=callback_manager,
verbose=True,
top_k=1
)
```
CTransformers Code:
```
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = CTransformers(
model=model_path,
model_type="llama",
config={'max_new_tokens': 2048, 'temperature': 0.8},
callbacks=callback_manager
)
```
4. Create Retriever using RetrievalQA:
```
qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=return_source_documents,
chain_type_kwargs={"prompt": custom_prompt}
)
```
5. Query the created retriever: qa("your question")
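(For reference, the length-related parameters I suspect are involved; this is an assumption on my part, not something I have confirmed:)
```python
from langchain.llms import CTransformers, LlamaCpp

# Assumption: LlamaCpp's `max_tokens` caps the number of generated tokens.
llm_cpp = LlamaCpp(model_path=model_path, n_ctx=2048, max_tokens=1024)

# Assumption: CTransformers accepts `context_length` in `config` to raise the 512-token bound.
llm_ct = CTransformers(
    model=model_path,
    model_type="llama",
    config={"max_new_tokens": 1024, "context_length": 2048},
)
```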
### Expected behavior
I expect to receive a complete response from the LLM.
However, when using LlamaCpp, the response reaches about 1000 characters and then stops; the last sentence from the LLM is abruptly cut off.
When CTransformers are used, the response can reach different lengths, after which an error occurs: "Number of tokens (513) exceeded maximum context length (512)" and there are many of these errors. After that I get a response where the beginning is correct and then the same words are repeated. The repetition of words starts from the moment the errors about the number of tokens begin. | LlamaCpp truncates the response from LLM, and CTransformers gives a token overrun error | https://api.github.com/repos/langchain-ai/langchain/issues/12150/comments | 2 | 2023-10-23T09:32:40Z | 2024-02-13T16:10:07Z | https://github.com/langchain-ai/langchain/issues/12150 | 1,956,747,660 | 12,150 |
[
"hwchase17",
"langchain"
]
| ### System Info
python == 3.10.12
langchain == 0.0.312
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
open_llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY, model_name='gpt-3.5-turbo-16k', max_tokens=3000)
# test chain (LCEL)
human_message = HumanMessagePromptTemplate.from_template('test prompt: {input}')
system_message = SystemMessagePromptTemplate.from_template('system')
all_messages = ChatPromptTemplate.from_messages([system_message, human_message])
test_chain = all_messages | open_llm | StrOutputParser()
# LLMChain
class MyLLMChain(LLMChain):
output_key: str = "output" #: :meta private
prompt_infos = [
{
"name": "mastermind",
"description": "for general questions.",
"chain": MyLLMChain(llm=openai_chatbot, prompt=PromptTemplate(template='''{input}''',
input_variables=["input"]
))
},
{
"name": "test",
"description": "for testing",
"chain": test_chain
}]
destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str)
router_prompt = PromptTemplate(
template=router_template,
input_variables=["input"],
# output_parser=OpenAIFunctionsAgentOutputParser()
# output_parser=RouterOutputParser(),
output_parser=OutputFixingParser.from_llm(parser=RouterOutputParser(), llm=openai_chatbot),
)
router_chain = LLMRouterChain.from_llm(openai_chatbot, router_prompt)
destination_chains = {}
for p_info in prompt_infos:
name = p_info["name"]
chain = p_info["chain"]
destination_chains[name] = chain
default_chain = ConversationChain(llm=openai_chatbot, output_key="output")
agent_yan_chain = MultiPromptChain(
router_chain=router_chain,
destination_chains=destination_chains,
default_chain=default_chain,
verbose=True
)
```
Error message:
```
ValidationError: 1 validation error for MultiPromptChain
destination_chains -> test
Can't instantiate abstract class Chain with abstract methods _call, input_keys, output_keys (type=type_error)
```
### Expected behavior
LCEL should work with MultiPromptChain | LCEL not working with MultiPromptChain | https://api.github.com/repos/langchain-ai/langchain/issues/12149/comments | 7 | 2023-10-23T09:10:57Z | 2024-06-26T05:16:03Z | https://github.com/langchain-ai/langchain/issues/12149 | 1,956,707,298 | 12,149 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.11, langchain 0.0.315 and mac
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a FastAPI endpoint/router
```python
from fastapi import APIRouter, Response
from pydantic import BaseModel
from typing import Optional
from langchain.schema import Document
router = APIRouter()
class QueryOut(BaseModel):
answer: str
sources: list[Document]
@router.post("/query")
async def get_answer():
sources = [Document(page_content="foo", metadata={"baa": "baaaa"})]
return QueryOut(answer="answer", sources=sources)
```
Then call it and boom
```bash
File "/Users/FRANCESCO.ZUPPICHINI/Documents/dbi_alphaai_knowledge_bot/.venv/lib/python3.11/site-packages/pydantic/main.py", line 164, in __init__
__pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
TypeError: BaseModel.validate() takes 2 positional arguments but 3 were given
```
Is this linked to the fact that langchain is still using pydantic v1? If so, I love you guys but put that 30M to good use 😄
### Expected behavior
Should not explode | Langchain Document schema explodes with FastAPI | https://api.github.com/repos/langchain-ai/langchain/issues/12147/comments | 2 | 2023-10-23T07:35:17Z | 2023-10-23T13:17:32Z | https://github.com/langchain-ai/langchain/issues/12147 | 1,956,539,711 | 12,147 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello, I have a use case involving several different types of documents. I can parse them using langchain document loaders, but the documents also contain images. I want to store those images as metadata so that, when an answer is generated from a context chunk, the related image is shown as well. Please help.
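What I have in mind (a rough sketch of the direction, not something I have working) is to attach the extracted image references to each chunk's metadata and surface them alongside the answer; the file names and helper below are hypothetical:
```python
from langchain.schema import Document

# Hypothetical extraction results: a text chunk plus the image files found near it.
chunk_text = "text extracted from page 3 of the report"
nearby_images = ["figures/page3_img1.png", "figures/page3_img2.png"]

doc = Document(
    page_content=chunk_text,
    metadata={"source": "report.pdf", "page": 3, "images": nearby_images},
)

# Later, with a chain that returns source documents, the images attached to the
# chunks that produced the answer could be displayed next to it, e.g.:
# for d in result["source_documents"]:
#     show_images(d.metadata.get("images", []))
```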
### Suggestion:
_No response_ | How to show images from PDFs, PPT, DOCs documents as part of answer? | https://api.github.com/repos/langchain-ai/langchain/issues/12144/comments | 2 | 2023-10-23T02:16:49Z | 2024-02-08T16:15:35Z | https://github.com/langchain-ai/langchain/issues/12144 | 1,956,221,942 | 12,144 |
[
"hwchase17",
"langchain"
]
| ### System Info
version:
- langchain-0.0.320
- py311
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Running the [example code](https://python.langchain.com/docs/integrations/retrievers/arxiv)
```python
from langchain.retrievers import ArxivRetriever
retriever = ArxivRetriever(load_max_docs=2)
docs = retriever.get_relevant_documents(query="1605.08386")
```
The above code gives the error:
`AttributeError: partially initialized module 'arxiv' has no attribute 'Search' (most likely due to a circular import)`
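This message usually points at the `arxiv` module being shadowed (for example by a local file named `arxiv.py`) or a different `arxiv` distribution being installed, rather than at the retriever itself; that is an assumption about this environment, but a quick check is:
```python
import arxiv

# If this prints a path to a local file (e.g. ./arxiv.py) instead of site-packages,
# the PyPI "arxiv" package is being shadowed and arxiv.Search will be missing.
print(arxiv.__file__)
print(hasattr(arxiv, "Search"))
```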
### Expected behavior
Should print out
```
{'Published': '2016-05-26',
'Title': 'Heat-bath random walks with Markov bases',
'Authors': 'Caprice Stanley, Tobias Windisch',
'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}
``` | AttributeError: partially initialized module 'arxiv' has no attribute 'Search' (most likely due to a circular import) | https://api.github.com/repos/langchain-ai/langchain/issues/12143/comments | 2 | 2023-10-23T01:04:04Z | 2023-12-06T02:37:31Z | https://github.com/langchain-ai/langchain/issues/12143 | 1,956,153,788 | 12,143 |
[
"hwchase17",
"langchain"
]
| ### System Info
The latest version of langchain pins the peer dependency `googleapis` to version 126.0.1. I am wondering why it has to be exactly googleapis v126.0.1; the pin introduces tech debt.
Please relax the peer dependency version for `googleapis` if there is no specific reason for the pin.
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The issue occurs when trying to install both the latest version of langchain and the latest version of googleapis in a Node.js project via npm.
### Expected behavior
langchain shouldn't prevent latest googleapis to be installed. | latest version langchain locks peer dependency: "googleapis" version to 126.0.1 | https://api.github.com/repos/langchain-ai/langchain/issues/12142/comments | 2 | 2023-10-23T00:55:32Z | 2024-01-31T16:33:43Z | https://github.com/langchain-ai/langchain/issues/12142 | 1,956,147,176 | 12,142 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
On this page
This example:
```
class SearchInput(BaseModel):
query: str = Field(description="should be a search query")
@tool("search", return_direct=True, args_schema=SearchInput)
def search_api(query: str) -> str:
"""Searches the API for the query."""
return "Results"
search_api
```
Fails with:
```
ValidationError: 1 validation error for StructuredTool
args_schema
subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel)
```
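One common cause of this particular error (an assumption here, since the environment is not shown) is that `BaseModel`/`Field` come from pydantic v2 while this version of LangChain validates tools against pydantic v1. A sketch of the usual workaround:
```python
# Importing the v1 compatibility models instead of top-level pydantic
from pydantic.v1 import BaseModel, Field
from langchain.tools import tool


class SearchInput(BaseModel):
    query: str = Field(description="should be a search query")


@tool("search", return_direct=True, args_schema=SearchInput)
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return "Results"
```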
### Idea or request for content:
nothing missing, but example fails
Name: langchain
Version: 0.0.311 | DOC: Example from Custom Tool documentation fails | https://api.github.com/repos/langchain-ai/langchain/issues/12138/comments | 2 | 2023-10-22T18:25:42Z | 2023-10-22T18:48:16Z | https://github.com/langchain-ai/langchain/issues/12138 | 1,956,004,892 | 12,138 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
When starting Docusaurus in a local container environment using the `make docs_build` command, it runs at `localhost:3000`, but a connection from the host machine is not possible.
The container has both `poetry` and `yarn` installed.
Even though port 3000 is forwarded to the host, the connection is not established.
https://github.com/langchain-ai/langchain/blob/master/docs/package.json#L7
I am proficient with Python but struggle with React, which might be influencing my troubleshooting.
Thank you!!
### Idea or request for content:
By adding the `--host 0.0.0.0` option to the `start` command in `package.json`, the issue might be resolved.
```diff
-"start": "rm -rf ./docs/api && docusaurus start",
+"start": "rm -rf ./docs/api && docusaurus start --host 0.0.0.0",
```
- With this change, it would be possible to access the port that's being forwarded from within the container to the host environment.
- However, using the `--host 0.0.0.0` option can have security implications, so caution is advised especially in production environments. | DOC: Issue Connecting to Docusaurus in Local Container Environment | https://api.github.com/repos/langchain-ai/langchain/issues/12127/comments | 1 | 2023-10-22T06:06:45Z | 2024-02-06T16:16:26Z | https://github.com/langchain-ai/langchain/issues/12127 | 1,955,775,678 | 12,127 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Microsoft OneNote loader like EverNoteLoader or ObsidianLoader
### Motivation
N/A
### Your contribution
N/A | Is there any loader for Microsoft OneNote like EverNoteLoader or ObsidianLoader | https://api.github.com/repos/langchain-ai/langchain/issues/12125/comments | 10 | 2023-10-22T04:36:08Z | 2024-06-07T11:49:38Z | https://github.com/langchain-ai/langchain/issues/12125 | 1,955,757,359 | 12,125 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I found that using a `RetrievalQA` chain for streaming outputs a gibberish response. For example, using a `RetrievalQA` with the code below on the `state_of_the_union.txt` example:
```
doc_chain = load_qa_chain(
llm=ChatOpenAI(
streaming=True,
openai_api_key=api_key,
callbacks=[StreamingStdOutCallbackHandler()]
),
chain_type="map_reduce",
verbose=False,
)
retrieval_qa = RetrievalQA(
combine_documents_chain=doc_chain,
retriever=retriever,
)
```
With this call: `retrieval_qa.run("What did the president say about Ketanji Brown Jackson")` outputs this streamed response:
```
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.""And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.""And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence."The given portion of the document does not mention anything about what the president said about Ketanji Brown Jackson.The given portion of the document does not provide any information about what the president said about Ketanji Brown Jackson.
```
While using `VectorDBQA`:
```
qa = VectorDBQA.from_chain_type(
llm=ChatOpenAI(
streaming=True,
openai_api_key=api_key,
callbacks=[StreamingStdOutCallbackHandler()]
),
chain_type="map_reduce",
vectorstore=db
)
```
Outputs this:
```
The President said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, who is one of our nation's top legal minds and will continue Justice Breyer's legacy of excellence.
```
Is there any reason as to why the `VectorDBQA` chains have been deprecated in favour of the `RetrievalQA` chains? At first I thought the gibberish streaming was related to using PGVector, but I even tried it with Chroma and am having the same issue.
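One plausible explanation (not confirmed here) is that with `chain_type="map_reduce"` every intermediate map call streams its tokens through the same `StreamingStdOutCallbackHandler`, so the per-document summaries and the final answer all end up interleaved on stdout. A sketch that only streams the final answer, assuming the single-call `stuff` chain fits the retrieved context:
```python
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

streaming_llm = ChatOpenAI(
    streaming=True,
    openai_api_key=api_key,
    callbacks=[StreamingStdOutCallbackHandler()],
)

# "stuff" makes a single LLM call, so only the final answer is streamed to stdout
retrieval_qa = RetrievalQA.from_chain_type(
    llm=streaming_llm,
    chain_type="stuff",
    retriever=retriever,
)
retrieval_qa.run("What did the president say about Ketanji Brown Jackson")
```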
### Suggestion:
_No response_ | Issue: `VectorDBQA` is a better chain than `RetrievalQA` when it comes to streaming model response | https://api.github.com/repos/langchain-ai/langchain/issues/12124/comments | 2 | 2023-10-22T04:31:46Z | 2024-02-06T16:16:31Z | https://github.com/langchain-ai/langchain/issues/12124 | 1,955,756,498 | 12,124 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Feature request to integrate a new Langchain tool that makes recommendations on Steam games based on a user's Steam ID and provides game information based on a given Steam game name.
### Motivation
We recognize the current challenges users face when discovering games on Steam—cluttered search results and lack of tailored information. To revolutionize this experience, we propose integrating Langchain, offering personalized game recommendations based on a user's Steam ID and comprehensive game details based on specific titles.
Langchain's algorithm will provide personalized game suggestions aligned with users' preferences, eliminating irrelevant options. It'll also furnish detailed game insights—prices, discounts, popularity, and latest news—for informed decision-making.
This integration aims to streamline game discovery, enhance user satisfaction, and foster a vibrant gaming community on Steam. We're eager to discuss this enhancement and its potential to transform the Steam experience.
### Your contribution
We are a group of 4 looking to contribute to Langchain as a course project and will be creating a PR for this issue in mid November. | Feature: Steam game recommendation tool | https://api.github.com/repos/langchain-ai/langchain/issues/12120/comments | 8 | 2023-10-21T20:44:10Z | 2023-12-15T04:37:11Z | https://github.com/langchain-ai/langchain/issues/12120 | 1,955,656,927 | 12,120 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.285, python 3.11.5, Windows 11
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
``` python
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.document_loaders import TextLoader
from langchain.memory import ConversationBufferMemory
from mysecrets import secrets
loader = TextLoader("./sotu.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(documents)
model_name = "hkunlp/instructor-large"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': True}
embed_instruction = "Represent the document for summary"
embedding_function = HuggingFaceInstructEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
embed_instruction = embed_instruction
)
vectorstore = Chroma.from_documents(documents, embedding_function)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = OpenAI(openai_api_key=secrets["OPENAI-APIKEY"],temperature=0)
qa = ConversationalRetrievalChain.from_llm(llm=llm, retriever=vectorstore.as_retriever(), memory=memory, return_source_documents=True)
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query})
```
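A workaround that is commonly suggested (hedged, since I have not re-run it against this exact version): tell the memory which output key to store, because with `return_source_documents=True` the chain returns both `answer` and `source_documents`.
```python
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    output_key="answer",  # store only the answer, ignore source_documents
)
qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    memory=memory,
    return_source_documents=True,
)
result = qa({"question": "What did the president say about Ketanji Brown Jackson"})
```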
### Expected behavior
When `return_source_documents` is set to False, code runs as intended. When ConversationalRetrievalChain uses the default value for `memory`, code runs as intended. If ConversationalRetrievalChain is used with memory and source documents are to be returned, the code fails since chat_memory.py (https://github.com/langchain-ai/langchain/blame/master/libs/langchain/langchain/memory/chat_memory.py) is expecting only one key. | ConversationalRetrievalChain cannot return source documents when using ConversationBufferWindowMemory | https://api.github.com/repos/langchain-ai/langchain/issues/12118/comments | 2 | 2023-10-21T20:06:18Z | 2023-10-22T00:11:00Z | https://github.com/langchain-ai/langchain/issues/12118 | 1,955,647,569 | 12,118 |
[
"hwchase17",
"langchain"
]
| ### Feature request
There seems to be a conflation between configuration code and inheritance trees that makes it nearly impossible to extend langchain functionality without recreating the entire class inheritance. I ran into this almost immediately when trying to customize agents and executors to my application's needs while still taking advantage of the pre-assembled styles (e.g. Zero-Shot ReAct). I had to recreate my style of zero-shot executor and agent by copying the parameters from the subclasses into my forked instances of the bases.
My proposal is to decouple these two concepts and have a single inheritance tree for functionality (e.g. agents, chains) and a separate notion of a “config” class that can hold the parameterization for an object. The factory methods on the core classes can then accept a config parameter to create an instance with the desired parameterization. This enables extension in two dimensions - any core object (agent, chain, etc) x any configuration. Furthermore new configurations can easily be created by end users or even extensions of existing ones (maybe you want to swap out one parameter from a predefined config without copy pasting the whole code).
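A rough sketch of what this could look like (all names here are hypothetical, not existing LangChain APIs):
```python
from dataclasses import dataclass, field, replace
from typing import Any, Dict, List


@dataclass
class AgentConfig:
    """Holds only parameterization; no behaviour."""
    prompt_prefix: str = ""
    prompt_suffix: str = ""
    stop_sequences: List[str] = field(default_factory=lambda: ["\nObservation:"])
    extra: Dict[str, Any] = field(default_factory=dict)


# A predefined config plays the role a subclass plays today
ZERO_SHOT_REACT = AgentConfig(prompt_prefix="Answer the following questions as best you can.")


class Agent:
    """Single functional class; behaviour is not duplicated per style."""

    def __init__(self, llm: Any, tools: Any, config: AgentConfig):
        self.llm, self.tools, self.config = llm, tools, config

    @classmethod
    def from_config(cls, llm: Any, tools: Any, config: AgentConfig = ZERO_SHOT_REACT) -> "Agent":
        return cls(llm, tools, config)


# Tweak one parameter without re-creating any class hierarchy
my_config = replace(ZERO_SHOT_REACT, prompt_suffix="Answer in French.")
```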
### Motivation
Architectural improvement and future extensibility of the framework
### Your contribution
PR forthcoming with a POC when I have some time to abstract what I did for my projects into the langchain source but wanted to seed this for discussion! | Architecture: Decouple Configuration from Inheritance for better extensibility | https://api.github.com/repos/langchain-ai/langchain/issues/12117/comments | 2 | 2023-10-21T19:33:24Z | 2024-02-06T16:16:36Z | https://github.com/langchain-ai/langchain/issues/12117 | 1,955,638,250 | 12,117 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Feature request to integrate the SendGrid API into Langchain for enhanced email communication and notifications.
**Relevant links:**
1. [SendGrid API Documentation](https://sendgrid.com/docs/API_Reference/index.html): The official documentation for the SendGrid API, offering comprehensive information on how to use the API for sending emails and managing email communication.
2. [SendGrid GitHub Repository](https://github.com/sendgrid/sendgrid-python): The GitHub repository for SendGrid's official Python library, which can be utilized for API integration.
### Motivation
Langchain is committed to offering a seamless platform for language and communication-related tools. By integrating the SendGrid API, we aim to bring advanced email communication capabilities to the platform, enhancing its utility and user experience.
SendGrid is a renowned email platform that provides a reliable and efficient way to send, receive, and manage emails. By incorporating the SendGrid API into Langchain, we enable users to send emails, manage notifications, and enhance their communication within the platform.
Our goal is to provide Langchain users with a feature-rich email system that allows them to send notifications, updates, and alerts directly from Langchain. This integration will streamline communication, offering users a seamless experience and the ability to keep stakeholders informed and engaged through email.
In summary, the SendGrid API integration project is designed to extend Langchain's capabilities by incorporating a powerful email communication tool. This will help users effectively manage email notifications and stay connected with stakeholders within the platform.
### Your contribution
We are 4 University of Toronto students and are very interested in contributing to Langchain for a course project. We will be
creating a PR that implements this feature sometime in mid November. | SendGrid API Integration For Enhanced Email Communication | https://api.github.com/repos/langchain-ai/langchain/issues/12116/comments | 2 | 2023-10-21T19:00:04Z | 2024-02-09T16:13:23Z | https://github.com/langchain-ai/langchain/issues/12116 | 1,955,627,328 | 12,116 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Request to Integrate Stack Exchange API into Langchain for enhanced information access and interaction.
**Relevant links:**
1. **Stack Exchange API Documentation**: The official documentation for the Stack Exchange API, providing detailed information on how to interact with the API and retrieve data.
[Stack Exchange API Documentation](https://api.stackexchange.com/docs)
2. **Stack Exchange API Client GitHub Repository**: Community-supported libraries or tools that interact with the API.
[Stack Exchange API Client GitHub Repository](https://github.com/benatespina/StackExchangeApiClient)
### Motivation
Langchain aims to provide a comprehensive and seamless platform for accessing and interacting with a wide array of linguistic and language-related resources. In this context, integrating the Stack Exchange API would greatly enhance the utility of the platform.
Stack Exchange hosts a plethora of specialized communities, each dedicated to specific domains of knowledge and expertise. These communities generate valuable content in the form of questions, answers, and discussions. By integrating the Stack Exchange API into Langchain, we can enable users to access this wealth of information directly from within the platform.
What we aim to provide is a convenient gateway for Langchain users to explore Stack Exchange communities, search for relevant questions and answers, and actively participate in discussions. This integration would not only simplify the process of finding authoritative information on various topics but also facilitate direct engagement with experts and enthusiasts in their respective fields.
In summary, our Stack Exchange API integration project is designed to enrich the Langchain experience by offering a direct link to the vast knowledge repositories of Stack Exchange communities. It will empower users to seamlessly navigate between these platforms, harnessing the collective wisdom of these communities for their linguistic and language-related endeavors.
### Your contribution
We are 4 University of Toronto students and are very interested in contributing to Langchain for a course project. We will be
creating a PR that implements this feature sometime in mid November. | Stack Exchange API Integration | https://api.github.com/repos/langchain-ai/langchain/issues/12115/comments | 2 | 2023-10-21T18:49:17Z | 2024-02-06T16:16:46Z | https://github.com/langchain-ai/langchain/issues/12115 | 1,955,624,167 | 12,115 |
[
"hwchase17",
"langchain"
]
| ### System Info
#### Description:
I encountered an issue with the UnstructuredURLLoader class from the "langchain" library, specifically in the libs/langchain/langchain/document_loaders/url.py module. When trying to handle exceptions for a failed request, I observed that the exception raised by the library doesn't inherit from the base class Exception. This makes it challenging to handle exceptions properly in my code.
#### **Actual Behavior**:
The exception raised by the **`UnstructuredURLLoader`** doesn't inherit from the base class **`Exception`**, causing difficulty in handling exceptions effectively.
#### **Error Message**:
```Error fetching or processing https://en.wikipesdfdia.org/, exception: HTTPSConnectionPool(host='en.wikipesdfdia.org', port=443): Max retries exceeded with url: / (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x000002113AD12F10>: Failed to resolve 'en.wikipesdfdia.org' ([Errno 11001] getaddrinfo failed)"))```
### Who can help?
@eyurtsev @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
from langchain.document_loaders import UnstructuredURLLoader
from urllib3 import HTTPSConnectionPool
url = 'https://en.wikipesdfdia.org/'
try:
unstructured_loader = UnstructuredURLLoader([url])
all_content = unstructured_loader.load()
print("Request was successful")
except HTTPSConnectionPool as e:
# Handle HTTPConnectionPool exceptions
print(f"An HTTPConnectionPool exception occurred: {e}")
except Exception as e:
# Handle other exceptions
print(f"An unexpected error occurred: {e}")
```
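For reference, a sketch of what the exception handling could look like. Note that `HTTPSConnectionPool` is a connection-pool class, not an exception, so it cannot be used in an `except` clause, and the message above appears to be logged by the loader itself rather than raised. The sketch assumes the loader's `continue_on_failure` flag behaves as its name suggests; which exception type actually propagates depends on the underlying HTTP client, so the broad `Exception` fallback is kept.
```python
import requests
from langchain.document_loaders import UnstructuredURLLoader

url = "https://en.wikipesdfdia.org/"
try:
    # continue_on_failure=False asks the loader to raise instead of logging and moving on
    loader = UnstructuredURLLoader(urls=[url], continue_on_failure=False)
    docs = loader.load()
    print("Request was successful")
except requests.exceptions.RequestException as e:
    # DNS failures, refused connections and timeouts derive from this base class
    print(f"A request exception occurred: {e}")
except Exception as e:
    print(f"An unexpected error occurred: {e}")
```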
### Expected behavior
I expected the exception raised by the **`UnstructuredURLLoader`** to be derived from the base class **`Exception`**, which would allow for proper exception handling. | Issue with Exception Handling: UnstructuredURLLoader Does Not Raise Exceptions Inheriting from Base Class Exception | https://api.github.com/repos/langchain-ai/langchain/issues/12112/comments | 2 | 2023-10-21T16:52:54Z | 2024-02-06T16:16:51Z | https://github.com/langchain-ai/langchain/issues/12112 | 1,955,575,850 | 12,112 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain:0.0.319
Python:3.11.2
System: macOS 13.5.2 arm64
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I implemented a generic module to handle many types of chat models. I executed the following script after I set `OPENAI_API_KEY=xxx` in `.env` file.
```python
from dataclasses import dataclass, field
from langchain.schema.language_model import BaseLanguageModel
from langchain.chat_models import ChatOpenAI
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
@dataclass
class ModelConfig:
"""Model Configuration"""
model: BaseLanguageModel
    arguments: dict = field(
        default_factory=dict
    )  # This could be any arguments for the BaseLanguageModel child class
def init_model(self) -> BaseLanguageModel:
return self.model(**self.arguments)
if __name__ == "__main__":
model = ModelConfig(model=ChatOpenAI, arguments=dict())
print(model)
```
And then the output contains **a leaked API key**!
```sh
<class 'openai.api_resources.chat_completion.ChatCompletion'> openai_api_key=xxxx>
```
### Expected behavior
My idea off the top of my head is to override, say, the `__str__` method inside the original class, but that seems like just a workaround for the ChatOpenAI class. A more generic feature in `BaseLanguageModel` would be better: the base class could expose a common property that handles the API token for any LLM model, and that property could be a special type that hides the value when other classes and functions read it.
https://github.com/langchain-ai/langchain/blob/08f5e6b8012f5eda2609103f33676199a3781a15/libs/langchain/langchain/chat_models/openai.py#L181-L238
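A minimal sketch of the kind of wrapper I have in mind; pydantic already ships something similar (`SecretStr`), shown here only for illustration — whether LangChain adopts it across all model wrappers is the open question:
```python
from pydantic import BaseModel, SecretStr


class ExampleModelSettings(BaseModel):
    openai_api_key: SecretStr


settings = ExampleModelSettings(openai_api_key="sk-very-secret")
print(settings)                                     # openai_api_key=SecretStr('**********')
print(settings.openai_api_key.get_secret_value())   # explicit access is required to reveal it
```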
## Similar issue.
#8499 | API Key Leakage in the BaseLanguageModel | https://api.github.com/repos/langchain-ai/langchain/issues/12110/comments | 3 | 2023-10-21T16:08:21Z | 2024-02-10T16:11:07Z | https://github.com/langchain-ai/langchain/issues/12110 | 1,955,553,596 | 12,110 |
[
"hwchase17",
"langchain"
]
| ### System Info
When upgrading from version 0.317 to 0.318, there is a bug regarding the Pydantic model validations. This issue doesn't happen in versions before 0.318.
File "...venv/lib/python3.10/site-packages/langchain/retrievers/google_vertex_ai_search.py", line 268, in __init__
self._serving_config = self._client.serving_config_path(
File "pydantic/main.py", line 357, in pydantic.main.BaseModel.__setattr__
ValueError: "GoogleVertexAISearchRetriever" object has no field "_serving_config"
### Who can help?
I tagged both of you, @eyurtsev and @kreneskyp, regarding this version 0.318 change, in case it is related: https://github.com/langchain-ai/langchain/pull/11936.
Thanks for this amazing library!
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Execute the following code:
```
# Imports added for completeness; PROJECT_ID, DATA_STORE_ID and MODEL are placeholders
from google.cloud import aiplatform
from langchain.chains import RetrievalQA
from langchain.llms import VertexAI
from langchain.retrievers import GoogleVertexAISearchRetriever


def main():
aiplatform.init(project=PROJECT_ID)
llm = VertexAI(model_name=MODEL, temperature=0.0)
retriever = GoogleVertexAISearchRetriever(
project_id=PROJECT_ID, search_engine_id=DATA_STORE_ID
)
search_query = "Who was the CEO of DeepMind in 2021?"
retrieval_qa = RetrievalQA.from_chain_type(
llm=llm, chain_type="stuff", retriever=retriever
)
answer = retrieval_qa.run(search_query)
print(answer)
if __name__ == "__main__":
main()
```
### Expected behavior
To output the answer from the model, as it does in versions before 0.318 | "GoogleVertexAISearchRetriever" object has no field "_serving_config". It's a Pydantic-related bug. | https://api.github.com/repos/langchain-ai/langchain/issues/12100/comments | 2 | 2023-10-21T11:31:24Z | 2023-10-24T08:20:04Z | https://github.com/langchain-ai/langchain/issues/12100 | 1,955,451,984 | 12,100 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Feature request to integrate a new Langchain tool that summarizes Reddit user interactions/discussions based on user input.
### Motivation
Reddit is a vast platform with numerous discussions and threads covering a wide range of topics. Users often seek information, insights, and opinions on specific subjects of interest within these discussions. However, navigating through these discussions and identifying the most valuable insights can be a time-consuming and sometimes overwhelming process.
What we aim to provide is a summarization toolkit that simplifies the experience of exploring Reddit. With this toolkit, users can input a topic or subject they are interested in, and it will not only aggregate relevant discussions (posts) from Reddit but also generate concise and coherent summaries of these discussions. This tool streamlines the process of distilling key information, popular opinions, and diverse perspectives from Reddit threads, making it easier for users to stay informed and engaged with the platform's wealth of content.
In summary, our Reddit summarization toolkit is designed to save users time and effort by delivering informative and easily digestible summaries of Reddit discussions, ultimately enhancing their experience in accessing and comprehending the vast amount of information and opinions present on the platform.
### Your contribution
We are a group of 4 looking to contribute to Langchain as a course project and will be creating a PR for this issue in mid/late November. | feature: Reddit API Tool | https://api.github.com/repos/langchain-ai/langchain/issues/12097/comments | 3 | 2023-10-21T05:49:41Z | 2023-12-03T17:29:57Z | https://github.com/langchain-ai/langchain/issues/12097 | 1,955,338,588 | 12,097 |
[
"hwchase17",
"langchain"
]
When running on Colab, it gives the following error:
```
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Unzipping tokenizers/punkt.zip.
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /root/nltk_data...
[nltk_data] Unzipping taggers/averaged_perceptron_tagger.zip.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-2-3092bb1466a5>](https://localhost:8080/#) in <cell line: 7>()
5
6 # Get elements
----> 7 raw_pdf_elements = partition_pdf(filename=path+"LLaVA.pdf",
8 # Using pdf format to find embedded image blocks
9 extract_images_in_pdf=True,
8 frames
[/usr/local/lib/python3.10/dist-packages/unstructured_inference/models/base.py](https://localhost:8080/#) in get_model(model_name, **kwargs)
70 else:
71 raise UnknownModelException(f"Unknown model type: {model_name}")
---> 72 model.initialize(**initialize_params)
73 models[model_name] = model
74 return model
TypeError: UnstructuredYoloXModel.initialize() got an unexpected keyword argument 'extract_images_in_pdf'
```
https://github.com/langchain-ai/langchain/blob/5dbe456aae755e3190c46316102e772dfcb6e148/cookbook/Semi_structured_and_multi_modal_RAG.ipynb#L103 | How to run Semi_structured_and_multi_modal_RAG.ipynb | https://api.github.com/repos/langchain-ai/langchain/issues/12096/comments | 2 | 2023-10-21T02:51:14Z | 2024-02-08T16:15:50Z | https://github.com/langchain-ai/langchain/issues/12096 | 1,955,247,057 | 12,096 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Cohere's upcoming (currently in BETA) Co.Chat + RAG API Endpoint will offer additional parameters such as "documents" and "connectors" that can be used for RAG. This is the major new feature of the Cohere Coral models (RAG)
See https://docs.cohere.com/reference/chat
GPT-4 (and the Other Chat Models following soon after) I am sure will have similar ability to offer RAG with its various new chat models.
Would the ability to add additional parameters to API chat endpoints (such as a document folder or a url etc...) be able to be supported via the Langchain Runnable Interface once these new Chat endpoints API's are GA?
This support would be needed for the external RAG for both the Input Type (_PromptValue?_) and the Output Type. (ChatMessage)
### Motivation
The main motivation for this proposal for updating the Input and Output Types in the Langchain Runnable Interface _(if that is indeed the correct place from a code standpoint this would be supported_) is to support RAG inputs and outputs (HumanMessage and AIMessage) with the newer Chat Models from Cohere, OpenAI etc. and in very near future support multimodal LLMs which are fast becoming the norm.
### Your contribution
I would need a bit of help with the PR. | Chat API Endpoints with RAG | https://api.github.com/repos/langchain-ai/langchain/issues/12094/comments | 3 | 2023-10-21T00:44:36Z | 2024-02-01T21:05:22Z | https://github.com/langchain-ai/langchain/issues/12094 | 1,955,167,241 | 12,094 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Implement MemGPT for better memory management with local and hosted models:
https://github.com/cpacker/MemGPT
### Motivation
This can help automate the manually creation and retrieval of vector stores.
### Your contribution
I'm not really familiar with the langchain repo at the moment, I could provide feedback on the implementation from a users perspective. | Integrate MemGPT | https://api.github.com/repos/langchain-ai/langchain/issues/12091/comments | 8 | 2023-10-20T20:33:10Z | 2024-07-10T16:05:21Z | https://github.com/langchain-ai/langchain/issues/12091 | 1,954,978,462 | 12,091 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Many of the non-OpenAI LLMs struggle to complete the first "Thought" part of the prompt. Falcon, Llama, Mistral, Zephyr, and many others will successfully choose the action and action input, but leave out an initial thought.
### Motivation
For the very first interaction, the agent scratchpad is empty, and the first thought isn't always necessarily helpful, especially if the first action and action input are valid.
Interestingly, removing the "thought" portion of the prompt often results in the LLM properly completing the thought. However, doing so breaks Langchain because Langchain is expecting the agent_scratchpad variable to be present in the template for completion.
An alternative could be to make the entire "Thought" part of the prompt only present if agent_scratchpad isn't empty. However, that would require "building" the prompt template conditionally.
### Your contribution
Most of this takes place in either the MRKL agent:
https://github.com/langchain-ai/langchain/tree/master/libs/langchain/langchain/agents/mrkl
Or potentially the ReAct Docstore agent:
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/react/base.py | Improve flexibility of ReAct agent on the first iteration | https://api.github.com/repos/langchain-ai/langchain/issues/12087/comments | 1 | 2023-10-20T19:18:11Z | 2023-10-25T15:54:28Z | https://github.com/langchain-ai/langchain/issues/12087 | 1,954,895,641 | 12,087 |