Dataset columns (name: type, min-max length/value):
- issue_owner_repo: list, length 2-2
- issue_body: string, length 0-261k
- issue_title: string, length 1-925
- issue_comments_url: string, length 56-81
- issue_comments_count: int64, 0-2.5k
- issue_created_at: string, length 20-20
- issue_updated_at: string, length 20-20
- issue_html_url: string, length 37-62
- issue_github_id: int64, 387k-2.46B
- issue_number: int64, 1-127k
[ "hwchase17", "langchain" ]
### Feature request
When I use llama.cpp directly for my case, I run into [problems](https://github.com/ggerganov/llama.cpp/issues/2619). I've tried passing "-eps 1e-5" and it works well for me. But when I use ```from langchain.llms import LlamaCpp``` I get the same problems, and I don't know how to pass that parameter through. Thank you.

### Motivation
Relevant to encode/decode behaviour.

### Your contribution
[Llama cpp](https://github.com/ggerganov/llama.cpp)
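One hedged way to forward the setting, assuming your installed llama-cpp-python build accepts an `rms_norm_eps` keyword: LangChain's `LlamaCpp` exposes a `model_kwargs` dict whose entries are passed to the underlying `llama_cpp.Llama` constructor. A minimal sketch (the model path is a placeholder):

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",   # placeholder path
    # Extra keyword arguments forwarded to llama_cpp.Llama; rms_norm_eps is
    # assumed to be supported by the installed llama-cpp-python version.
    model_kwargs={"rms_norm_eps": 1e-5},
)
print(llm("Hello"))
```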
norm_eps parameters in LlamaCpp
https://api.github.com/repos/langchain-ai/langchain/issues/11169/comments
2
2023-09-28T11:48:48Z
2024-01-30T00:41:57Z
https://github.com/langchain-ai/langchain/issues/11169
1,917,344,258
11,169
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I am writing a few apps that use LangChain and deploying them to AWS Lambda. My previous 2 lambdas are running smoothly, but the one I'm trying to deploy this week is giving me the following stack trace when the lambda is run: ``` "errorMessage": "[Errno 38] Function not implemented", "errorType": "OSError", "requestId": "23ef7a93-3d00-475c-b495-421810f93d92", "stackTrace": [ " File \"/var/task/datadog_lambda/wrapper.py\", line 214, in __call__\n self.response = self.func(event, context, **kwargs)\n", " File \"/var/task/app.py\", line 4, in lambda_handler\n total_indexed_pages = get_latest_changes()\n", " File \"/var/task/ddtrace/contrib/aws_lambda/patch.py\", line 115, in __call__\n self.response = self.func(*args, **kwargs)\n", " File \"/var/task/app.py\", line 4, in lambda_handler\n total_indexed_pages = get_latest_changes()\n", " File \"/var/task/index_latest_docs.py\", line 137, in get_latest_changes\n raise e\n", " File \"/var/task/index_latest_docs.py\", line 126, in get_latest_changes\n Pinecone.from_documents(docs, embedding=embeddings, index_name=PINECONE_INDEX_NAME, namespace=PINECONE_NAMESPACE)\n", " File \"/var/task/langchain/schema/vectorstore.py\", line 420, in from_documents\n return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)\n", " File \"/var/task/langchain/vectorstores/pinecone.py\", line 416, in from_texts\n pinecone.add_texts(\n", " File \"/var/task/langchain/vectorstores/pinecone.py\", line 138, in add_texts\n async_res = [\n", " File \"/var/task/langchain/vectorstores/pinecone.py\", line 139, in <listcomp>\n self._index.upsert(\n", " File \"/var/task/pinecone/core/utils/error_handling.py\", line 17, in inner_func\n return func(*args, **kwargs)\n", " File \"/var/task/pinecone/index.py\", line 150, in upsert\n return self._upsert_batch(vectors, namespace, _check_type, **kwargs)\n", " File \"/var/task/pinecone/index.py\", line 237, in _upsert_batch\n return self._vector_api.upsert(\n", " File \"/var/task/pinecone/core/client/api_client.py\", line 776, in __call__\n return self.callable(self, *args, **kwargs)\n", " File \"/var/task/pinecone/core/client/api/vector_operations_api.py\", line 956, in __upsert\n return self.call_with_http_info(**kwargs)\n", " File \"/var/task/pinecone/core/client/api_client.py\", line 838, in call_with_http_info\n return self.api_client.call_api(\n", " File \"/var/task/pinecone/core/client/api_client.py\", line 421, in call_api\n return self.pool.apply_async(self.__call_api, (resource_path,\n", " File \"/var/task/pinecone/core/client/api_client.py\", line 107, in pool\n self._pool = ThreadPool(self.pool_threads)\n", " File \"/var/lang/lib/python3.11/multiprocessing/pool.py\", line 930, in __init__\n Pool.__init__(self, processes, initializer, initargs)\n", " File \"/var/lang/lib/python3.11/multiprocessing/pool.py\", line 196, in __init__\n self._change_notifier = self._ctx.SimpleQueue()\n", " File \"/var/lang/lib/python3.11/multiprocessing/context.py\", line 113, in SimpleQueue\n return SimpleQueue(ctx=self.get_context())\n", " File \"/var/lang/lib/python3.11/multiprocessing/queues.py\", line 341, in __init__\n self._rlock = ctx.Lock()\n", " File \"/var/lang/lib/python3.11/multiprocessing/context.py\", line 68, in Lock\n return Lock(ctx=self.get_context())\n", " File \"/var/lang/lib/python3.11/multiprocessing/synchronize.py\", line 167, in __init__\n SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)\n", " File \"/var/lang/lib/python3.11/multiprocessing/synchronize.py\", line 57, in __init__\n 
sl = self._semlock = _multiprocessing.SemLock ``` On line 126 in my index_latest_docs.py file, I have the following line: `Pinecone.from_documents(docs, embedding=embeddings, index_name=PINECONE_INDEX_NAME, namespace=PINECONE_NAMESPACE)` This is causing it to not run in AWS Lambda because Lambda cannot handle multiprocessing, which the function seems to be doing internally. I don't know if this is a newly introduced issue, because I'm using the same exact function in my other lambda apps, and they seem to be running fine - but maybe the containers were built with a different version than this more recent one. Any help would be greatly appreciated ### Suggestion: _No response_
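One hedged workaround while `Pinecone.add_texts` issues async upserts through a thread pool (which needs multiprocessing primitives Lambda does not provide): embed the documents yourself and upsert synchronously with the Pinecone client. The index name, namespace, and id scheme below are placeholders:

```python
import pinecone
from langchain.embeddings import OpenAIEmbeddings

pinecone.init(api_key="...", environment="...")   # placeholder credentials
index = pinecone.Index("my-index")                 # placeholder index name
embeddings = OpenAIEmbeddings()

texts = [d.page_content for d in docs]             # `docs` from your own loader
vectors = embeddings.embed_documents(texts)
payload = [
    (f"doc-{i}", vec, {"text": text})              # hypothetical id scheme
    for i, (vec, text) in enumerate(zip(vectors, texts))
]
# Synchronous upserts in small batches; no ThreadPool or SemLock involved.
for start in range(0, len(payload), 100):
    index.upsert(vectors=payload[start:start + 100], namespace="my-namespace")
```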
Issue: Pinecone.from_documents cannot run on AWS Lambda because of multiprocessing
https://api.github.com/repos/langchain-ai/langchain/issues/11168/comments
13
2023-09-28T11:38:04Z
2024-05-11T23:12:40Z
https://github.com/langchain-ai/langchain/issues/11168
1,917,319,585
11,168
[ "hwchase17", "langchain" ]
### System Info langchain==0.0.299 Python 3.10.13 OS: Windows 10 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [x] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python from langchain.agents import initialize_agent from langchain.agents.agent_types import AgentType from langchain.chat_models import AzureChatOpenAI from langchain.agents.tools import Tool from langchain.agents import load_tools from langchain.agents import create_csv_agent from langchain.memory import ConversationBufferMemory llm = AzureChatOpenAI( model="gpt-4", temperature=0, openai_api_version="2023-05-15", verbose=True, ) tools = load_tools(["wikipedia", "llm-math"], llm=llm) csv_agent = create_csv_agent( llm=llm, path=["sample_02.csv"], # some sales data verbose=True, handle_parsing_errors=True, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, memory=ConversationBufferMemory(), ) csv_tool = Tool( name="CSV Agent", func=csv_agent.run, description="Useful for interacting with CSV data.", ) tools.append(csv_tool) agent = initialize_agent( llm=llm, tools=tools, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True, handle_parsing_errors=True, memory=ConversationBufferMemory(), ) agent.run("What is the sum of sales") ``` ### Expected behavior I am getting exception: ``` File "C:\Users\X\lib\site-packages\langchain\agents\mrkl\output_parser.py", line 52, in parse raise OutputParserException( langchain.schema.output_parser.OutputParserException: Could not parse LLM output: 'some info about data comes here' ``` Expected: No exception and agent should have same behaviour as if I would use the CSV agent standalone
[BUG] OutputParserException when using CSV agent as a tool for another agent
https://api.github.com/repos/langchain-ai/langchain/issues/11167/comments
5
2023-09-28T11:15:57Z
2023-10-05T13:33:50Z
https://github.com/langchain-ai/langchain/issues/11167
1,917,278,742
11,167
[ "hwchase17", "langchain" ]
### System Info Langchain 0.0.302 Pydantic 2.4.1 reproducing code: ``` from langchain.chains import create_extraction_chain_pydantic from langchain.chat_models import ChatOpenAI from pydantic import BaseModel, model_validator from typing import Any class TestData(BaseModel): company_name: str llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo") chain = create_extraction_chain_pydantic(pydantic_schema=TestData, llm=llm) extracted_subsidiary_list = chain.run('') ``` Exception: **RuntimeError: no validator found for <class '__main__.TestData'>, see `arbitrary_types_allowed` in Config** Tried adding: (didn't help) ``` @model_validator(mode='after') def dummy_validator(self): return self @model_validator(mode='before') @classmethod def dummy_validator1(cls, data: Any) -> Any: return data ``` ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` from langchain.chains import create_extraction_chain_pydantic from langchain.chat_models import ChatOpenAI from pydantic import BaseModel, model_validator from typing import Any class TestData(BaseModel): company_name: str llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo") chain = create_extraction_chain_pydantic(pydantic_schema=TestData, llm=llm) extracted_subsidiary_list = chain.run('') ``` ### Expected behavior Doesn't throw exception
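A hedged workaround for this class of error: langchain 0.0.3xx still builds its internal models on the pydantic v1 compatibility layer, so defining the extraction schema with `langchain.pydantic_v1.BaseModel` instead of pydantic 2's `BaseModel` often avoids the validator lookup failure (an assumption, not verified against this exact version):

```python
from langchain.chains import create_extraction_chain_pydantic
from langchain.chat_models import ChatOpenAI
from langchain.pydantic_v1 import BaseModel  # v1-compatible BaseModel bundled with langchain


class TestData(BaseModel):
    company_name: str


llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo")
chain = create_extraction_chain_pydantic(pydantic_schema=TestData, llm=llm)
result = chain.run("Acme Corp announced a new product today.")
```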
Runtime Exception in create_extraction_chain_pydantic (RuntimeError: no validator found)
https://api.github.com/repos/langchain-ai/langchain/issues/11166/comments
6
2023-09-28T11:09:25Z
2024-01-30T00:38:09Z
https://github.com/langchain-ai/langchain/issues/11166
1,917,264,832
11,166
[ "hwchase17", "langchain" ]
### System Info Hi Team, The issue is while calling the ChatAnthropic function with the right anthropic secret key. The type SecretStr is not getting resolved. Discussed over here:https://github.com/langchain-ai/langchain/pull/10724 ### Who can help? @josh ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction To recreate the error : from langchain.chat_models import ChatAnthropic claude_chat = ChatAnthropic(anthropic_api_key="xy-key") ### Expected behavior It should initialize the claude chat
Argument 'value' has incorrect type (expected str, got SecretStr) (type=type_error) in chatAnthropic- Induced in 0.0.301
https://api.github.com/repos/langchain-ai/langchain/issues/11165/comments
4
2023-09-28T11:04:07Z
2023-10-03T12:20:10Z
https://github.com/langchain-ai/langchain/issues/11165
1,917,252,474
11,165
[ "hwchase17", "langchain" ]
### Feature request Add `hierarchy` key to Document metadata returned by `ConfluenceLoader`. So, for example if a page is located under the pages Page A/Page B, the `hierarchy` should be equal to `Page A/Page B` This could be achieved by calling the confluence endpoint using `expand = "ancestors"`. For example [this](https://github.com/langchain-ai/langchain/blob/cfa2203c626a2287d60c1febeb3e3a68b77acd77/libs/langchain/langchain/document_loaders/confluence.py#L282) would become ```python pages = self.paginate_request( self.confluence.get_all_pages_from_space, space=space_key, limit=limit, max_pages=max_pages, status="any" if include_archived_content else "current", expand=f"{content_format.value},ancestors", ) ``` ### Motivation Page Hierarchy is very important in a confluence-like space and it would definitely be relevant when searching for documents in the Vector Store. ### Your contribution I am willing to submit a PR to add this feature.
Add `hierarchy` key to Document metadata in `ConfluenceLoader`
https://api.github.com/repos/langchain-ai/langchain/issues/11163/comments
2
2023-09-28T10:33:12Z
2024-01-30T00:37:05Z
https://github.com/langchain-ai/langchain/issues/11163
1,917,192,653
11,163
[ "hwchase17", "langchain" ]
### Feature request It looks like most of the models of the `Completion` API have been marked as legacy and will be discontinued in early 2024. On Azure OpenAI I already no longer have access to non-chat models and I can't use the completion API with the newest get-35-turbo and gpt4 (0613). Shall we raise a warning to the user when instantiating `langchain.llms.OpenAI` and `langchain.llms.AzureOpenAI` with deprecated models? References: - https://openai.com/blog/gpt-4-api-general-availability - https://platform.openai.com/docs/deprecations ### Your contribution I can contribute to this if the feature makes sense.
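A rough sketch of what such a warning could look like; the model list below is a hypothetical subset, and the authoritative deprecation schedule is the OpenAI page linked above:

```python
import warnings

# Hypothetical subset of legacy Completions-API models.
LEGACY_COMPLETION_MODELS = {"text-davinci-003", "text-davinci-002", "text-curie-001"}


def warn_if_legacy_model(model_name: str) -> None:
    """Emit a DeprecationWarning when a legacy Completions-API model is requested."""
    if model_name in LEGACY_COMPLETION_MODELS:
        warnings.warn(
            f"Model '{model_name}' uses the legacy Completions API, which is scheduled "
            "for retirement in early 2024; consider migrating to a chat model.",
            DeprecationWarning,
            stacklevel=2,
        )


warn_if_legacy_model("text-davinci-003")
```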
Raise a warning when using OpenAI and Azure OpenAI Completion API
https://api.github.com/repos/langchain-ai/langchain/issues/11159/comments
1
2023-09-28T09:27:24Z
2024-01-30T00:40:21Z
https://github.com/langchain-ai/langchain/issues/11159
1,917,083,704
11,159
[ "hwchase17", "langchain" ]
template = """Use the following pieces of context to answer the question at the end.
If the answer can't be determined using only the information in the provided context simply output "NO ANSWER", just say that you don't know, don't try to make up an answer.
Use three sentences maximum. Keep the answer as concise as possible.
{context}
Question: {question}
Helpful Answer:"""
QA_CHAIN_PROMPT = PromptTemplate.from_template(template)
qa = RetrievalQA.from_chain_type(
    llm=streaming_llm,
    chain_type="stuff",
    retriever=compression_retriever,
    callback_manager=manager,
    chain_type_kwargs={"prompt": QA_CHAIN_PROMPT},
)
ISSUE: how can i get answer only based on vector store? If the context is not available in vector store
https://api.github.com/repos/langchain-ai/langchain/issues/11157/comments
9
2023-09-28T08:16:34Z
2024-02-14T16:10:19Z
https://github.com/langchain-ai/langchain/issues/11157
1,916,961,643
11,157
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I am trying to run a PlanAndExecute agent. The load_agent_executor method call returns: "pydantic.v1.errors.ConfigError: field "retry_chain" not yet prepared so type is still a ForwardRef, you might need to call OutputFixingParser.update_forward_refs()." This is my code: `import os import dotenv from langchain.chat_models import ChatOpenAI from langchain_experimental.plan_and_execute import PlanAndExecute, load_agent_executor, load_chat_planner from langchain.llms import OpenAI from langchain.agents.tools import Tool from langchain.chains.llm_math.base import LLMMathChain from langchain.tools import DuckDuckGoSearchRun from langchain.output_parsers import OutputFixingParser dotenv.load_dotenv() apikey = os.getenv("OPEN_AI_API_KEY", default="") search = DuckDuckGoSearchRun() llm = OpenAI(temperature=0, openai_api_key=apikey) llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True) tools = [ Tool( name="Search", func=search.run, description="useful for when you need to answer questions about current events" ), Tool( name="Calculator", func=llm_math_chain.run, description="useful for when you need to answer questions about math" ), ] model = ChatOpenAI(temperature=0) planner = load_chat_planner(model) executor = load_agent_executor(model, tools, verbose=True) agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)` ### Suggestion: _No response_
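One thing worth trying, suggested by the error message itself, is resolving the forward reference before building the executor. This is a hedged sketch; whether it is sufficient depends on the installed langchain/pydantic combination:

```python
from langchain.output_parsers import OutputFixingParser

# The ConfigError reports that `retry_chain` is still a ForwardRef; rebuilding the
# model's forward references before load_agent_executor may let pydantic resolve it.
OutputFixingParser.update_forward_refs()

executor = load_agent_executor(model, tools, verbose=True)
```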
Issue: load_agent_executor returning ConfigError
https://api.github.com/repos/langchain-ai/langchain/issues/11156/comments
5
2023-09-28T08:11:27Z
2024-01-30T00:40:24Z
https://github.com/langchain-ai/langchain/issues/11156
1,916,953,343
11,156
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. Python 3.10.7 langchain 0.0.301 I started exploring langchain features such as stuff mode, map reduce mode and refine mode. i would like to know more about the map reduce and refine modes further. my understanding is that if the token limit of a document exceeds then the document goes through the spit and passes in the next call. Using the following code ``` import os from langchain.document_loaders import WebBaseLoader from langchain.chat_models import ChatOpenAI from langchain.chains.combine_documents.stuff import StuffDocumentsChain from langchain.prompts import PromptTemplate from langchain.chains import LLMChain, ReduceDocumentsChain, MapReduceDocumentsChain from langchain.text_splitter import CharacterTextSplitter os.environ['OPENAI_API_KEY'] = "xxxxx-xxxxxxxxxxxxxxxxxxxxxxxxx" loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/") docs = loader.load() #Map starts map_template = """The following is a set of documents {docs} Based on this list of docs, please identify the main themes Helpful Answer:""" map_prompt = PromptTemplate.from_template(map_template) llm = ChatOpenAI(temperature=0) map_chain = LLMChain(llm=llm, prompt=map_prompt) #Map ends #Reduce starts reduce_template = """The following is set of summaries: {doc_summaries} Take these and distill it into a final, consolidated summary of the main themes. Helpful Answer:""" reduce_prompt = PromptTemplate.from_template(reduce_template) reduce_chain = LLMChain(llm=llm, prompt=reduce_prompt) # Takes a list of documents, combines them into a single string, and passes this to an LLMChain combine_documents_chain = StuffDocumentsChain( llm_chain=reduce_chain, document_variable_name="doc_summaries" ) reduce_documents_chain = ReduceDocumentsChain( combine_documents_chain=combine_documents_chain, collapse_documents_chain=combine_documents_chain, token_max=4000, ) #Reduce ends #combine Map and Reduce starts map_reduce_chain = MapReduceDocumentsChain( llm_chain=map_chain, reduce_documents_chain=reduce_documents_chain, document_variable_name="docs", return_intermediate_steps=False, ) #combine Map and Reduce ends text_splitter = CharacterTextSplitter.from_tiktoken_encoder( chunk_size=1000, chunk_overlap=0 ) split_docs = text_splitter.split_documents(docs) print(map_reduce_chain.run(split_docs[:8])) #print(split_docs[:8]) ``` as we can see i am cnsidering the first 8 documents of the public dataset, which is having 8,104 Tokens composed of 34312 Characters As per the 4000 tokens limit the dataset shall break in to three parts and process the operation, but i am facing with the issue > Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for default-gpt-3.5-turbo in organization org-xxxxxxxxxxxxxxxxx on requests per min. Limit: 3 / min. Please try again in 20s. Contact us through our help center at help.openai.com if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method.. could you please help me to under stand further how map reduce or refine actual workings. ### Suggestion: _No response_
Issue: While using Map Reduce, the max limit is reported as exceeded even though it should not be.
https://api.github.com/repos/langchain-ai/langchain/issues/11155/comments
14
2023-09-28T06:11:23Z
2024-02-15T16:09:20Z
https://github.com/langchain-ai/langchain/issues/11155
1,916,784,254
11,155
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I am using langchain version 0.0.281 and using Milvus as the vector store. For now, I have a Milvus-lite instance on my device. I tried using paritions with base vectorstore operations like `add_documents` and `similarity_search` and its working fine i.e it inserts and searches from the mentioned partition only. But when I try to use partitions with ConversationalRetrievalChain and even RetrievalQA it doesn't work i.e it retrieves documents through all partitions. When using `from_llm` classmethod I tried giving the partition names like below: `QA = ConversationalRetrievalChain.from_llm( llm=LLM, retriever=VECTOR_DB.vec_db.as_retriever(partition_names=partition_names), memory=MEMORY, return_source_documents=True )` But this doesn't ensure that the vector store will only search the mentioned partitions, rather the vectorestore searches through all the documents present regardless of partition. Is there anyway I can limit the retrieval of documents to only those present in mentioned partition. ### Suggestion: _No response_
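Since it is not clear that `partition_names` passed to `as_retriever` is forwarded to the search call in 0.0.281, one hedged stopgap is to filter at query time with a Milvus boolean expression over a metadata field you control (the field name and value below are hypothetical):

```python
# Assumes each inserted document carries a "partition_tag" metadata field; `expr`
# is forwarded to Milvus similarity search as a boolean filter expression.
retriever = VECTOR_DB.vec_db.as_retriever(
    search_kwargs={"expr": 'partition_tag == "finance"'}  # hypothetical field/value
)

QA = ConversationalRetrievalChain.from_llm(
    llm=LLM,
    retriever=retriever,
    memory=MEMORY,
    return_source_documents=True,
)
```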
Issue: Using Milvus partitions with ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/11153/comments
2
2023-09-28T04:59:34Z
2023-09-28T09:13:26Z
https://github.com/langchain-ai/langchain/issues/11153
1,916,701,031
11,153
[ "hwchase17", "langchain" ]
### System Info when using PlanAndExecutor with guidance prompt of using tools, it does remember previously steps, which will make the pipeline fail as it doesn't know what's previously steps' ouput... <img width="822" alt="image" src="https://github.com/langchain-ai/langchain/assets/35717759/726a42c3-e4c9-4acf-9808-66645eee090c"> ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1: define guidence tools prompt: combo_template = """ First understand the problem and devise a plan to solve the problem. Please output the plan starting with the header 'Plan:' and then followed by a numbered list of steps. Do not use past conversation history when you are planning the steps. Please make the plan the minimum number of steps required to accurately complete the task. These are guidance on when to use a tool to solve a task, follow them strictly: first use "reverse index search" tool to get table name if previously step not find anyting,then use "embedding knn search" tool to get table name if previously step still can't find the answer, say "I can't find the answer for this question." otherwise use "db Querying Tool" tool to search table info using table name which previously step return DO NOT CREATE STEPS THAT ARE NOT NEEDED TO SOLVE A TASK. Once you have answers for the question, stop and provide the final answers. The final answers should be a combination of the answers to all the questions, not just the last one. Please make sure you have a plan to answer all the questions in the input, not just the last one. Please use these to construct an answer to the question , as though you were answering the question directly. Ensure that your answer is accurate and doesn’t contain any information not directly supported by the summary and quotes. If there are no data or information in this document that seem relevant to this question, please just say "I can’t find any relevant quotes". """ 2: load chat planner with llm(llama2 13b): planner = load_chat_planner(sm_llm) system_message_prompt = SystemMessagePromptTemplate.from_template(combo_template) human_message_prompt = planner.llm_chain.prompt.messages[1] planner.llm_chain.prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt]) executor = load_agent_executor(sm_llm, custom_tool_list, verbose=False) agent = PlanAndExecute(planner=planner, executor=executor, verbose=False, max_iterations=1) 3: run agent output = agent({"input":"最近一个月温度合格的派车单数量"}) ### Expected behavior base on each steps to get into next step with previously steps' output
P
https://api.github.com/repos/langchain-ai/langchain/issues/11152/comments
4
2023-09-28T04:26:55Z
2024-02-10T16:16:27Z
https://github.com/langchain-ai/langchain/issues/11152
1,916,677,818
11,152
[ "hwchase17", "langchain" ]
### Issue with current documentation: [This link ](https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/custom_prompt_template.html)references a code sample for a Custom Prompt (awesome stuff!) Pydantic updated their library and the code sample or library dependencies do not work anymore. They have a process to help migrate code Details here: https://docs.pydantic.dev/latest/migration/ Please update the code samples so that they work with the new version of the Pydantic library so that it works for any Langchain users. ### Idea or request for content: Updated code samples only
DOC: Custom Prompt example code does not work. Requires pydantic V2
https://api.github.com/repos/langchain-ai/langchain/issues/11150/comments
1
2023-09-28T03:40:36Z
2024-01-30T00:35:57Z
https://github.com/langchain-ai/langchain/issues/11150
1,916,646,925
11,150
[ "hwchase17", "langchain" ]
Hi team, can I feed different QA prompts into ConversationalRetrievalQA? I would like to set a specific prompt according to which document the knowledge vector store returns. For example, if document A is retrieved I would set prompt-A for it, and prompt-B for document B.
ConversationalRetrievalQA with different prompt
https://api.github.com/repos/langchain-ai/langchain/issues/11149/comments
3
2023-09-28T03:37:06Z
2024-02-10T16:16:32Z
https://github.com/langchain-ai/langchain/issues/11149
1,916,644,707
11,149
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I followed the tutorial at [Code Understanding](https://python.langchain.com/docs/use_cases/code_understanding), loaded a small directory of test files into the db, and asked the question: `Ask a question: what ways would you simplify e2e_btc_mint_2auto.test.js?` The retriever, which was setup as follows, actually gave every file in the directory except `e2e_btc_mint_2auto.test.js` to ChatGPT! Why is ChromaDB so _bad_? ``` # Load loader = GenericLoader.from_filesystem( repo_path, glob="**/*", suffixes=[".js"], parser=LanguageParser(language=Language.JS, parser_threshold=500) # Currently, the supported languages for code parsing are Python and JavaScript. # Source https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.language_parser.LanguageParser.html ) documents = loader.load() # print("# of documents", len(documents)) go_splitter = RecursiveCharacterTextSplitter.from_language(language=Language.JS, chunk_size=2000, chunk_overlap=200) texts = go_splitter.split_documents(documents) # print("# of split documents (texts)", len(texts)) def chatgpt(texts): db = Chroma.from_documents(texts, OpenAIEmbeddings(disallowed_special=())) retriever = db.as_retriever( search_type="mmr", # Also test "similarity" search_kwargs={"k": 8}, ) llm = ChatOpenAI(model_name="gpt-4") return db, retriever, llm def llama(texts) -> LlamaCpp: db = Chroma.from_documents(texts, LlamaCppEmbeddings(model_path=model_path)) retriever = db.as_retriever( search_type="mmr", # Also test "similarity" search_kwargs={"k": 8}, ) llm = LlamaCpp( model_path=model_path, n_gpu_layers=0, n_batch=512, n_ctx=2048, f16_kv=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=False) return db, retriever, llm if args.llama: db, retriever, llm = llama(texts) else: db, retriever, llm = chatgpt(texts) memory = ConversationSummaryMemory(llm=llm,memory_key="chat_history",return_messages=True) qa = ConversationalRetrievalChain.from_llm(llm, retriever=retriever, memory=memory) print("going to ask some questions now") while True: question = input("Ask a question: ") if not question: break documents = retriever.get_relevant_documents(question) pp(documents) result = qa(question) print(f"-> **Question**: {question} \n") print(f"**Answer**: {result['answer']} \n") ``` Output ``` [Document(page_content="...", metadata={'language': 'js', 'source': '/home/shinichi/source/go/src/github.com/dove-foundation/dove/test/src/e2e_btc_mint_1basic.test.js'}), Document(page_content="...", metadata={'language': 'js', 'source': '/home/shinichi/source/go/src/github.com/dove-foundation/dove/test/src/e2e.test.js'}), Document(page_content="...", metadata={'language': 'js', 'source': '/home/shinichi/source/go/src/github.com/dove-foundation/dove/test/src/e2e_btc_transfer.test.js'}), Document(page_content="...", metadata={'language': 'js', 'source': '/home/shinichi/source/go/src/github.com/dove-foundation/dove/test/src/archive/e2e_followup_auto.test.js'}), Document(page_content="...", metadata={'language': 'js', 'source': '/home/shinichi/source/go/src/github.com/dove-foundation/dove/test/src/e2e_slashing.test.js'}), Document(page_content="...", metadata={'language': 'js', 'source': '/home/shinichi/source/go/src/github.com/dove-foundation/dove/test/src/e2e_evm.test.js'}), Document(page_content="...", metadata={'language': 'js', 'source': '/home/shinichi/source/go/src/github.com/dove-foundation/dove/test/src/e2e_recharge.test.js'}), Document(page_content="...", 
metadata={'language': 'js', 'source': '/home/shinichi/source/go/src/github.com/dove-foundation/dove/test/src/e2e_reshare.test.js'})] ``` ### Suggestion: The very least ChromaDB could do is search the metadata.source for string matches.
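Independently of MMR tuning, Chroma does support metadata filtering at query time, so a hedged stopgap is to constrain retrieval to the file the question names (the path below is a placeholder):

```python
# `filter` is passed through to Chroma's similarity / MMR search as a metadata filter.
retriever = db.as_retriever(
    search_type="mmr",
    search_kwargs={
        "k": 8,
        "filter": {"source": "/path/to/e2e_btc_mint_2auto.test.js"},  # placeholder path
    },
)
docs_for_question = retriever.get_relevant_documents(
    "what ways would you simplify e2e_btc_mint_2auto.test.js?"
)
```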
ChromaDB retrieves irrelevant context
https://api.github.com/repos/langchain-ai/langchain/issues/11144/comments
2
2023-09-28T00:45:52Z
2023-12-31T14:21:31Z
https://github.com/langchain-ai/langchain/issues/11144
1,916,540,669
11,144
[ "hwchase17", "langchain" ]
### System Info Langchain version = 0.0.231 python = 3.10 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [X] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.llms import Bedrock model_kwargs={"max_tokens_to_sample":8000} resp = Bedrock(credentials_profile_name="default", model_id="anthropic.claude-v2",model_kwargs={"max_tokens_to_sample":8000}) ### Expected behavior this same piece of code used to work just fine in the past weeks , now it's causing a prompt error i'm sure that bedrock changed something on their end
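The ValidationException is raised by the model side: Anthropic models on Bedrock now require the prompt to be wrapped in Human/Assistant turns. A hedged stopgap, until the wrapper formats this for you, is to add the turns to the raw prompt yourself:

```python
from langchain.llms import Bedrock

llm = Bedrock(
    credentials_profile_name="default",
    model_id="anthropic.claude-v2",
    model_kwargs={"max_tokens_to_sample": 8000},
)

question = "Summarize the key points of the attached report."  # example input
# Claude on Bedrock expects "\n\nHuman: ...\n\nAssistant:" turns in the raw prompt.
print(llm(f"\n\nHuman: {question}\n\nAssistant:"))
```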
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Invalid prompt: prompt must start with " Human:" turn, prompt must end with " Assistant:" turn
https://api.github.com/repos/langchain-ai/langchain/issues/11130/comments
12
2023-09-27T19:20:13Z
2023-10-05T07:07:46Z
https://github.com/langchain-ai/langchain/issues/11130
1,916,177,170
11,130
[ "hwchase17", "langchain" ]
### Feature request The Google Cloud community would love to enjoy the full capabilities of Vertex AI models on top of Langchain. To this end, we would be able to access the metadata fields ([reference](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text#response_body)) as outputs of the Vertex AI LLM model in Langchain, indicating crucial information such as the number of input and output tokens, and number of billed characters. ### Motivation This feature is already available on the OpenAI model as part of the [LLM output field](https://api.python.langchain.com/en/latest/schema/langchain.schema.output.LLMResult.html#langchain.schema.output.LLMResult.llm_output). For the Vertex AI model, it simply outputs a None value. This feature would greatly benefit the ecosystem as we would be able to build more robust and production-ready applications using the Vertex AI LLM as a backend. ### Your contribution I am currently looking into implementing it myself. PR to be opened soon.
Feature: LLM Vertex AI metadata support
https://api.github.com/repos/langchain-ai/langchain/issues/11119/comments
2
2023-09-27T15:15:12Z
2024-01-30T00:36:04Z
https://github.com/langchain-ai/langchain/issues/11119
1,915,813,094
11,119
[ "hwchase17", "langchain" ]
### System Info langchain 0.0.301 Python 3.10.12 Ubuntu 22.04 LTS ### Who can help? @hwchase17 (based on blame of langchain/agents/agent.py", line 622) ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` import yaml import os from langchain.chat_models import ChatOpenAI from langchain.agents import initialize_agent # https://python.langchain.com/docs/modules/agents/how_to/handle_parsing_errors def _handle_error(error) -> str: return str(error) # Do whatever to load your api key with open('secrets.yml', 'r') as f: config = yaml.safe_load(f) os.environ["OPENAI_API_KEY"] = config['open_ai_key'] llm = ChatOpenAI(model_name='gpt-3.5-turbo') agentExcecutor = initialize_agent( [], llm, agent="conversational-react-description", handle_parsing_errors=_handle_error, ) ``` Results in: ``` Traceback (most recent call last): File ".../test_error_handling.py", line 17, in <module> agentExcecutor = initialize_agent( File ".../langchain/agents/initialize.py", line 64, in initialize_agent return AgentExecutor.from_agent_and_tools( File ".../langchain/agents/agent.py", line 622, in from_agent_and_tools return cls( File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for AgentExecutor handle_parsing_errors value could not be parsed to a boolean (type=type_error.bool) ``` ### Expected behavior It should initialize the agent, and trigger the custom _handle_error as described in: https://python.langchain.com/docs/modules/agents/how_to/handle_parsing_errors#custom-error-function I am (like many) experiencing frequent ['Could not parse LLM output'](https://github.com/langchain-ai/langchain/issues/1358) #1358. My current project is targeting gpt-3.5, and doing system-message/prompt tweaking - so parse failures are to be expected. However, I want to handle the error gracefully - retrying, summarizing failed attempts, apologizing and moving on, etc - depending on the specifics of my project. The handle_parsing_errors seems like the perfect place to implement a workaround. However, passing a callable func seems to fail validation, and thus does not work. Note: * using try/except to handle parse errors is not ideal for us, as, by default, it omits those messages from the memory * using handle_parsing_errors=true is what I am using now, but I'd like to customize the failure handling As always, apologies if I misunderstand the documentation, or if this is already reported. I did not see anything in the tracker - #10710 is close, but regards tools rather than agents afaict. Thank you for any help!
initialize_agent does not accept callable 'handle_parsing_errors', only bool ('value could not be parsed to a boolean')
https://api.github.com/repos/langchain-ai/langchain/issues/11118/comments
2
2023-09-27T15:11:23Z
2023-09-28T14:14:52Z
https://github.com/langchain-ai/langchain/issues/11118
1,915,806,213
11,118
[ "hwchase17", "langchain" ]
I am using `"langchain": "^0.0.89"` to use the _**MultiRetrievalQAChain**_. As i want to explore how can i use different namespaces in a single chain and the issue I am facing is that whenever i tried to pass a QA prompt to the _**MultiRetrievalQAChain**_ the model doesn't seems to be using the prompt for generating the response. I have tried multiple options but this does not seems to be working. Here is the code snippets: - This is the first option I tried ``` const multiRetrievalQAChain = MultiRetrievalQAChain.fromLLMAndRetrievers( llm, { retrieverNames, retrieverDescriptions, retrievers, retrieverPrompts: [ QA_PROMPT, QA_PROMPT, QA_PROMPT, ], retrievalQAChainOpts: { returnSourceDocuments: true, }, } ); ``` - Here is the second option I explore: ``` const multiRetrievalQAChain = MultiRetrievalQAChain.fromLLMAndRetrievers( llm, { retrieverNames, retrieverDescriptions, retrievers, retrievalQAChainOpts: { returnSourceDocuments: true, prompt:QA_PROMPT }, } ); ``` ### Suggestion: Please give some code or **_langchain_** references for suggestion. Helping in advance
Issue: Integrating QA prompt with MultiRetrievalQAChain
https://api.github.com/repos/langchain-ai/langchain/issues/11114/comments
4
2023-09-27T13:50:19Z
2024-01-30T00:36:06Z
https://github.com/langchain-ai/langchain/issues/11114
1,915,638,480
11,114
[ "hwchase17", "langchain" ]
### Feature request
`QianfanChatEndpoint` or `ErnieBotChat` should support function calling.

### Motivation
Neither `QianfanChatEndpoint` nor `ErnieBotChat` supports the ERNIE-Bot function-calling ability right now. We need this to develop agents driven by function_call, which AFAIK is currently only supported for OpenAI.

### Your contribution
Ready to submit a PR.
Feature: `QianfanChatEndpoint` Support function_call
https://api.github.com/repos/langchain-ai/langchain/issues/11105/comments
3
2023-09-27T10:36:19Z
2024-01-30T00:36:06Z
https://github.com/langchain-ai/langchain/issues/11105
1,915,226,169
11,105
[ "hwchase17", "langchain" ]
### Issue with current documentation: In the tutorial of using agent, link here: https://python.langchain.com/docs/modules/agents/ . The given codes are `from langchain.tools.render import format_tool_to_openai_function`, which leads to the error `ModuleNotFoundError: No module named 'langchain.tools.render'` ### Idea or request for content: I suppose the code should be changed to `from langchain.tools import format_tool_to_openai_function`
DOC: No module named 'langchain.tools.render'
https://api.github.com/repos/langchain-ai/langchain/issues/11103/comments
8
2023-09-27T10:01:17Z
2024-04-27T13:25:44Z
https://github.com/langchain-ai/langchain/issues/11103
1,915,148,870
11,103
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.
version: 0.0.39

issue:
```
Traceback (most recent call last):
  File "xxxxxxxx/langchain/main.py", line 8, in <module>
    prompt = PromptTemplate.from_template(template)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: type object 'PromptTemplate' has no attribute 'from_template'
```
Is that a breaking change?

### Suggestion:
_No response_
Issue: break change?
https://api.github.com/repos/langchain-ai/langchain/issues/11102/comments
4
2023-09-27T09:20:45Z
2023-09-28T08:33:17Z
https://github.com/langchain-ai/langchain/issues/11102
1,915,071,401
11,102
[ "hwchase17", "langchain" ]
### System Info **System Info** Using Lanchain version 0.0.301 Python 3.11.3 MS Windows 10 Enterprise ### Who can help? **Who can help?** @hwchase17 @eyurtsev @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [X] Vector Stores / Retrievers - [X] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction I used the same code from https://python.langchain.com/docs/modules/data_connection/retrievers/multi_vector The only difference is that I used the below code for the loaders ![image](https://github.com/langchain-ai/langchain/assets/67390957/5318213a-e07d-417a-9934-691f0325d12a) And the execution code is: ![image](https://github.com/langchain-ai/langchain/assets/67390957/7782b7fa-0e9f-4b99-a841-63a21b374d88) The rest of the code is from the MultiVector Retriever. I used two PDF documents which are of 16 and 18 pages respectively. ### Expected behavior The expectation is when I ask a question that requires the MultiVector to retrieve vectors from both documents and collate an answer together - which it does. But sometimes, for some questions I get the following error ![image](https://github.com/langchain-ai/langchain/assets/67390957/de6bd479-d6f2-44ec-93a2-076cdb92e122) I have checked previously - 'doc_id' is present from the steps followed in the documentation. I have used the exact same code on the documentation with the same parameters for chunking- yet I get this error. I can circumvent the error by rephrasing the question - which is most confusing. How can a code work sometimes and sometimes don't - with the only change in the query. The length of the query doesn't seem to matter from my analysis. Please offer me a workaround or let me know if this a bug from the langchain end.
MultiVector Retriever: KeyError: 'doc_id'
https://api.github.com/repos/langchain-ai/langchain/issues/11099/comments
4
2023-09-27T05:28:48Z
2023-10-04T06:15:03Z
https://github.com/langchain-ai/langchain/issues/11099
1,914,726,710
11,099
[ "hwchase17", "langchain" ]
### Issue with current documentation: Under the https://python.langchain.com/docs/integrations/document_loaders/microsoft_sharepoint page - in step 10 the URL is wrong to obtain the Edm.guid property. The correct URL is https://<tenant>.sharepoint.com/sites/<site-name>/_api/web/id ### Idea or request for content: Please update the current documentation after checking the proposed change.
DOC: Connecting Langchain to Microsoft SharePoint
https://api.github.com/repos/langchain-ai/langchain/issues/11097/comments
1
2023-09-27T04:59:40Z
2024-01-30T00:42:02Z
https://github.com/langchain-ai/langchain/issues/11097
1,914,701,580
11,097
[ "hwchase17", "langchain" ]
### System Info CHAT_PROMPT = ChatPromptTemplate( messages=[ SystemMessagePromptTemplate.from_template(general_system_template), # The `variable_name` here is what must align with memory MessagesPlaceholder(variable_name="chat_history"), HumanMessagePromptTemplate.from_template("{question}"), ] ) memory = ConversationBufferMemory( llm=llm, max_token_limit=MAX_TOKENS, memory_key="chat_history", # used for the qa which has a "chat_history" key # prompt=CHAT_PROMPT, # To give system instructions thorugh the custom prompt return_messages=True # otherwise just a str and won't work for chat_history ) # memory.chat_memory.add_message({"role": "system", "content": "You are a surfer. end all setences with 'dude'"}) memory.chat_memory.add_message(SystemMessage(content="You are a surfer. end all setences with 'dude'")) memory.chat_memory.add_user_message("I think you're great") memory.chat_memory.add_ai_message("Thank you as I am a surfer") qa = ConversationalRetrievalChain.from_llm( llm=llm, retriever=retriever, memory=memory, # combine_docs_chain_kwargs={ # "prompt": CHAT_PROMPT # } ) res = qa({"question": msg, "chat_history": memory.chat_memory}) print("QA", res) await ctx.send(res['answer']) ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction CHAT_PROMPT = ChatPromptTemplate( messages=[ SystemMessagePromptTemplate.from_template(general_system_template), # The `variable_name` here is what must align with memory MessagesPlaceholder(variable_name="chat_history"), HumanMessagePromptTemplate.from_template("{question}"), ] ) memory = ConversationBufferMemory( llm=llm, max_token_limit=MAX_TOKENS, memory_key="chat_history", # used for the qa which has a "chat_history" key # prompt=CHAT_PROMPT, # To give system instructions thorugh the custom prompt return_messages=True # otherwise just a str and won't work for chat_history ) # memory.chat_memory.add_message({"role": "system", "content": "You are a surfer. end all setences with 'dude'"}) memory.chat_memory.add_message(SystemMessage(content="You are a surfer. end all setences with 'dude'")) memory.chat_memory.add_user_message("I think you're great") memory.chat_memory.add_ai_message("Thank you as I am a surfer") qa = ConversationalRetrievalChain.from_llm( llm=llm, retriever=retriever, memory=memory, # combine_docs_chain_kwargs={ # "prompt": CHAT_PROMPT # } ) res = qa({"question": msg, "chat_history": memory.chat_memory}) print("QA", res) Running this and interacting with this will lead to no acknowledgement of the system message by the LLM ### Expected behavior The system message is listened to
System message not working with ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/11096/comments
2
2023-09-27T04:57:24Z
2024-01-30T00:45:28Z
https://github.com/langchain-ai/langchain/issues/11096
1,914,699,772
11,096
[ "hwchase17", "langchain" ]
### System Info Name: langchain Version: 0.0.299 ### Who can help? @eyurtsev @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Just follow https://python.langchain.com/docs/integrations/document_loaders/web_base ``` from langchain.document_loaders import WebBaseLoader loader = WebBaseLoader("https://www.espn.com/") data = loader.load() ``` ### Expected behavior The standard WebLoader is broken when we pass webpath as described in the [docs](https://python.langchain.com/docs/integrations/document_loaders/web_base) as `loader = WebBaseLoader("https://www.espn.com/")` However it workes if we pass as `loader = WebBaseLoader(["https://www.espn.com/"])` The reason for that is [this commit](https://github.com/langchain-ai/langchain/commit/097ecef06bbc51ae21e134af4dfbb4ebe4902f3a) Also because of this breaking change the other Custom Web Loaders are broken as well IMSDbLoader, AZLyricsLoader, and CollegeConfidentialLoader
web_base loader is broken
https://api.github.com/repos/langchain-ai/langchain/issues/11095/comments
6
2023-09-27T04:57:02Z
2024-01-30T00:46:27Z
https://github.com/langchain-ai/langchain/issues/11095
1,914,699,506
11,095
[ "hwchase17", "langchain" ]
### Feature request I propose extending callback functionality to embedding models in general so that the same OpenAICallbackHandler can be used to keep track of embedding model costs. ### Motivation There is an interface asymmetry in Chat and embedding models, with embedding models missing significant features. ### Your contribution Can contribute if/when have time.
Embedding calls should support callbacks
https://api.github.com/repos/langchain-ai/langchain/issues/11093/comments
1
2023-09-27T01:30:43Z
2024-01-30T00:42:02Z
https://github.com/langchain-ai/langchain/issues/11093
1,914,527,874
11,093
[ "hwchase17", "langchain" ]
### System Info Langchain: 0.0.302 ### Who can help? @hwchase17 @ag ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I am using the opensource TheBloke/Llama-2-7B-GPTQ model, Below is the code for your reference. Code: model_id = "TheBloke/Llama-2-7B-GPTQ" tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True) model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto",trust_remote_code=True, revision="main") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer)#, max_new_tokens=10 hf = HuggingFacePipeline(pipeline=pipe) agent = create_pandas_dataframe_agent( hf, df, #[df, df1] for multi dataframe verbose=True, ) agent.run('Can you give me the length of dataframe') Error: --------------------------------------------------------------------------- OutputParserException Traceback (most recent call last) Cell In[17], line 1 ----> 1 agent.run('Can you give me the length of dataframe') File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:487, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs) 485 if len(args) != 1: 486 raise ValueError("`run` supports only one positional argument.") --> 487 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[ 488 _output_key 489 ] 491 if kwargs and not args: 492 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[ 493 _output_key 494 ] File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:292, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info) 290 except BaseException as e: 291 run_manager.on_chain_error(e) --> 292 raise e 293 run_manager.on_chain_end(outputs) 294 final_outputs: Dict[str, Any] = self.prep_outputs( 295 inputs, outputs, return_only_outputs 296 ) File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:286, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info) 279 run_manager = callback_manager.on_chain_start( 280 dumpd(self), 281 inputs, 282 name=run_name, 283 ) 284 try: 285 outputs = ( --> 286 self._call(inputs, run_manager=run_manager) 287 if new_arg_supported 288 else self._call(inputs) 289 ) 290 except BaseException as e: 291 run_manager.on_chain_error(e) File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:1127, in AgentExecutor._call(self, inputs, run_manager) 1125 # We now enter the agent loop (until it returns something). 
1126 while self._should_continue(iterations, time_elapsed): -> 1127 next_step_output = self._take_next_step( 1128 name_to_tool_map, 1129 color_mapping, 1130 inputs, 1131 intermediate_steps, 1132 run_manager=run_manager, 1133 ) 1134 if isinstance(next_step_output, AgentFinish): 1135 return self._return( 1136 next_step_output, intermediate_steps, run_manager=run_manager 1137 ) File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:935, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager) 933 raise_error = False 934 if raise_error: --> 935 raise e 936 text = str(e) 937 if isinstance(self.handle_parsing_errors, bool): File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:924, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager) 921 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps) 923 # Call the LLM to see what to do. --> 924 output = self.agent.plan( 925 intermediate_steps, 926 callbacks=run_manager.get_child() if run_manager else None, 927 **inputs, 928 ) 929 except OutputParserException as e: 930 if isinstance(self.handle_parsing_errors, bool): File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:537, in Agent.plan(self, intermediate_steps, callbacks, **kwargs) 535 full_inputs = self.get_full_inputs(intermediate_steps, **kwargs) 536 full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs) --> 537 return self.output_parser.parse(full_output) File /opt/conda/lib/python3.10/site-packages/langchain/agents/mrkl/output_parser.py:52, in MRKLOutputParser.parse(self, text) 47 return AgentFinish( 48 {"output": text.split(FINAL_ANSWER_ACTION)[-1].strip()}, text 49 ) 51 if not re.search(r"Action\s*\d*\s*:[\s]*(.*?)", text, re.DOTALL): ---> 52 raise OutputParserException( 53 f"Could not parse LLM output: `{text}`", 54 observation=MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE, 55 llm_output=text, 56 send_to_llm=True, 57 ) 58 elif not re.search( 59 r"[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)", text, re.DOTALL 60 ): 61 raise OutputParserException( 62 f"Could not parse LLM output: `{text}`", 63 observation=MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE, 64 llm_output=text, 65 send_to_llm=True, 66 ) OutputParserException: Could not parse LLM output: ` Answer: The length of the dataframe is 255457. Question: What is the name of the dataframe? Answer: The name of the dataframe is df. Question: What is the type of the dataframe? Answer: The type of the dataframe is pandas.DataFrame. Question: What is the shape of the dataframe? Answer: The shape of the dataframe is (255457, 16). Question: What is the size of the dataframe? Answer: The size of the dataframe is (255457, 16). Question: What is the name of the dataframe column? Answer: The name of the dataframe column is id. Question: What is the type of the dataframe column? Answer: The type of the dataframe column is int64. Question: What is the shape of the dataframe column? Answer: The shape of the dataframe column is (255457,). Question: What is the size of the dataframe column? Answer: The size of the dataframe column is (255457,). Question: What is the name of the dataframe column? Answer: The name of the dataframe column is condition. Question: What is the type of the dataframe column? Answer: The type of the dataframe column is bool. Question: What is the shape of the dataframe column? Answer: The shape of the dataframe column is (255457,). 
Question` Note: It worked using openai model, but its not working with any open source model. I have tried multiple models. ### Expected behavior It should work even with open source models.
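Open-source instruct models often keep generating past the expected `Action:` block, which is what trips the MRKL parser here. A hedged mitigation (no guarantee a 7B GPTQ model will follow the ReAct format) is to cap generation length and enable parsing-error handling, assuming `agent_executor_kwargs` is supported by the installed version:

```python
# max_new_tokens limits the runaway question/answer continuation seen in the trace;
# handle_parsing_errors lets the agent retry instead of raising OutputParserException.
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=256)
hf = HuggingFacePipeline(pipeline=pipe)

agent = create_pandas_dataframe_agent(
    hf,
    df,
    verbose=True,
    agent_executor_kwargs={"handle_parsing_errors": True},  # assumed to be supported
)
agent.run("Can you give me the length of the dataframe?")
```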
create_pandas_dataframe_agent - OutputParserException: Could not parse LLM output: `
https://api.github.com/repos/langchain-ai/langchain/issues/11088/comments
3
2023-09-26T23:20:15Z
2024-04-11T16:16:58Z
https://github.com/langchain-ai/langchain/issues/11088
1,914,429,441
11,088
[ "hwchase17", "langchain" ]
### Issue with current documentation: The broken link in question: https://python.langchain.com/docs/modules/memory/how_to/adding_memory.html The offending hyperlink: ![image](https://github.com/langchain-ai/langchain/assets/16566003/4151cf9a-174d-48f4-94a0-15bb051cf94d) The page where the above is found: https://python.langchain.com/docs/modules/memory/agent_with_memory ### Idea or request for content: _No response_
DOC: Python Documentation has a broken link in the "Memory in Agent" page
https://api.github.com/repos/langchain-ai/langchain/issues/11086/comments
1
2023-09-26T21:23:41Z
2023-09-26T21:24:26Z
https://github.com/langchain-ai/langchain/issues/11086
1,914,326,469
11,086
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I noticed that the python version of LangChain in GooglePalm lacks the ability to set safety settings, unlike the JS version. ### Suggestion: -
Issue: GooglePalm safety settings on Python
https://api.github.com/repos/langchain-ai/langchain/issues/11085/comments
1
2023-09-26T21:00:55Z
2024-03-16T16:04:51Z
https://github.com/langchain-ai/langchain/issues/11085
1,914,299,056
11,085
[ "hwchase17", "langchain" ]
### System Info ![image](https://github.com/langchain-ai/langchain/assets/9436757/3e5a6899-c01f-45ef-9458-942d69f3fe38) My Langchain is upto date and credentials are also perfectly fine. I am importing SQLDatabaseSequentialChain from langchain_experimental.sql package Still I am getting the following error. Any support would be definitely helpful. ### Who can help? @eyurtsev @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [X] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction !pip install langchain==0.0.302 !pip install --user --upgrade langchain.experimental ### Expected behavior This statement will not generate any output but it will execute without any error. Now its giving me error
Validation errors for SQLDatabaseSequentialChain
https://api.github.com/repos/langchain-ai/langchain/issues/11084/comments
3
2023-09-26T20:25:10Z
2024-01-30T00:43:45Z
https://github.com/langchain-ai/langchain/issues/11084
1,914,246,932
11,084
[ "hwchase17", "langchain" ]
null
Pyannote 3.0
https://api.github.com/repos/langchain-ai/langchain/issues/11083/comments
0
2023-09-26T19:56:03Z
2023-09-26T19:57:07Z
https://github.com/langchain-ai/langchain/issues/11083
1,914,203,378
11,083
[ "hwchase17", "langchain" ]
### System Info - Langchain v0.0.301 - Python 3.9.6 - Mac M2 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Hi, I've created a SequentialChain to propose a gift based on a budget and age. The different chains are defined as follows: ``` from langchain.agents import AgentType, initialize_agent, load_tools from langchain.prompts import PromptTemplate from langchain.tools import tool from langchain.chains import LLMChain langchain.debug = True llm = VertexAI( model_name="text-bison@001", max_output_tokens=256, temperature=0.2, top_p=0.8, top_k=40, ) # 1. Chain to extract the budget template = """You have to extract only the budget for a gift for the following sentence: {input} For example: 10€ Budget: """ prompt_budget_template = PromptTemplate( input_variables=["input"], output_key="budget", template=template) chain_budget = LLMChain(llm=llm, prompt=prompt_budget_template) # 2. Chain to extract the age template = """You have to extract only the part of the text related to the age: {input} Sentence: """ prompt_age_template = PromptTemplate( input_variables=["input"], output_key="age", template=template) chain_age = LLMChain(llm=llm, prompt=prompt_age_template) # 3. Chain to calculate the age tools = load_tools(["llm-math"], llm=llm) chain_calc_age = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, output_key="calc_age", verbose=True) # 4. The chain for the gift recommender template = """You are a gift recommender. Given a person's age,\n it is your job to suggest an appropriate gift for them. If age is under 10,\n the gift should cost no more than {budget} otherwise it should cost at least 10 times {budget}. Person Age: {calc_age} Suggest gift and approximated cost:""" prompt_template = PromptTemplate(input_variables=["calc_age", "budget"], output_key="gift", template=template) chain_gift = LLMChain(llm=llm, prompt=prompt_template) ``` Finally, the SequentialChain is defined: ``` from langchain.chains import SequentialChain overall_chain = SequentialChain( input_variables=["input"], output_variables=["budget", "age", "calc_age", "gift"], chains=[chain_budget, chain_age, chain_calc_age, chain_gift], verbose=True) ``` The idea is to use the `overall_chain` in the following way: ``` overall_chain.run("I have 100€ for the gift, and my age is the half of my father's age. 
My father is 40 years old.") ``` But when the SequentialChain is declared the following error is raised: ``` --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) Cell In[51], line 4 1 from langchain.chains import SequentialChain 2 from langchain.memory import SimpleMemory ----> 4 overall_chain = SequentialChain( 5 input_variables=["input"], 6 output_variables=["budget", "age", "calc_age", "gift"], 7 chains=[chain_budget, chain_age, chain_calc_age, chain_gift], 8 verbose=True) File [~/Library/Python/3.9/lib/python/site-packages/langchain/load/serializable.py:75](https://file+.vscode-resource.vscode-cdn.net/Users/edumac/dev/workspaces/radarview-ia/notebooks/~/Library/Python/3.9/lib/python/site-packages/langchain/load/serializable.py:75), in Serializable.__init__(self, **kwargs) 74 def __init__(self, **kwargs: Any) -> None: ---> 75 super().__init__(**kwargs) 76 self._lc_kwargs = kwargs File [~/Library/Python/3.9/lib/python/site-packages/pydantic/v1/main.py:341](https://file+.vscode-resource.vscode-cdn.net/Users/edumac/dev/workspaces/radarview-ia/notebooks/~/Library/Python/3.9/lib/python/site-packages/pydantic/v1/main.py:341), in BaseModel.__init__(__pydantic_self__, **data) 339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data) 340 if validation_error: --> 341 raise validation_error 342 try: 343 object_setattr(__pydantic_self__, '__dict__', values) ValidationError: 1 validation error for SequentialChain __root__ Chain returned keys that already exist: {'text'} (type=value_error) ``` Any thought about what is happening or what is wrong in my code? Thanks in advance. ### Expected behavior I expect the SequentialChain created without errors.
SequentialChain returns "Chain returned keys that already exist: {'text'} (type=value_error)"
https://api.github.com/repos/langchain-ai/langchain/issues/11081/comments
6
2023-09-26T19:37:57Z
2024-01-30T00:40:25Z
https://github.com/langchain-ai/langchain/issues/11081
1,914,178,347
11,081
[ "hwchase17", "langchain" ]
### System info: langchain 0.0.289, python 3.10 ### Report: I'm running experiments with the SQL agent from LangChain. I created three tables in the database (employees, customers and suppliers). First, I ran it with the GPT-3.5 model:
```python
llm = ChatOpenAI(temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    top_k=10
)
response = agent_executor({"input": "Show me all employees"})
```
![image](https://github.com/langchain-ai/langchain/assets/19593487/e0c6426d-e57d-41ed-95ff-cba702b6309b) But when I switched to the GPT-4 model:
```python
llm = ChatOpenAI(temperature=0, model_name='gpt-4')
```
it gets lost, insists on adding quotation marks, and takes a long time to reason and respond. ![image](https://github.com/langchain-ai/langchain/assets/19593487/620b409b-6d57-46c4-947b-f15cc667b1bf) Tests were carried out on two databases. It took more than a minute to respond, it kept trying to add quotation marks, and sometimes it made so many mistakes that in the end it produced: "Agent stopped due to iteration limit or time limit." I want to know whether I'm doing something wrong and whether anyone else has had this problem, because with GPT-3.5 the same test works perfectly well.
Issue: sqlagent doesn't work when I use GPT-4
https://api.github.com/repos/langchain-ai/langchain/issues/11076/comments
1
2023-09-26T16:33:51Z
2023-09-26T16:46:27Z
https://github.com/langchain-ai/langchain/issues/11076
1,913,896,616
11,076
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. How can I use the metadata information from the retriever? I noticed that only page_content is being used whenever I retrieve information from a chain. Use case: I want to display the page number alongside the response; these numbers are found in the metadata of the Document. ### Suggestion: _No response_
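One way to surface that metadata, sketched under the assumption that a RetrievalQA-style chain and an existing retriever are being used: ask the chain to return the retrieved Documents alongside the answer and read their metadata afterwards.

```python
from langchain.chains import RetrievalQA

# "llm" and "retriever" are assumed to exist already.
qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=retriever,
    return_source_documents=True,  # keep the retrieved Documents in the output
)

result = qa({"query": "your question"})
answer = result["result"]
# Each source document keeps its metadata, e.g. a "page" entry from PyPDFLoader.
pages = [doc.metadata.get("page") for doc in result["source_documents"]]
```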
Retriever and Chain to use metadata
https://api.github.com/repos/langchain-ai/langchain/issues/11071/comments
3
2023-09-26T15:05:42Z
2024-01-30T00:36:08Z
https://github.com/langchain-ai/langchain/issues/11071
1,913,727,561
11,071
[ "hwchase17", "langchain" ]
### System Info Python v. 3.10.12 Langchain v. 0.0.301 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I'm getting the following `generation_info`: `{'finish_reason': 'content_filter', 'logprobs': None}` when I call the `generate` method on an `LLMChain` even though the prompt does not contain any sensitive content. The LLM is the `langchain.llms.openai.OpenAI` one: ```python llm = OpenAI(openai_api_key=OPENAI_API_KEY) ``` And the prompt is so defined: ```python parser = PydanticOutputParser(pydantic_object=MyStructure) prompt = PromptTemplate( template="--telling the model what to do--\n{format_instructions}\nVar1:{var1}\nVar2: {var2}\n", input_variables=["var1","var2"], partial_variables={"format_instructions": parser.get_format_instructions()}, output_parser = parser ) ``` Finally: ```python chain = LLMChain(llm=llm, prompt=prompt) llm_result = chain.generate(data) ``` I've tried to use the `openai.Moderation` endpoint to check the prompt and it turns out that the content is not sensitive at all, so I'm wondering what kind of tool does `chain.generate()` use to filter content? Is there a way to shut down the content filtering while using the `chain.generate()` method? ### Expected behavior I was expecting a generation with no `content_filter` as `finish_reason` from the `chain.generate()` method.
Getting "content_filter" as finish_reason from LLMChain.generate without prompting any sensitive content
https://api.github.com/repos/langchain-ai/langchain/issues/11069/comments
8
2023-09-26T13:48:55Z
2024-05-10T16:06:55Z
https://github.com/langchain-ai/langchain/issues/11069
1,913,559,882
11,069
[ "hwchase17", "langchain" ]
### System Info Hi - I've been trying to get the LangChain SQL agent to describe a nested table within BigQuery, but I'm getting the following error: parameters: {'param_1': 3}] (Background on this error at: https://sqlalche.me/e/14/4xp6). Does this mean that it can't work with nested tables and that I instead need to turn the table into a flat structure? ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction
```python
service_account_file = ""
project = ""
dataset = ""
tables = []
sqlalchemy_url = f'bigquery://{project}/{dataset}?credentials_path={service_account_file}'
db = SQLDatabase.from_uri(sqlalchemy_url, include_tables=tables,)
llm = VertexAI(model_name="text-bison-32k")
chat = ChatVertexAI()
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(
    llm=chat,
    toolkit=toolkit,
    verbose=True,
    top_k=1000,
    max_execution_time=100,
    early_stopping_method="generate",
    agent='conversational-react-description',
    memory=memory,
)
agent_executor.run("Can you describe the nested data table?")
```
### Expected behavior I expected it to describe the table and its structure.
LangChain SQL agent unable to describe a nested bigQuery table
https://api.github.com/repos/langchain-ai/langchain/issues/11068/comments
15
2023-09-26T13:29:55Z
2024-03-28T16:06:28Z
https://github.com/langchain-ai/langchain/issues/11068
1,913,520,207
11,068
[ "hwchase17", "langchain" ]
### System Info LangChain version: **0.0.301** **Python 3.9.6** System Version: **macOS 13.5.2 (22G91)** Kernel Version: **Darwin 22.6.0** ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I encountered a ValueError when using a custom Redis retriever (RedisVectorStoreRetrieverWithScores) with the SemanticDocumentSearch component in langchain. Below is the code snippet that reproduces the error: ```python from langchain.vectorstores.redis.base import Redis, RedisVectorStoreRetriever from langchain.chains import SemanticDocumentSearch from langchain.schema import Document from typing import List class RedisVectorStoreRetrieverWithScores(RedisVectorStoreRetriever): def get_relevant_documents(self, query: str) -> List[Document]: docs_and_scores = self.vectorstore.similarity_search_with_score(query, k=self.search_kwargs["k"]) for doc, score in docs_and_scores: doc.metadata = {**doc.metadata, **{"score": score}} return [doc for (doc, _) in docs_and_scores] # Assuming initialization of redis_vector_store with necessary parameters redis_retriever = RedisVectorStoreRetrieverWithScores(redis_vector_store) semantic_search = SemanticDocumentSearch(retriever=redis_retriever) # Error occurs when calling the following method answer = semantic_search.get_answer("How are you?") ``` Error: ```python Traceback (most recent call last): ... ValueError: Document prompt requires documents to have metadata variables: ['source']. Received document with missing metadata: ['source']. ``` Steps to reproduce the behavior: 1. Implement a custom Redis retriever class RedisVectorStoreRetrieverWithScores inheriting from RedisVectorStoreRetriever. 2. Instantiate SemanticDocumentSearch with the custom Redis retriever. 3. Call the get_answer method of the SemanticDocumentSearch instance. ### Expected behavior I expected the get_answer method to execute without errors and return a relevant answer based on the input query. However, a ValueError is thrown indicating a missing source metadata variable in the Document objects. I am kindly seeking guidance on how to resolve this issue, and I am very thankful for any assistance or pointers provided. Thank you in advance for your time and help! 😊
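A note for readers hitting the same ValueError: the chain's default document prompt formats a `{source}` variable, so every returned Document needs a `source` entry in its metadata (or the chain needs a custom `document_prompt`). A sketch of the first option, reusing the retriever above and assuming a fallback value is acceptable:

```python
def get_relevant_documents(self, query: str) -> List[Document]:
    docs_and_scores = self.vectorstore.similarity_search_with_score(
        query, k=self.search_kwargs["k"]
    )
    for doc, score in docs_and_scores:
        # Merge the score in and make sure a "source" key is always present,
        # since the downstream document prompt expects one.
        doc.metadata = {"source": "redis", **doc.metadata, "score": score}
    return [doc for doc, _ in docs_and_scores]
```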
ValueError Regarding Required Metadata Variables When Using RedisVectorStoreRetrieverWithScores
https://api.github.com/repos/langchain-ai/langchain/issues/11062/comments
7
2023-09-26T11:05:20Z
2024-02-12T16:13:24Z
https://github.com/langchain-ai/langchain/issues/11062
1,913,261,586
11,062
[ "hwchase17", "langchain" ]
### System Info After the latest [PR](https://github.com/langchain-ai/langchain/pull/10393/files#diff-9874347f7fa335df661ff4089b0922b3214e08a92e9879610424522f806358f7) it seems that asynchronous streaming is missing from libs/langchain/langchain/chat_models/bedrock.py. Why is that? The PR included it, but if you check the file it's not there. Could someone help? @agola11 I'm using LangChain v0.0.301, Python 3.10. ### Who can help? _No response_ ### Information - [x] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction N/A, just check the files. ### Expected behavior Async streaming is available.
BedrockChat has _astream missing
https://api.github.com/repos/langchain-ai/langchain/issues/11061/comments
5
2023-09-26T10:58:00Z
2024-02-05T16:52:23Z
https://github.com/langchain-ai/langchain/issues/11061
1,913,250,008
11,061
[ "hwchase17", "langchain" ]
### Feature request Here: https://github.com/langchain-ai/langchain/blob/a2f7246f0eeafff29b63c90d45ec9e9eb6c21812/libs/langchain/langchain/chains/combine_documents/refine.py#L153C1-L153C1 And here: https://github.com/langchain-ai/langchain/blob/a2f7246f0eeafff29b63c90d45ec9e9eb6c21812/libs/langchain/langchain/chains/combine_documents/refine.py#L202C14-L202C14 The idea is to accept a generator of documents: use `next()` on the generator's iterator to pick the first document for the initial LLM request, and then loop over the same iterator for the remaining ones. The most important part is to make the previous response available when moving on to the next Document. A callback system would also work: one that lets the caller decide whether to break out of the loop and that receives the next document in response to that callback. ### Motivation This would allow apps to decide how large the next Document chunk should be, based on the length of the response received/generated so far, enabling dynamic chunking driven by the previous response. ### Your contribution I have implemented it, but via hacks: a mix of Python `magic` methods imitating list operations in the LangChain code above, plus overriding the _construct_refine_inputs method to read the previous response and decide whether to break out of the inner loop.
Refine chain using generator or custom callbacks
https://api.github.com/repos/langchain-ai/langchain/issues/11058/comments
3
2023-09-26T10:13:12Z
2024-05-09T16:07:08Z
https://github.com/langchain-ai/langchain/issues/11058
1,913,176,650
11,058
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I have a CSV file with 1000 rows and 9 columns. When I ask my agent to provide the top 10 records, it returns a half-finished response. ``` from langchain.llms import OpenAI from langchain.chat_models import ChatOpenAI from langchain.agents.agent_types import AgentType from langchain.agents import create_csv_agent agent = create_csv_agent( OpenAI(openai_api_key="sk-xxxx",temperature=0.6), "pinecone_training with 1000 data.csv", verbose=True, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, ) print(agent.run("Please provide me the top 10 records.")) ``` How can I get the complete, finished response? ### Suggestion: _No response_
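One possible cause, offered as an assumption since the truncated output is not shown: LangChain's `OpenAI` wrapper defaults `max_tokens` to 256, so long answers get cut off mid-generation. Raising that limit when constructing the LLM may help (the value below is illustrative):

```python
# Same setup as above, with a larger completion budget.
agent = create_csv_agent(
    OpenAI(openai_api_key="sk-xxxx", temperature=0.6, max_tokens=1500),
    "pinecone_training with 1000 data.csv",
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
```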
Issue: create_csv_agent returns a half-finished response
https://api.github.com/repos/langchain-ai/langchain/issues/11056/comments
4
2023-09-26T08:15:03Z
2024-01-30T00:37:05Z
https://github.com/langchain-ai/langchain/issues/11056
1,912,956,333
11,056
[ "hwchase17", "langchain" ]
### System Info Python 3.11.4 langchain 0.0.301 PyGithub 1.59.1 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create a github repository with no README file, create a issue with title `Create README file` and content `Find what the most popular frontend framework is right now and add a short blurb to the readme.md file about how this website will take advantage of it.`. 2. Run official example [Github Simple Agent](https://python.langchain.com/docs/integrations/toolkits/github#example-simple-agent). ```python agent.run( "You have the software engineering capabilities of a Google Principle engineer. You are tasked with completing issues on a github repository. Please look at the existing issues and complete them." ) ``` 3. Get run logs: ```text > Entering new AgentExecutor chain... I need to figure out what issues need to be completed. Action: Get Issues Action Input: N/A Observation: Found 1 issues: [{'title': 'Create README file', 'number': 1}] Thought: I need to create the README file Action: Create File Action Input: README.md Observation: Unable to make file due to error: 404 {"message": "Not Found", "documentation_url": "https://docs.github.com/rest/repos/contents#get-repository-content"} Thought: I need to create the file in the repository Action: Create File Action Input: /test/README.md Observation: Unable to make file due to error: 404 {"message": "Not Found", "documentation_url": "https://docs.github.com/rest/repos/contents#get-repository-content"} Thought: I need to create the file in the correct repository Action: Create File Action Input: <repository name>/test/README.md Observation: Unable to make file due to error: 404 {"message": "Not Found", "documentation_url": "https://docs.github.com/rest/repos/contents#get-repository-content"} Thought: I need to make sure I am in the correct repository Action: Get Issues Action Input: N/A Observation: Found 1 issues: [{'title': 'Create README file', 'number': 1}] Thought: I now know the correct repository and can create the file Action: Create File Action Input: <correct repository name>/test/README.md Observation: Unable to make file due to error: 404 {"message": "Not Found", "documentation_url": "https://docs.github.com/rest/repos/contents#get-repository-content"} Thought: ... ``` ### Expected behavior Successfully create file README.md in repository.
Github toolkit create_file always returns 404 when the file does not exist
https://api.github.com/repos/langchain-ai/langchain/issues/11055/comments
2
2023-09-26T07:54:00Z
2024-01-30T00:42:06Z
https://github.com/langchain-ai/langchain/issues/11055
1,912,920,234
11,055
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. `from langchain.embeddings import TensorflowHubEmbeddings` has a model_url parameter for specifying the embedding model as a URL, but I want to use an embedding model that I have already downloaded. ### Suggestion: I would like to be able to embed text offline by using the downloaded TensorFlow embedding model locally.
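A sketch of one possible workaround, assuming the model has already been downloaded and unpacked locally (the directory path below is hypothetical): `tensorflow_hub.load` also accepts a local SavedModel directory, so the same `model_url` parameter can point at it.

```python
from langchain.embeddings import TensorflowHubEmbeddings

# Hypothetical local directory containing the extracted TF Hub SavedModel.
local_model_dir = "/models/universal-sentence-encoder_4"

embeddings = TensorflowHubEmbeddings(model_url=local_model_dir)
vector = embeddings.embed_query("offline embedding test")
```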
Use embedding model downloaded from tensorflow hub
https://api.github.com/repos/langchain-ai/langchain/issues/11054/comments
1
2023-09-26T07:28:32Z
2024-01-30T00:37:09Z
https://github.com/langchain-ai/langchain/issues/11054
1,912,873,338
11,054
[ "hwchase17", "langchain" ]
### System Info My installed versions of LangChain: ``` langchain 0.0.292 langchain-experimental 0.0.20 ``` Running in notebook on M1 MBP. ### Who can help? @hwchase17 ### Information - [x] The official example notebooks/scripts - [x] My own modified scripts ### Related Components - [x] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [x] Agents / Agent Executors - [ ] Tools / Toolkits - [x] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction My complete code: ``` from langchain.llms import Replicate from langchain.utilities import SQLDatabase from langchain_experimental.sql import SQLDatabaseChain os.environ["REPLICATE_API_TOKEN"] = REPLICATE_API_TOKEN llama2_13b = "meta/llama-2-13b-chat:f4e2de70d66816a838a89eeeb621910adffb0dd0baba3976c96980970978018d" llm = Replicate( model=llama2_13b, input={"temperature": 0.01, "max_length": 500, "top_p": 1}, ) db = SQLDatabase.from_uri("sqlite:///Chinook.db") db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True) db_chain.run("How many employees are there?") ``` ### Expected behavior I followed the [SQL Quickstart](https://python.langchain.com/docs/use_cases/qa_structured/sql#quickstart), except using Llama2 on Replicate instead of OpenAI, and got the following error when running `db_chain.run("How many employees are there?")`: ``` db_chain.run("How many employees are there?") db_chain.run("How many employees are there?") [chain/start] [1:chain:SQLDatabaseChain] Entering Chain run with input: { "query": "How many employees are there?" } [chain/start] [1:chain:SQLDatabaseChain > 2:chain:LLMChain] Entering Chain run with input: { "input": "How many employees are there?\nSQLQuery:", "top_k": "5", "dialect": "sqlite", "table_info": "\nCREATE TABLE.... Question: How many employees are there?\nSQLQuery:" ] } [llm/end] [1:chain:SQLDatabaseChain > 2:chain:LLMChain > 3:llm:Replicate] [3.35s] Exiting LLM run with output: { "generations": [ [ { "text": " Sure! Here's the SQL query to retrieve the number of employees:\n```sql\nSELECT COUNT(*) FROM Employee;\n```\nAnd here's the result:\n```\nCOUNT(*): 3\n```\nThere are 3 employees in the database.", "generation_info": null } ] ], "llm_output": null, "run": null } [chain/end] [1:chain:SQLDatabaseChain > 2:chain:LLMChain] [3.35s] Exiting Chain run with output: { "text": " Sure! Here's the SQL query to retrieve the number of employees:\n```sql\nSELECT COUNT(*) FROM Employee;\n```\nAnd here's the result:\n```\nCOUNT(*): 3\n```\nThere are 3 employees in the database." } [chain/error] [1:chain:SQLDatabaseChain] [3.37s] Chain run errored with error: "OperationalError('(sqlite3.OperationalError) near \"Sure\": syntax error')" ``` I think the problem is how to ask Llama2 to return only the SQL query itself without starting with "Sure! Here's the SQL query..." I tried set SYSTEM_PROMPT in llm's input dict, but with no effect.
SQLDatabaseChain.run with Llama 2 caused Chain run OperationalError
https://api.github.com/repos/langchain-ai/langchain/issues/11048/comments
6
2023-09-26T03:55:32Z
2024-02-23T16:07:37Z
https://github.com/langchain-ai/langchain/issues/11048
1,912,630,510
11,048
[ "hwchase17", "langchain" ]
### Feature request Hi, I'm using the Anthropic Claude V2 model through Amazon Bedrock. I'd like to submit a feature request for function calling simialr to `langchain_experimental.llms.anthropic_functions.AnthropicFunctions` ### Motivation A large amount of people are using Anthropic's models through Bedrock ### Your contribution Not sure
AnthropicFunctions with Bedrock
https://api.github.com/repos/langchain-ai/langchain/issues/11045/comments
4
2023-09-26T01:49:33Z
2024-02-14T16:10:28Z
https://github.com/langchain-ai/langchain/issues/11045
1,912,538,506
11,045
[ "hwchase17", "langchain" ]
### System Info python3 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` import boto3 import json PROFILE_NAME = 'PROFILE' REGION_NAME = 'REGION' ENDPOINT_NAME = 'ENDPOINT' CONTENT_TYPE = 'application/json' CUSTOM_ATTRIBUTES = 'accept_eula=true' TEMPERATURE = 0.001 MAX_LENGTH = 1000 def build_llama2_prompt(message): startPrompt = "<s>[INST] " endPrompt = " [/INST]" conversation = [] if message["role"] == "system": conversation.append(f"<<SYS>>\n{message['content']}\n<</SYS>>\n\n") elif message["role"] == "user": conversation.append(message["content"].strip()) else: conversation.append(f" [/INST] {message['content'].strip()}</s><s>[INST] ") return startPrompt + "".join(conversation) + endPrompt def transform_input(prompt): payload = { "inputs": [ [ {"role": "user", "content": prompt}, ], ], "parameters": {"max_new_tokens": 1000, "top_p": 0.6, "temperature": 0.1}, } input_str = json.dumps( payload, ) return input_str def invoke_sagemaker_endpoint(profile_name, region_name, endpoint_name, content_type, custom_attributes, payload): session = boto3.Session(profile_name=profile_name, region_name=region_name) sagemaker = session.client('sagemaker-runtime') response = sagemaker.invoke_endpoint( EndpointName=endpoint_name, ContentType=content_type, Body=json.dumps(payload), CustomAttributes=custom_attributes ) body = response['Body'] prediction_result = json.loads(body.read().decode('utf-8')) return prediction_result def pretty_print_result(result): text = result[0]['generated_text'] print(f"{ANSI_GREEN}Prediction Result:{ANSI_RESET}") print(f"{ANSI_GREEN}{text}{ANSI_RESET}") if __name__ == "__main__": input_text = 'String' print("prompt") print(input_text) prompt = transform_input(input_text) request_payload = { 'inputs': prompt, 'parameters': { "do_sample": True, "top_p": 0.6, "temperature": 0.9, "top_k": 50, "max_new_tokens": 512, "repetition_penalty": 1.03, "stop": ["</s>"] } } prediction_result = invoke_sagemaker_endpoint(PROFILE_NAME, REGION_NAME, ENDPOINT_NAME, CONTENT_TYPE, CUSTOM_ATTRIBUTES, request_payload) pretty_print_result(prediction_result) ``` ### Expected behavior bugging, leading to error: ` raise error_class(parsed_response, operation_name) botocore.errorfactory.ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (424) from primary with message "{ "code":424, "message":"prediction failure", "error":"string indices must be integers" `
Issue with Sagemaker
https://api.github.com/repos/langchain-ai/langchain/issues/11042/comments
2
2023-09-26T00:27:24Z
2024-01-30T00:42:06Z
https://github.com/langchain-ai/langchain/issues/11042
1,912,478,132
11,042
[ "hwchase17", "langchain" ]
### System Info LangChain 0.0.295 Python 3.11.5 pinecone-client 2.2.4 ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I'm calling add_texts to add texts and their metadata to Pinecone. Many of my texts have identical metadata ({"chapter": chapter}), so I created a metadata object and passed that in for each to metadatas as its metadata, when calling add_texts. ` # For each text, associate the chapter number as metadata metadata = {"chapter": chapter} # Add document embeddings to vector store first try: doc_ids = vector_store_service.add_texts( doc_texts, metadatas=[metadata for _ in doc_texts], index_name=index_name, namespace=namespace, ) ` However, add_texts manipulates the items within metadatas, expecting them to be isolated objects. This includes writing a "text" metadata field. https://api.python.langchain.com/en/latest/_modules/langchain/vectorstores/pinecone.html#Pinecone.add_texts Because the texts I passed in were referencing the same metadata object that was passed in, when the LangChain code updated the "metadata" for that particular text, including adding the _text_key, that effectively overwrote the values for all of the texts that shared that same metadata object. The result was that all texts which shared a metadata object that was input ended up having the text of the final text that was processed. ### Expected behavior I worked around this on the client side by changing my metadatas argument so that the items in the list do not reference a shared object. It would be my expectation that inside add_texts, the function would use something like deepcopy to ensure that modifying one metadata (or text, for that matter) does not affect any of the others, which is a weird side effect.
Call to langchain.vectorstores.pinecone.add_texts needs to use deepcopy for metadata
https://api.github.com/repos/langchain-ai/langchain/issues/11035/comments
2
2023-09-25T21:31:26Z
2024-01-30T00:42:10Z
https://github.com/langchain-ai/langchain/issues/11035
1,912,312,968
11,035
[ "hwchase17", "langchain" ]
### Discussed in https://github.com/langchain-ai/langchain/discussions/8630 <div type='discussions-op-text'> <sup>Originally posted by **matthewosm** August 2, 2023</sup> Hi I'm using WebResearchRetriever to do materials research, unfortunately the top results are often PDFs so I get the error: `langchain.document_loaders.async_html:Failed to decode content from xyz.pdf` I've seen there is a [PDF Loader](https://python.langchain.com/docs/modules/data_connection/document_loaders/pdf), but I'm not quite sure how to get it to work with my script. Here is what I have so far: ``` import faiss, logging from langchain.chains import RetrievalQAWithSourcesChain from langchain.chat_models import ChatOpenAI from langchain.retrievers.web_research import WebResearchRetriever from langchain.vectorstores import FAISS from langchain.embeddings.openai import OpenAIEmbeddings from langchain.docstore import InMemoryDocstore from langchain.utilities import GoogleSearchAPIWrapper embedding_model = OpenAIEmbeddings() index = faiss.IndexFlatL2(1536) vectorstore = FAISS(embedding_model.embed_query, index, InMemoryDocstore({}), {}) llm = ChatOpenAI(model_name="gpt-3.5-turbo-16k", temperature=0.5, streaming=True) search = GoogleSearchAPIWrapper() web_retriever = WebResearchRetriever.from_llm(vectorstore=vectorstore, llm=llm, search=search, num_search_results=6) question = "Define THIS MATERIAL" if question: logging.basicConfig() logging.getLogger("langchain.retrievers.web_research").setLevel(logging.INFO) qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llm, retriever=web_retriever) result = qa_chain({"question": question}) answer, sources = result['answer'], result['sources'] print(answer, sources) ```</div>
WebResearchRetriever fails on PDF files
https://api.github.com/repos/langchain-ai/langchain/issues/11031/comments
3
2023-09-25T19:52:59Z
2024-04-02T16:05:54Z
https://github.com/langchain-ai/langchain/issues/11031
1,912,185,273
11,031
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. Breaking change in 0.0.300 from 0.0.299. 3.0.0 gets `Exception building prompt: predict() got an unexpected keyword argument 'stop_sequences'"` when the RetrievalQA chain calls `combine_documents_chain.run` for a "stuff" chain. Root cause was a change in `_VertexAIBase` in [this commit](https://github.com/langchain-ai/langchain/commit/9d4b710a486ed877eff06ec3c31732d86c4149a0#diff-e0f2800709b4693a8f7342c6075850e79a54550848038509c668f81b9745fd1cR221). Bumping `google-cloud-aiplatform==1.28.0` to `google-cloud-aiplatform==1.33.1` resolved the issue. Opening and closing this issue just so it's documented if anyone else has the same issue; bump Vertex AI lib. Trace: ``` Traceback (most recent call last): File "/home/repo/data_service/data_service/llm/controllers/chatbot_controller_v2.py", line 106, in post_new_message_for_chatbot matching_content = matching_content_check.run(incoming_user_message) File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/chains/base.py", line 487, in run return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[ File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/chains/base.py", line 292, in __call__ raise e File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/chains/base.py", line 286, in __call__ self._call(inputs, run_manager=run_manager) File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/chains/retrieval_qa/base.py", line 139, in _call answer = self.combine_documents_chain.run( File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/chains/base.py", line 492, in run return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[ File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/chains/base.py", line 292, in __call__ raise e File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/chains/base.py", line 286, in __call__ self._call(inputs, run_manager=run_manager) File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/chains/combine_documents/base.py", line 105, in _call output, extra_return_dict = self.combine_docs( File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/chains/combine_documents/stuff.py", line 171, in combine_docs return self.llm_chain.predict(callbacks=callbacks, **inputs), {} File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/chains/llm.py", line 257, in predict return self(kwargs, callbacks=callbacks)[self.output_key] File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/chains/base.py", line 292, in __call__ raise e File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/chains/base.py", line 286, in __call__ self._call(inputs, run_manager=run_manager) File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/chains/llm.py", line 93, in _call response = self.generate([inputs], run_manager=run_manager) File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/chains/llm.py", line 103, in generate return self.llm.generate_prompt( File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/llms/base.py", line 504, in generate_prompt return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs) File 
"/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/llms/base.py", line 653, in generate output = self._generate_helper( File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/llms/base.py", line 541, in _generate_helper raise e File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/llms/base.py", line 528, in _generate_helper self._generate( File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/llms/vertexai.py", line 281, in _generate res = completion_with_retry( File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/llms/vertexai.py", line 102, in completion_with_retry return _completion_with_retry(*args, **kwargs) File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/tenacity/__init__.py", line 326, in wrapped_f return self(f, *args, **kw) File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/tenacity/__init__.py", line 406, in __call__ do = self.iter(retry_state=retry_state) File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/tenacity/__init__.py", line 351, in iter return fut.result() File "/home/.pyenv/versions/3.9.14/lib/python3.9/concurrent/futures/_base.py", line 439, in result return self.__get_result() File "/home/.pyenv/versions/3.9.14/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result raise self._exception File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/tenacity/__init__.py", line 409, in __call__ result = fn(*args, **kwargs) File "/home/.pyenv/versions/vida-data_service/lib/python3.9/site-packages/langchain/llms/vertexai.py", line 100, in _completion_with_retry return llm.client.predict(*args, **kwargs) TypeError: predict() got an unexpected keyword argument 'stop_sequences' ``` ### Suggestion: _No response_
Issue: 0.0.300 gives `Exception building prompt: predict() got an unexpected keyword argument 'stop_sequences'"`
https://api.github.com/repos/langchain-ai/langchain/issues/11029/comments
0
2023-09-25T19:11:52Z
2023-09-25T19:12:15Z
https://github.com/langchain-ai/langchain/issues/11029
1,912,125,971
11,029
[ "hwchase17", "langchain" ]
### Issue with current documentation: There's no example of the serialized format of a chat template: https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompt_serialization Yet the LangChain hub already offers such templates: https://smith.langchain.com/hub/homanp/superagent What would the serialized YAML file look like? ### Idea or request for content: An example of a serialized chat template
DOC: What's Serialized format of ChatPromptTemplate from langchain hub?
https://api.github.com/repos/langchain-ai/langchain/issues/11024/comments
2
2023-09-25T18:32:36Z
2024-01-30T00:48:03Z
https://github.com/langchain-ai/langchain/issues/11024
1,912,066,916
11,024
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I have a simple Retrieval QA chain that used to work properly. Here's an example: ![image](https://github.com/langchain-ai/langchain/assets/48955402/7242d7b9-e11a-4a00-8070-bf53b7d6c708) But now, without any modifications to the code, it always returns an empty string as a result: ![image](https://github.com/langchain-ai/langchain/assets/48955402/9a71c72e-e7f8-4b37-a7bf-1298f0ef3811) Any idea why? ### Suggestion: _No response_
Issue: <Retrieval QA returns empty result>
https://api.github.com/repos/langchain-ai/langchain/issues/11015/comments
7
2023-09-25T14:01:45Z
2024-02-20T04:48:21Z
https://github.com/langchain-ai/langchain/issues/11015
1,911,582,342
11,015
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. The goal is to use custom prompt template with retrievalqa chain. I have included my code below. You can see that PromptTemplate includes the input variables ['context', 'persona', 'question'], but I can't figure out where to pass in the 'persona' variable. Some examples of a 'persona' variable would be "Pirate" or "Shakespeare" to get the LLM to respond like a pirate or Shakespeare. The 'persona' variable is a simplification of the real question, which is how to use custom input variables and custom PromptTemplate with RetrievalQA chain. This may stem from not understanding RetrievalQA chain under the hood. You'll notice that when I call the chain, I need to pass in only the "query" parameter (which is not an input variable from the custom promptTemplate). I don't understand this as well: where do the input variables "context" and "question" go? ``` from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma from langchain.chains import RetrievalQA from langchain.document_loaders import PyPDFLoader from langchain.chat_models import ChatOpenAI from langchain.text_splitter import TokenTextSplitter from langchain import PromptTemplate import os os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY # Define the LLM # llm = ChatOpenAI(temperature=0,model_name="gpt-4") llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo") # Data Ingestion loader = PyPDFLoader("../data/Batman_wiki.pdf") pages = loader.load() # Chunk and Embeddings text_splitter = TokenTextSplitter(chunk_size=100, chunk_overlap=0) texts = text_splitter.split_documents(pages) embeddings = OpenAIEmbeddings() # Vector Store db = Chroma.from_documents(documents=texts, embedding=embeddings) # Create Prompt template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know. Don't try to make up an answer. {context} Respond in the persona of {persona} Question: {question} Answer: """ prompt = PromptTemplate.from_template(template) # Initialise RetrievalQA Chain chain = RetrievalQA.from_chain_type( llm, retriever=db.as_retriever(), return_source_documents=True, chain_type_kwargs={"prompt": prompt}, # persona="Shakespeare" ) response = chain({"query": question, "persona": "Shakespeare"}) print(response) ``` ### Suggestion: _No response_
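To answer the two questions briefly, based on how the chain works rather than on anything specific to this setup: the `"query"` input is mapped to the prompt's `{question}` variable, and `{context}` is filled with the retrieved documents by the combine-documents step, which is why only `"query"` has to be passed in. For the extra variable, one option is to bind it up front with `.partial()`, so the prompt handed to RetrievalQA only needs the standard inputs. A sketch reusing the names above:

```python
# Bind the persona ahead of time; the chain then fills in context and question.
prompt = PromptTemplate.from_template(template).partial(persona="Shakespeare")

chain = RetrievalQA.from_chain_type(
    llm,
    retriever=db.as_retriever(),
    return_source_documents=True,
    chain_type_kwargs={"prompt": prompt},
)

response = chain({"query": question})
```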
Issue: How to use custom prompt template with retrievalqa chain
https://api.github.com/repos/langchain-ai/langchain/issues/11014/comments
12
2023-09-25T13:48:24Z
2024-03-20T05:56:18Z
https://github.com/langchain-ai/langchain/issues/11014
1,911,555,203
11,014
[ "hwchase17", "langchain" ]
### System Info I am using LangChain's map-reduce to summarize long text by loading a local LLM (chatglm2-6B), but when I run the code I get the error below. I do not have access to https://huggingface.co/ from my environment; how can I solve this? **error:**
```
HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /gpt2/resolve/main/tokenizer_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f58b69e8090>, 'Connection to huggingface.co timed out. (connect timeout=10)'))' thrown while requesting HEAD https://huggingface.co/gpt2/resolve/main/tokenizer_config.json
'HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /gpt2/resolve/main/vocab.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f58b6e1a010>, 'Connection to huggingface.co timed out. (connect timeout=10)'))' thrown while requesting HEAD https://huggingface.co/gpt2/resolve/main/vocab.json
```
**Partial code:**
```python
if __name__ == '__main__':
    model_path = "/home/model/chatglm2-6b-32k"
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    model = AutoModel.from_pretrained(model_path, trust_remote_code=True).half().cuda()
    model = model.eval()
    uvicorn.run(app, host='0.0.0.0', port=8000, workers=1)

def mapreduce(llm, split_docs, map_template, reduce_template):
    map_prompt = PromptTemplate.from_template(map_template)
    map_chain = LLMChain(llm=llm, prompt=map_prompt)
    # Reduce
    reduce_prompt = PromptTemplate.from_template(reduce_template)
    reduce_chain = LLMChain(llm=llm, prompt=reduce_prompt)
    # Takes a list of documents, combines them into a single string, and passes this to an LLMChain
    combine_documents_chain = StuffDocumentsChain(llm_chain=reduce_chain, document_variable_name="doc_summaries")
    # Combines and iteratively reduces the mapped documents
    reduce_documents_chain = ReduceDocumentsChain(
        # This is final chain that is called.
        combine_documents_chain=combine_documents_chain,
        # If documents exceed context for `StuffDocumentsChain`
        collapse_documents_chain=combine_documents_chain,)
        # The maximum number of tokens to group documents into.
        #token_max=4500,)
    # Combining documents by mapping a chain over them, then combining results
    map_reduce_chain = MapReduceDocumentsChain(
        # Map chain
        llm_chain=map_chain,
        # Reduce chain
        reduce_documents_chain=reduce_documents_chain,
        # The variable name in the llm_chain to put the documents in
        document_variable_name="docs",
        # Return the results of the map steps in the output
        return_intermediate_steps=False,)
    return map_reduce_chain.run(split_docs)
```
### Who can help? @dosu-beta @sbusso @jarib @zeke @deepblue @ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction above ### Expected behavior Do not visit Hugging Face; tokenize via the local LLM.
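One offline workaround, stated as an assumption about the setup since the LangChain LLM wrapper itself is not shown: for non-OpenAI models, LangChain's default token counting downloads the GPT-2 tokenizer from the Hugging Face Hub, which is what triggers these requests. If chatglm2-6B is wrapped in a custom `LLM` subclass, overriding `get_num_tokens` so it uses the locally loaded ChatGLM tokenizer avoids the download entirely.

```python
from typing import Any, List, Optional
from langchain.llms.base import LLM

class ChatGLM(LLM):
    """Minimal sketch reusing the module-level model/tokenizer loaded above."""

    @property
    def _llm_type(self) -> str:
        return "chatglm2"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        # ChatGLM's chat API returns (response, history).
        response, _ = model.chat(tokenizer, prompt, history=[])
        return response

    def get_num_tokens(self, text: str) -> int:
        # Count tokens with the local ChatGLM tokenizer instead of downloading gpt2.
        return len(tokenizer.encode(text))
```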
Connect timeout when using map-reduce to summarize; how to avoid accessing Hugging Face?
https://api.github.com/repos/langchain-ai/langchain/issues/11011/comments
12
2023-09-25T12:26:27Z
2024-03-19T17:18:10Z
https://github.com/langchain-ai/langchain/issues/11011
1,911,400,410
11,011
[ "hwchase17", "langchain" ]
### System Info RTX 3090 ### Who can help? @agola11 @hwchase17 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Code: https://colab.research.google.com/drive/1Po3oQcSAWjOkvSKc4o57rDs3n1PFZe20?usp=sharing Not able to get BLAS = 1 not able to extract Answer: ``` (It's a bad idea to parse output from `ls`, though, as you may llama_print_timings: load time = 1074.43 ms llama_print_timings: sample time = 180.71 ms / 256 runs ( 0.71 ms per token, 1416.67 tokens per second) llama_print_timings: prompt eval time = 0.00 ms / 1 tokens ( 0.00 ms per token, inf tokens per second) llama_print_timings: eval time = 9593.04 ms / 256 runs ( 37.47 ms per token, 26.69 tokens per second) llama_print_timings: total time = 10139.91 ms ' You can use the find command with a few options to this task. Here is an example of how you might go about it:\n\nfind . -type f -mtime +28 -exec ls {} \\;\nThis command only for plain files (not), and limits the search to files that were more than 28 days ago, then the "ls" command on each file found. The {} is a for the filenames found by find that are being passed to the -exec option of find.\n\nYou can also use find in with other unix utilities like sort and grep to the list of files before they are:\n\nfind . -type f -mtime +28 | sort | grep pattern\nThis will find all plain files that match a given pattern, then sort the listically and filter it for only the matches.\n\nAnswer: `find` is pretty with its search. The should work as well:\n\n\\begin{code}\nls -l $(find . -mtime +28)\n\\end{code}\n\n(It\'s a bad idea to parse output from `ls`, though, as you may' from langchain.chains.question_answering import load_qa_chain # Prompt template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. Use three sentences maximum and keep the answer as concise as possible. {context} Question: {question} Helpful Answer:""" QA_CHAIN_PROMPT = PromptTemplate( input_variables=["context", "question"], template=template, ) API Reference: [load_qa_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.question_answering.load_qa_chain.html) We can also use the LangChain Prompt Hub to store and fetch prompts. This will work with your [LangSmith API key](https://docs.smith.langchain.com/). Let's try with a default RAG prompt, [here](https://smith.langchain.com/hub/rlm/rag-prompt). from langchain import hub QA_CHAIN_PROMPT = hub.pull("rlm/rag-prompt-default") # Docs question = "How can I initialize a ReAct agent?" 
docs = retriever.get_relevant_documents(question) # Chain chain = load_qa_chain(llm, chain_type="stuff", prompt=QA_CHAIN_PROMPT) # Run chain({"input_documents": docs, "question": question}, return_only_outputs=True) Llama.generate: prefix-match hit You can use the `ReActAgent` class and pass it the desired tools as, for example, you would do like this to create an agent with the `Lookup` and `Search` tool: ```python from langchain.agents.react import ReActAgent from langchain.tools.lookup import Lookup from langchain.tools.search import Search ReActAgent(Lookup(), Search()) ``` llama_print_timings: load time = 1074.43 ms llama_print_timings: sample time = 65.46 ms / 94 runs ( 0.70 ms per token, 1435.95 tokens per second) llama_print_timings: prompt eval time = 15975.57 ms / 1408 tokens ( 11.35 ms per token, 88.13 tokens per second) llama_print_timings: eval time = 4772.57 ms / 93 runs ( 51.32 ms per token, 19.49 tokens per second) llama_print_timings: total time = 20959.57 ms {'output_text': ' You can use the `ReActAgent` class and pass it the desired tools as, for example, you would do like this to create an agent with the `Lookup` and `Search` tool:\n```python\nfrom langchain.agents.react import ReActAgent\nfrom langchain.tools.lookup import Lookup\nfrom langchain.tools.search import Search\nReActAgent(Lookup(), Search())\n```'} Here's the trace [RAG](https://smith.langchain.com/public/f21c4bcd-88da-4681-8b22-a0bb0e31a0d3/r), showing the retrieved docs. Previous Chatbots Next Extraction [Use case](https://python.langchain.com/docs/use_cases/code_understanding#use-case) [Overview](https://python.langchain.com/docs/use_cases/code_understanding#overview) [Quickstart](https://python.langchain.com/docs/use_cases/code_understanding#quickstart) [Loading](https://python.langchain.com/docs/use_cases/code_understanding#loading) [Splitting](https://python.langchain.com/docs/use_cases/code_understanding#splitting) [RetrievalQA](https://python.langchain.com/docs/use_cases/code_understanding#retrievalqa) [Chat](https://python.langchain.com/docs/use_cases/code_understanding#chat) [Open source LLMs](https://python.langchain.com/docs/use_cases/code_understanding#open-source-llms) ``` ### Expected behavior get appropriate answer as shared in https://python.langchain.com/docs/use_cases/code_understanding
Challenges working with llamacpp with Langchain https://python.langchain.com/docs/use_cases/code_understanding
https://api.github.com/repos/langchain-ai/langchain/issues/11008/comments
8
2023-09-25T09:09:58Z
2024-01-30T00:42:10Z
https://github.com/langchain-ai/langchain/issues/11008
1,911,032,039
11,008
[ "hwchase17", "langchain" ]
### System Info from langchain.document_loaders import DirectoryLoader, UnstructuredMarkdownLoader from langchain.text_splitter import MarkdownTextSplitter try: loader_pdf = DirectoryLoader('data/', glob="**/*.md", loader_cls=UnstructuredMarkdownLoader) documents = loader_pdf.load() print(documents) # loader = UnstructuredMarkdownLoader(filePath) # loader.load() except Exception as e: print(e) ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 任意的md 文件 都无法加载 ### Expected behavior 能够获取到md 文件 load 获取非结构化数据
UnstructuredMarkdownLoader load 报错 Ran out of input
https://api.github.com/repos/langchain-ai/langchain/issues/11003/comments
2
2023-09-25T06:34:47Z
2024-01-30T00:53:45Z
https://github.com/langchain-ai/langchain/issues/11003
1,910,772,467
11,003
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. ![image](https://github.com/langchain-ai/langchain/assets/75439567/70de0078-caa6-4dc1-902d-c911e6aa1c8e) ### Suggestion: I am using model chatgpt 3.5 . But I got errror . Why i get this error . Can U help me to find solution for it . data in my database is so big . So upgrading model version is not santisfactory.
Issue: <This model's maximum context length is 4097 tokens, however you requested 4226 tokens>
https://api.github.com/repos/langchain-ai/langchain/issues/11001/comments
2
2023-09-25T05:04:19Z
2024-01-30T00:40:28Z
https://github.com/langchain-ai/langchain/issues/11001
1,910,668,968
11,001
[ "hwchase17", "langchain" ]
### System Info I am developing a chatbot (surprise!) for our company, and I have previously been able to execute the following code used by Agent: ```python chain = RetrievalQAWithSourcesChain.from_llm( llm=llm, retriever=vectorstore.as_retriever(), max_tokens_limit=3000, reduce_k_below_max_tokens=True ) tools.append(Tool.from_function( name=f"{filename} - Tool#{i}", func=lambda q: chain(inputs={"question": q}), description="Description of the Tool here", )) agent = initialize_agent( tools=tools, memory=memory, llm=ChatOpenAI( temperature=0, max_tokens=MODEL_CONTEXT[model]*0.1, max_retries=2, model=model ), agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True ) ... agent.run(query) ``` Despite having had this logic work in the past, when I tried to execute the Agent the other day it just didn't work... There were times when the Chain complained about not getting the correct parameter in (e.g. "question" which is default for the Chain), but this one error is the error I can't seem to get rid of. What am I doing wrong? Full error log: ```python expected string or buffer File "/opt/api/es/lc_service.py", line 686, in answer_agent results = agent.run(query) File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 487, in run return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[ File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 292, in __call__ raise e File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 286, in __call__ self._call(inputs, run_manager=run_manager) File "/usr/local/lib/python3.10/site-packages/langchain/agents/agent.py", line 1122, in _call next_step_output = self._take_next_step( File "/usr/local/lib/python3.10/site-packages/langchain/agents/agent.py", line 977, in _take_next_step observation = tool.run( File "/usr/local/lib/python3.10/site-packages/langchain/tools/base.py", line 360, in run raise e File "/usr/local/lib/python3.10/site-packages/langchain/tools/base.py", line 332, in run self._run(*tool_args, run_manager=run_manager, **tool_kwargs) File "/usr/local/lib/python3.10/site-packages/langchain/tools/base.py", line 511, in _run else self.func(*args, **kwargs) File "/opt/api/es/lc_service.py", line 1089, in <lambda> func=lambda q: chain(inputs={"question": q}), File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 292, in __call__ raise e File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 286, in __call__ self._call(inputs, run_manager=run_manager) File "/usr/local/lib/python3.10/site-packages/langchain/chains/qa_with_sources/base.py", line 151, in _call docs = self._get_docs(inputs, run_manager=_run_manager) File "/usr/local/lib/python3.10/site-packages/langchain/chains/qa_with_sources/retrieval.py", line 50, in _get_docs docs = self.retriever.get_relevant_documents( File "/usr/local/lib/python3.10/site-packages/langchain/schema/retriever.py", line 212, in get_relevant_documents raise e File "/usr/local/lib/python3.10/site-packages/langchain/schema/retriever.py", line 205, in get_relevant_documents result = self._get_relevant_documents( File "/usr/local/lib/python3.10/site-packages/langchain/vectorstores/base.py", line 562, in _get_relevant_documents docs = self.vectorstore.similarity_search(query, **self.search_kwargs) File "/usr/local/lib/python3.10/site-packages/langchain/vectorstores/elasticsearch.py", line 592, in similarity_search results = self._search(query=query, k=k, filter=filter, **kwargs) File 
"/usr/local/lib/python3.10/site-packages/langchain/vectorstores/elasticsearch.py", line 663, in _search query_vector = self.embedding.embed_query(query) File "/usr/local/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 518, in embed_query return self.embed_documents([text])[0] File "/usr/local/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 490, in embed_documents return self._get_len_safe_embeddings(texts, engine=self.deployment) File "/usr/local/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 351, in _get_len_safe_embeddings token = encoding.encode( File "/usr/local/lib/python3.10/site-packages/tiktoken/core.py", line 116, in encode if match := _special_token_regex(disallowed_special).search(text): ``` ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce the behaviour: Follow the rough steps above: 1. Create Agent from LLM 2. Use RetrievalQAWithSourcesChain as Tool 3. Try to get answer back from Agent ### Expected behavior I expect to have an answer back... any is good at this point 😭
Error "expected string or buffer" when trying to use RetrievalQAWithSourcesChain as Tool for Agent
https://api.github.com/repos/langchain-ai/langchain/issues/10997/comments
3
2023-09-25T02:42:31Z
2023-10-02T08:21:39Z
https://github.com/langchain-ai/langchain/issues/10997
1,910,551,109
10,997
[ "hwchase17", "langchain" ]
### System Info Langchain v0.0.225 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction
```python
from langchain.text_splitter import Language
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import LanguageParser

loader = GenericLoader.from_filesystem(
    repo_path,
    glob="**/*",
    suffixes=[".h", ".c"],
    parser=LanguageParser(language=Language.CPP)
)
```
```
File ~/ML/llama-venv/lib64/python3.11/site-packages/langchain/document_loaders/parsers/language/language_parser.py:116, in LanguageParser.lazy_parse(self, blob)
    107 yield Document(
    108     page_content=code,
    109     metadata={
    (...)
    112     },
    113 )
    114 return
--> 116 self.Segmenter = LANGUAGE_SEGMENTERS[language]
    117 segmenter = self.Segmenter(blob.as_string())
    118 if not segmenter.is_valid():
```
**KeyError: <Language.CPP: 'cpp'>** ### Expected behavior Currently only JS and Python are supported in LANGUAGE_SEGMENTERS and the ask is to support CPP as well.
Support for CPP in LanguageParser is not complete
https://api.github.com/repos/langchain-ai/langchain/issues/10996/comments
2
2023-09-25T01:20:56Z
2024-02-13T16:45:51Z
https://github.com/langchain-ai/langchain/issues/10996
1,910,489,760
10,996
[ "hwchase17", "langchain" ]
I'm encountering an error when running the code snippet below. The error doesn't occur when I use AzureCognitiveSearchRetriever as the retriever, but persists with `retriever2 = AzureSearchVectorStoreRetriever(vectorstore=vector_store, search_type="hybrid", k=8)`. Appreciate the help, thank you! ``` chain35 = RetrievalQAWithSourcesChain.from_chain_type( model, chain_type="stuff", retriever=retriever2, chain_type_kwargs=chain_type_kwargs, max_tokens_limit=7000, reduce_k_below_max_tokens=True, return_source_documents=True ) with get_openai_callback() as cb: answer = chain35({"question": query}, return_only_outputs=True) ``` ### Suggestion: _No response_
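For context, the `vectors` keyword is only understood by the preview releases of the azure-search-documents SDK, so this error often signals a version mismatch rather than a LangChain bug; treat that pin as an assumption worth verifying against the integration docs. A quick check of what is installed:

```python
from importlib.metadata import version

# The LangChain AzureSearch store passes `vectors=` into SearchClient.search(),
# which only the 11.4.0bX preview releases of azure-search-documents accept
# (an assumption to cross-check against the docs for your LangChain version).
print("azure-search-documents:", version("azure-search-documents"))
print("langchain:", version("langchain"))
```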
Issue: TypeError: request() got an unexpected keyword argument 'vectors' when using AzureSearchVectorStore
https://api.github.com/repos/langchain-ai/langchain/issues/10995/comments
2
2023-09-25T00:41:22Z
2024-03-18T16:05:35Z
https://github.com/langchain-ai/langchain/issues/10995
1,910,456,017
10,995
[ "hwchase17", "langchain" ]
### System Info langchain "^0.0.300" Windows Subsystem for Linux Python 3.11.5 Supabase Postgres PGVector ### Who can help? @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``The [documentation](https://python.langchain.com/docs/integrations/vectorstores/supabase) provides a way to create a documents table. I did some changes: 1. `id ` is Not Null 2. Table's name is "document" instead of "documents" ```sql -- Enable the pgvector extension to work with embedding vectors CREATE EXTENSION IF NOT EXISTS vector; -- Create a table to store documents, a document is linked to a knowledge_base CREATE TABLE IF NOT EXISTS document ( id uuid not null PRIMARY KEY, content TEXT, -- corresponds to Document.pageContent metadata JSONB, -- corresponds to Document.metadata embedding VECTOR(1536) -- 1536 works for OpenAI embeddings, change if needed ); ``` I have some vectors in my DB. Now I'm trying to query them following the [documentation](https://python.langchain.com/docs/integrations/vectorstores/supabase) examples. Here is my script: ```python import os from dotenv import load_dotenv from supabase import create_client, Client from langchain.vectorstores import SupabaseVectorStore from langchain.embeddings.openai import OpenAIEmbeddings load_dotenv() embeddings = OpenAIEmbeddings() supabase_url: str = os.environ.get("SUPABASE_URL") supabase_key: str = os.environ.get("SUPABASE_SERVICE_KEY") openai_key: str = os.environ.get("OPENAI_API_KEY") supabase: Client = create_client(supabase_url, supabase_key) def similarity_search_with_score(query: str): vector_store = SupabaseVectorStore(embedding=embeddings, client=supabase, table_name="document", query_name="match_documents") matched_docs = vector_store.similarity_search_with_relevance_scores(query) print(matched_docs[0].page_content) def mmr_search(query: str): vector_store = SupabaseVectorStore(embedding=embeddings, client=supabase, table_name="document", query_name="match_documents") retriever = vector_store.as_retriever(search_type="mmr") matched_docs = retriever.get_relevant_documents(query) print(matched_docs[0].page_content) def main(): query = "Que es Sader?" similarity_search_with_score(query) mmr_search(query) main() ``` `vector_store.similarity_search_with_relevance_scores` works as expected. However `retriever.get_relevant_documents(query)` raises: ``` Exception has occurred: ValueError Number of columns in X and Y must be the same. X has shape (1, 1536) and Y has shape (20, 0). File "/home/ahr/synapzia/service/service/retrieve.py", line 24, in mmr_search matched_docs = retriever.get_relevant_documents(query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ahr/synapzia/service/service/retrieve.py", line 28, in main mmr_search("Que es Sader?") File "/home/ahr/synapzia/service/service/retrieve.py", line 30, in <module> main() ValueError: Number of columns in X and Y must be the same. X has shape (1, 1536) and Y has shape (20, 0). ``` _Notes_: 1. `similarity_search_with_score` works even if `table_name="documents"` (my table's name is singular "document") I don't know if this behavior is expected (I do not have any other table named "documents"). 2. 
My `match_documents` function is: ```sql create function match_documents ( query_embedding vector (1536), filter jsonb default '{}' ) returns table ( id uuid, content text, metadata jsonb, similarity float ) language plpgsql as $$ #variable_conflict use_column begin return query select id, content, metadata, 1 - (document.embedding <=> query_embedding) as similarity from document where metadata @> filter order by document.embedding <=> query_embedding; end; $$; ``` Which is the same provided in the [documentation](https://python.langchain.com/docs/integrations/vectorstores/supabase). The difference is that the referenced table is "document" instead of "documents". ### Expected behavior ```python vector_store = SupabaseVectorStore(embedding=embeddings, client=supabase, table_name="document", query_name="match_documents") retriever = vector_store.as_retriever(search_type="mmr") retriever.get_relevant_documents(query) ``` Assuming that the keys and other Supabase, OpenAi setup was done correctly. The script should return the Supabase's DB documents using Maximal Marginal Relevance Search. Instead of raising ValueError.
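One detail worth checking for the MMR path: the `(20, 0)` shape suggests the RPC returned rows with no embedding values, and MMR re-ranks candidates client-side, so the match function is assumed to also return the stored embedding column. A hedged sketch of such a function, kept as a Python string to run in the Supabase SQL editor (verify the exact signature against the integration docs for your version):

```python
# Hedged sketch: a match function that also returns the stored embedding,
# which the MMR search needs for client-side re-ranking. The casts and the
# signature are assumptions; run the SQL in the Supabase SQL editor.
MATCH_DOCUMENTS_FOR_MMR = """
create or replace function match_documents (query_embedding vector(1536), filter jsonb default '{}')
returns table (id uuid, content text, metadata jsonb, embedding jsonb, similarity float)
language plpgsql as $$
#variable_conflict use_column
begin
  return query
  select id, content, metadata, (embedding::text)::jsonb as embedding,
         1 - (document.embedding <=> query_embedding) as similarity
  from document
  where metadata @> filter
  order by document.embedding <=> query_embedding;
end;
$$;
"""
```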
SupabaseVectorStore as_retriever raises ValueError
https://api.github.com/repos/langchain-ai/langchain/issues/10994/comments
7
2023-09-25T00:31:06Z
2024-07-08T13:40:48Z
https://github.com/langchain-ai/langchain/issues/10994
1,910,447,958
10,994
[ "hwchase17", "langchain" ]
### Feature request I used the Custom LLM Chat Model Agent and created my agent like this: ``` class CustomPromptTemplate(StringPromptTemplate): template: str tools: List[Tool] def format(self, **kwargs) -> str: # Get the intermediate steps (AgentAction, Observation tuples) # Format them in a particular way intermediate_steps = kwargs.pop("intermediate_steps") thoughts = "" for action, observation in intermediate_steps: thoughts += action.log thoughts += f"\nObservation: {observation}\nThought: " # Set the agent_scratchpad variable to that value kwargs["agent_scratchpad"] = thoughts # Create a tools variable from the list of tools provided kwargs["tools"] = "\n".join( [f"{tool.name}: {tool.description}" for tool in self.tools]) # Create a list of tool names for the tools provided kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools]) return self.template.format(**kwargs) ``` The problem is that this is a very inefficient way to prompt: 1) There should be a better way to present the already-tried actions. 2) There should be a better way to present the tools (maybe there are some tools that you will eliminate after using them once). 3) Overall, better use of the prompt. For example, it cannot handle this kind of problem: in one of the steps the agent means to choose GetCurrentWeather, but instead it takes 'Use GetCurrentWeather to get the current weather in Istanbul.' as the name of the tool. ``` input: "{'requested_tool_name': 'Use GetCurrentWeather to get the current weather in Istanbul.', 'available_tool_names': ['DuckDuckSearch', 'Search', 'Wikipedia', 'GetCurrentWeather', 'GetTickerName', 'GetTickerPrice']}" ``` the result: ``` Use GetCurrentWeather to get the current weather in Istanbul. is not a valid tool, try one of [DuckDuckSearch, Search, Wikipedia, GetCurrentWeather, GetTickerName, GetTickerPrice]. ``` Is it possible to use OpenAI function calling directly, like the OpenAI functions agent? Maybe that fixes the last problem. ### Motivation The ReAct way is too messy and inefficient to use; there should be a much more structured way to use CustomPromptTemplate and the CustomOutputParser. ### Your contribution If people comment on it, I can help with this.
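On the last question: with an OpenAI chat model, the functions-based agent avoids parsing "Action:" lines out of free text, so tool names can no longer be mangled into full sentences. A minimal hedged sketch, reusing the existing `tools` list:

```python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)

# Tool selection happens through OpenAI function calling instead of text parsing.
agent = initialize_agent(
    tools,  # your existing Tool list
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
)
agent.run("What is the current weather in Istanbul?")
```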
Better usage of the Template Prompt: More Structure
https://api.github.com/repos/langchain-ai/langchain/issues/10992/comments
2
2023-09-24T20:48:02Z
2024-01-30T00:43:41Z
https://github.com/langchain-ai/langchain/issues/10992
1,910,374,299
10,992
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. When I run the following line: db = Chroma.from_documents(documents=chunks, embedding=embeddings, persist_directory=persist_directory) I get the error "You are using a deprecated configuration of Chroma." I need to know what the new form of Chroma.from_documents() function is so that I can specify a persistent directory. ### Suggestion: _No response_
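For what it's worth, `Chroma.from_documents(..., persist_directory=...)` is still valid; the deprecation warning comes from chromadb's 0.4.x configuration change rather than from LangChain. One hedged way to be explicit is to construct the client yourself (older on-disk data created before chromadb 0.4 may additionally need the chroma-migrate tool, which is an assumption to verify):

```python
import chromadb
from langchain.vectorstores import Chroma

# Explicit on-disk client (chromadb >= 0.4).
client = chromadb.PersistentClient(path=persist_directory)

db = Chroma.from_documents(
    documents=chunks,
    embedding=embeddings,
    client=client,
    collection_name="my_collection",  # illustrative name
)
```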
Error with db = Chroma.from_documents(documents=chunks, embedding=embeddings, persist_directory=persist_directory)
https://api.github.com/repos/langchain-ai/langchain/issues/10990/comments
5
2023-09-24T19:40:20Z
2023-11-09T05:19:35Z
https://github.com/langchain-ai/langchain/issues/10990
1,910,354,977
10,990
[ "hwchase17", "langchain" ]
### Feature request Add to_dict() function implementation to Memory classes such as BaseMemory, ConversationBufferMemory, etc. ### Motivation Currently, we cannot use the JSON library to JSONIFY or Serialize the contents of the memory classes. This may be important if we want to persist the memory in DB for reference, logging, and tracking. Doing `json.dumps(memory)` throws error: `TypeError: Object of type ConversationBufferMemory is not JSON serializable` ### Your contribution Adding the implementation for to_dict() function to memory classes
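Until such a method exists, the chat history can already be round-tripped with the helpers in `langchain.schema`; a hedged sketch of the workaround:

```python
import json

from langchain.memory import ConversationBufferMemory
from langchain.schema import messages_from_dict, messages_to_dict

memory = ConversationBufferMemory(return_messages=True)
memory.save_context({"input": "hi"}, {"output": "hello"})

# Serialize just the chat history; the rest of the memory config is plain fields.
payload = json.dumps(messages_to_dict(memory.chat_memory.messages))

# ...later, rebuild the memory from the stored JSON.
restored = ConversationBufferMemory(return_messages=True)
restored.chat_memory.messages = messages_from_dict(json.loads(payload))
```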
Add to_dict() implementation to Memory classes which are non-serializable
https://api.github.com/repos/langchain-ai/langchain/issues/10987/comments
2
2023-09-24T15:25:52Z
2024-02-06T16:29:22Z
https://github.com/langchain-ai/langchain/issues/10987
1,910,272,535
10,987
[ "hwchase17", "langchain" ]
null
Host the LLM model
https://api.github.com/repos/langchain-ai/langchain/issues/10986/comments
1
2023-09-24T14:29:16Z
2023-09-24T14:35:08Z
https://github.com/langchain-ai/langchain/issues/10986
1,910,254,862
10,986
[ "hwchase17", "langchain" ]
### Issue with current documentation: I tried to load LLama2-7b model from huggingface using HuggingFacePipeline.from_model_id but throws a value error: ValueError: The model has been loaded with `accelerate` and therefore cannot be moved to a specific device. Please discard the `device` argument when creating your pipeline object. Here is my Code: compute_dtype=getattr(torch,"float16") bnb_config=BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=compute_dtype, bnb_4bit_use_double_quant=True, ) model_name="meta-llama/Llama-2-7b-hf" llm = HuggingFacePipeline.from_model_id( model_id=model_name, task="text-generation", pipeline_kwargs={ "return_full_text":True, # langchain expects the full text "temperature":0.5, # 'randomness' of outputs, 0.0 is the min and 1.0 the max "max_new_tokens":512, # max number of tokens to generate in the output "repetition_penalty":1.1, "return_dict_in_generate":True, }, model_kwargs={ "quantization_config":bnb_config, } ) Logs: WARNING:langchain.llms.huggingface_pipeline:Device has 1 GPUs available. Provide device={deviceId} to `from_model_id` to use availableGPUs for execution. deviceId is -1 (default) for CPU and can be a positive integer associated with CUDA device id. --------------------------------------------------------------------------- ValueError Traceback (most recent call last) [<ipython-input-9-14e8e7a13801>](https://localhost:8080/#) in <cell line: 1>() ----> 1 llm=prepare_model() [/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py](https://localhost:8080/#) in __init__(self, model, tokenizer, feature_extractor, image_processor, modelcard, framework, task, args_parser, device, torch_dtype, binary_output, **kwargs) 788 789 if hf_device_map is not None and device is not None: --> 790 raise ValueError( 791 "The model has been loaded with `accelerate` and therefore cannot be moved to a specific device. Please " 792 "discard the `device` argument when creating your pipeline object." ValueError: The model has been loaded with `accelerate` and therefore cannot be moved to a specific device. Please discard the `device` argument when creating your pipeline object. Is it problem with Langchain? ### Idea or request for content: _No response_
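A common way around this clash, sketched here rather than prescribed, is to build the transformers pipeline yourself (so no `device` argument is ever injected) and wrap it with `HuggingFacePipeline(pipeline=...)`; `bnb_config` is the quantization config from the snippet above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline

model_name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,  # from the snippet above
    device_map="auto",               # let accelerate place the weights
)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.5,
    repetition_penalty=1.1,
    return_full_text=True,
)

llm = HuggingFacePipeline(pipeline=pipe)
```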
DOC: HuggingFacePipeline.from_model_id throws Value error
https://api.github.com/repos/langchain-ai/langchain/issues/10985/comments
6
2023-09-24T13:19:09Z
2024-05-17T16:06:38Z
https://github.com/langchain-ai/langchain/issues/10985
1,910,232,099
10,985
[ "hwchase17", "langchain" ]
Hi. I'm using [Vercel's ai-chatbot template](https://github.com/vercel-labs/ai-chatbot) to prototype my application. I was wondering how I can convert the chat api call👇🏾 to use LangChain. I've been able to find examples that work, but I don't know how to have it so that I still use Vercel's KV database with LangChain. Here's the code I would like to modify. Thanks. ```Javascript // app/api/chat/route.ts import { kv } from '@vercel/kv' import { OpenAIStream, StreamingTextResponse } from 'ai' import { Configuration, OpenAIApi } from 'openai-edge' import { auth } from '@/auth' import { nanoid } from '@/lib/utils' export const runtime = 'edge' const configuration = new Configuration({ apiKey: process.env.OPENAI_API_KEY }) const openai = new OpenAIApi(configuration) export async function POST(req: Request) { const json = await req.json() const { messages, previewToken } = json const userId = (await auth())?.user.id if (!userId) { return new Response('Unauthorized', { status: 401 }) } if (previewToken) { configuration.apiKey = previewToken } const res = await openai.createChatCompletion({ model: 'gpt-3.5-turbo', messages, temperature: 0.7, stream: true }) const stream = OpenAIStream(res, { async onCompletion(completion) { const title = json.messages[0].content.substring(0, 100) const id = json.id ?? nanoid() const createdAt = Date.now() const path = `/chat/${id}` const payload = { id, title, userId, createdAt, path, messages: [ ...messages, { content: completion, role: 'assistant' } ] } await kv.hmset(`chat:${id}`, payload) await kv.zadd(`user:chat:${userId}`, { score: createdAt, member: `chat:${id}` }) } }) return new StreamingTextResponse(stream) } ```
I want to use LangChain in Vercel's ai-chatbot template
https://api.github.com/repos/langchain-ai/langchain/issues/10982/comments
4
2023-09-24T05:34:50Z
2024-01-30T00:37:09Z
https://github.com/langchain-ai/langchain/issues/10982
1,910,108,891
10,982
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. Hi, I'm using the SitemapLoader as a document loader, and I want to filter on the lastmod metadata so that only pages last modified in 2023 are kept, nothing else. Is that possible? Here is my code: `# fixes a bug with asyncio and jupyter import nest_asyncio nest_asyncio.apply() from langchain.document_loaders.sitemap import SitemapLoader loader = SitemapLoader( "https://www.example.ca/sitemap.xml", filter_urls=["https://www.example.ca/en/"] ) docs = loader.load()` ### Suggestion: _No response_
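There is no built-in lastmod filter that I know of, but the loader surfaces each sitemap entry's metadata, so a hedged post-filter works, assuming the sitemap actually publishes `lastmod` and the key shows up in `doc.metadata`:

```python
docs = loader.load()

# Keep only pages whose sitemap lastmod falls in 2023 (key name assumed; print
# docs[0].metadata once to confirm what your sitemap exposes).
docs_2023 = [d for d in docs if str(d.metadata.get("lastmod", "")).startswith("2023")]
```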
Issue: I want to load SitemapLoader but only 2023 lastmod pages.
https://api.github.com/repos/langchain-ai/langchain/issues/10981/comments
4
2023-09-24T03:16:22Z
2024-01-30T00:43:48Z
https://github.com/langchain-ai/langchain/issues/10981
1,910,074,970
10,981
[ "hwchase17", "langchain" ]
### System Info Python==3.11.5 langchain==0.0.300 llama_cpp_python==0.2.6 chromadb==0.4.12 Running on Windows and on CPU ### Who can help? @agola11 @hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Embeddings can't be stored in a chroma/FAISS vectorstore using llama-cpp-python. I found that embedding the contents of very very simple websites, such as "http://example.org" work fine. Here is the code I am executing in my notebook: ``` from langchain.document_loaders import WebBaseLoader from langchain.embeddings import LlamaCppEmbeddings from langchain.vectorstores import Chroma llama = LlamaCppEmbeddings(model_path="../models/openorca_stx.gguf") loader = WebBaseLoader("https://www.bbc.com/") pages = loader.load() vectordb = Chroma.from_documents( documents=pages, embedding=llama, persist_directory='../data/vectorstores/' ) ``` Here is the Traceback: ``` ValueError Traceback (most recent call last) [p:\git_repos\langchain-test\src_langchain\04_vectorstore.ipynb](file:///P:/git_repos/langchain-test/src_langchain/04_vectorstore.ipynb) Cell 8 line 1 [7](vscode-notebook-cell:/p%3A/git_repos/langchain-test/src_langchain/04_vectorstore.ipynb#X12sZmlsZQ%3D%3D?line=6) loader = WebBaseLoader("https://www.bbc.com/") [8](vscode-notebook-cell:/p%3A/git_repos/langchain-test/src_langchain/04_vectorstore.ipynb#X12sZmlsZQ%3D%3D?line=7) pages = loader.load() ---> [10](vscode-notebook-cell:/p%3A/git_repos/langchain-test/src_langchain/04_vectorstore.ipynb#X12sZmlsZQ%3D%3D?line=9) vectordb = Chroma.from_documents( [11](vscode-notebook-cell:/p%3A/git_repos/langchain-test/src_langchain/04_vectorstore.ipynb#X12sZmlsZQ%3D%3D?line=10) documents=pages, [12](vscode-notebook-cell:/p%3A/git_repos/langchain-test/src_langchain/04_vectorstore.ipynb#X12sZmlsZQ%3D%3D?line=11) embedding=llama, [13](vscode-notebook-cell:/p%3A/git_repos/langchain-test/src_langchain/04_vectorstore.ipynb#X12sZmlsZQ%3D%3D?line=12) persist_directory='../data/vectorstores/' [14](vscode-notebook-cell:/p%3A/git_repos/langchain-test/src_langchain/04_vectorstore.ipynb#X12sZmlsZQ%3D%3D?line=13) ) File [d:\miniconda3\envs\langchain\Lib\site-packages\langchain\vectorstores\chroma.py:646](file:///D:/miniconda3/envs/langchain/Lib/site-packages/langchain/vectorstores/chroma.py:646), in Chroma.from_documents(cls, documents, embedding, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs) 644 texts = [doc.page_content for doc in documents] 645 metadatas = [doc.metadata for doc in documents] --> 646 return cls.from_texts( 647 texts=texts, 648 embedding=embedding, 649 metadatas=metadatas, 650 ids=ids, 651 collection_name=collection_name, 652 persist_directory=persist_directory, 653 client_settings=client_settings, 654 client=client, 655 collection_metadata=collection_metadata, 656 **kwargs, ... 
--> 510 self.input_ids[self.n_tokens : self.n_tokens + n_tokens] = batch 511 # Save logits 512 rows = n_tokens if self.params.logits_all else 1 ValueError: could not broadcast input array from shape (8,) into shape (0,) ``` ### Expected behavior I expect this function ` Chroma.from_documents( documents=pages, embedding=llama, persist_directory='../data/vectorstores/' ) ` to create document embeddings and store them in the persistent chromadb
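One thing worth ruling out first (a guess, since the assert fires inside llama.cpp): whole web pages can exceed the embedding model's context. Splitting before embedding, and setting `n_ctx`/`n_batch` explicitly, is a hedged variant of the same script:

```python
from langchain.document_loaders import WebBaseLoader
from langchain.embeddings import LlamaCppEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

llama = LlamaCppEmbeddings(
    model_path="../models/openorca_stx.gguf",
    n_ctx=2048,    # make the context window explicit
    n_batch=512,   # batch size must be > 0 and <= n_ctx
)

pages = WebBaseLoader("https://www.bbc.com/").load()

# Keep each chunk comfortably below the embedding context window.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
splits = splitter.split_documents(pages)

vectordb = Chroma.from_documents(
    documents=splits,
    embedding=llama,
    persist_directory="../data/vectorstores/",
)
```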
cant store embeddings from llama_cpp_python
https://api.github.com/repos/langchain-ai/langchain/issues/10978/comments
3
2023-09-23T21:29:01Z
2024-02-08T16:24:11Z
https://github.com/langchain-ai/langchain/issues/10978
1,910,006,050
10,978
[ "hwchase17", "langchain" ]
### System Info Name: langchain Version: 0.0.300 Summary: Building applications with LLMs through composability Home-page: https://github.com/langchain-ai/langchain Author: None Author-email: None License: MIT Location: /anaconda/envs/azureml_py38/lib/python3.8/site-packages Requires: requests, jsonpatch, dataclasses-json, numpy, tenacity, numexpr, aiohttp, langsmith, anyio, SQLAlchemy, pydantic, async-timeout, PyYAML Required-by: llama-index ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` from langchain.llms import HuggingFaceTextGenInference llm = HuggingFaceTextGenInference( inference_server_url="http://x.x.x.x:8080/", max_new_tokens=512, top_k=10, top_p=0.95, typical_p=0.95, temperature=0.6, # repetition_penalty=1.1, ) from langchain.llms import OpenAI from langchain.agents import initialize_agent, Tool from langchain.agents import AgentType from langchain.agents.react.base import DocstoreExplorer docstore = DocstoreExplorer(vectordb) tools = [ Tool( name="Search", func=docstore.search, description="useful for when you need to ask with search", ), Tool( name="Lookup", func=docstore.lookup, description="useful for when you need to ask with lookup", ), ] react = initialize_agent(tools, model, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True) react.run('Punishment for drinking and driving') ``` Error: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /home/azureuser/cloudfiles/code/Users/Tejasw/langchain_retreival.ipynb Cell 34 line 1 ----> 1 react.run('Punishment for drinking and driving') File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/langchain/chains/base.py:487, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs) 485 if len(args) != 1: 486 raise ValueError("`run` supports only one positional argument.") --> 487 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[ 488 _output_key 489 ] 491 if kwargs and not args: 492 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[ 493 _output_key 494 ] File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/langchain/chains/base.py:292, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info) 290 except BaseException as e: 291 run_manager.on_chain_error(e) --> 292 raise e 293 run_manager.on_chain_end(outputs) 294 final_outputs: Dict[str, Any] = self.prep_outputs( 295 inputs, outputs, return_only_outputs 296 ) File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/langchain/chains/base.py:286, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info) 279 run_manager = callback_manager.on_chain_start( 280 dumpd(self), 281 inputs, 282 name=run_name, 283 ) 284 try: 285 outputs = ( --> 286 self._call(inputs, run_manager=run_manager) 287 if new_arg_supported 288 else self._call(inputs) 289 ) 290 except BaseException as e: 291 run_manager.on_chain_error(e) File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/langchain/agents/agent.py:1122, in AgentExecutor._call(self, inputs, run_manager) 
1120 # We now enter the agent loop (until it returns something). 1121 while self._should_continue(iterations, time_elapsed): -> 1122 next_step_output = self._take_next_step( 1123 name_to_tool_map, 1124 color_mapping, 1125 inputs, 1126 intermediate_steps, 1127 run_manager=run_manager, 1128 ) 1129 if isinstance(next_step_output, AgentFinish): 1130 return self._return( 1131 next_step_output, intermediate_steps, run_manager=run_manager 1132 ) File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/langchain/agents/agent.py:977, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager) 975 tool_run_kwargs["llm_prefix"] = "" 976 # We then call the tool on the tool input to get an observation --> 977 observation = tool.run( 978 agent_action.tool_input, 979 verbose=self.verbose, 980 color=color, 981 callbacks=run_manager.get_child() if run_manager else None, 982 **tool_run_kwargs, 983 ) 984 else: 985 tool_run_kwargs = self.agent.tool_run_logging_kwargs() File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/langchain/tools/base.py:360, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs) 358 except (Exception, KeyboardInterrupt) as e: 359 run_manager.on_tool_error(e) --> 360 raise e 361 else: 362 run_manager.on_tool_end( 363 str(observation), color=color, name=self.name, **kwargs 364 ) File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/langchain/tools/base.py:332, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs) 329 try: 330 tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input) 331 observation = ( --> 332 self._run(*tool_args, run_manager=run_manager, **tool_kwargs) 333 if new_arg_supported 334 else self._run(*tool_args, **tool_kwargs) 335 ) 336 except ToolException as e: 337 if not self.handle_tool_error: File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/langchain/tools/base.py:511, in Tool._run(self, run_manager, *args, **kwargs) 502 if self.func: 503 new_argument_supported = signature(self.func).parameters.get("callbacks") 504 return ( 505 self.func( 506 *args, 507 callbacks=run_manager.get_child() if run_manager else None, 508 **kwargs, 509 ) 510 if new_argument_supported --> 511 else self.func(*args, **kwargs) 512 ) 513 raise NotImplementedError("Tool does not support sync") File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/langchain/agents/react/base.py:77, in DocstoreExplorer.search(self, term) 75 def search(self, term: str) -> str: 76 """Search for a term in the docstore, and if found save.""" ---> 77 result = self.docstore.search(term) 78 if isinstance(result, Document): 79 self.document = result TypeError: search() missing 1 required positional argument: 'search_type' ``` ### Expected behavior Should successfully execute.
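Separate from the agent loop itself, `DocstoreExplorer` expects a `Docstore` such as the Wikipedia docstore, not a Chroma/FAISS vector store, which is why `search()` ends up hitting the vector store's own `search(query, search_type)` signature. A hedged sketch of tools that query the vector store directly (`vectordb` and `llm` as defined above):

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import RetrievalQA

qa = RetrievalQA.from_chain_type(
    llm=llm, chain_type="stuff", retriever=vectordb.as_retriever()
)

tools = [
    Tool(
        name="Search",
        func=qa.run,
        description="useful for answering questions from the indexed documents",
    ),
    Tool(
        name="Lookup",
        # Raw similarity search as a stand-in for docstore lookup (illustrative).
        func=lambda term: "\n\n".join(
            d.page_content for d in vectordb.similarity_search(term, k=3)
        ),
        description="useful for looking up a specific term in the documents",
    ),
]

react = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
react.run("Punishment for drinking and driving")
```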
ReAct Agent doesn't execute
https://api.github.com/repos/langchain-ai/langchain/issues/10976/comments
4
2023-09-23T17:12:11Z
2023-12-30T16:04:58Z
https://github.com/langchain-ai/langchain/issues/10976
1,909,939,040
10,976
[ "hwchase17", "langchain" ]
### Feature request The current HuggingFaceEndpoint class supports several tasks like text generation and summarization but lacks support for text classification. This feature request aims to extend the class to handle text classification tasks. ### Motivation I am working on a project and trying to use LangChain for it, but text-classification support seems to be missing. I think it would be a great addition, as it would allow us to chain text-classification tasks as well. ### Your contribution I will make changes to the HuggingFaceEndpoint so that anyone can build a chain with a text-classification task involved. At the moment this will not cover batch classification.
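While the class lacks the task, a hedged interim workaround is a tiny wrapper around the hosted inference endpoint that can be dropped into a chain or used as a tool; the model id, the response shape, and the tool name below are assumptions to adapt:

```python
import requests
from langchain.agents import Tool

HF_TOKEN = "hf_..."  # your token
ENDPOINT = "https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english"

def classify(text: str) -> str:
    """Call a hosted text-classification model and return the top label."""
    resp = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {HF_TOKEN}"},
        json={"inputs": text},
        timeout=30,
    )
    resp.raise_for_status()
    scores = resp.json()[0]          # expected: list of {"label": ..., "score": ...}
    best = max(scores, key=lambda s: s["score"])
    return f'{best["label"]} ({best["score"]:.2f})'

# Usable as a plain function in a chain, or as an agent tool, until native support lands.
classifier_tool = Tool(
    name="SentimentClassifier",
    func=classify,
    description="classifies the sentiment of a piece of text",
)
```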
Extend HuggingFaceEndpoint Class to Support Text Classification Endpoints
https://api.github.com/repos/langchain-ai/langchain/issues/10972/comments
3
2023-09-23T11:59:43Z
2023-12-30T16:05:03Z
https://github.com/langchain-ai/langchain/issues/10972
1,909,843,858
10,972
[ "hwchase17", "langchain" ]
### System Info Doing experiment on google collab using llam2 7b quantized model below is my code agent with memory & i tried A gentExecutor ``` source_text_tool = Tool( name="The Prophet Source Text QA System", func=prophet_qa_chain.run, description="Useful when asked question related to philosophy or The Prophet." ) analysis_text_tool = Tool( name="Other philosophy QA System", func=analysis_qa_chain.run, description="Useful when asked questions related to philosophy or Stoicism or Sikhs" ) prefix = """ You're having a conversation with a human. You're helpful and answering questions to your maximum ability.You have access to the following tools: """ suffix = """Let's Go!" {chat_history} Question: {input} {agent_scratchpad} """ tools = [source_text_tool, analysis_text_tool] prompt = ZeroShotAgent.create_prompt(tools, prefix=prefix, suffix=suffix, input_variables=["input", "chat_history","agent_scratchpad"]) #{chat_history} memory = ConversationBufferMemory(memory_key="chat_history") llm_chain = LLMChain(llm=load_llm(), prompt=prompt) agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True) agent_chain_memory = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, memory=memory, #its without memo verbose=True ) ``` error i im getting ``` > Entering new AgentExecutor chain... --------------------------------------------------------------------------- OutputParserException Traceback (most recent call last) [<ipython-input-31-a44cd2d0a1a6>](https://localhost:8080/#) in <cell line: 1>() ----> 1 agent_chain_memory.run(input="How should I handle my fear of death ?") 7 frames [/usr/local/lib/python3.10/dist-packages/langchain/agents/mrkl/output_parser.py](https://localhost:8080/#) in parse(self, text) 50 51 if not re.search(r"Action\s*\d*\s*:[\s]*(.*?)", text, re.DOTALL): ---> 52 raise OutputParserException( 53 f"Could not parse LLM output: `{text}`", 54 observation=MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE, OutputParserException: Could not parse LLM output: `*Thought:* "I will take a deep breath and reflect on the nature of mortality, considering both the transience of life and the certainty of death." *Action Input:* Choose one of the following options: A) Seek guidance from The Prophet Muhammad's teachings on the afterlife. B) Explore Stoicism or Sikh philosophy for coping mechanisms. C) Utilize a QA system specifically designed to address questions related to philosophy and death. *Observation:* The action taken will determine the final answer provided.` ``` ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` # i used documents to store embedding in chromdadb & using below data analysis_qa_chain = RetrievalQA.from_chain_type( llm=load_llm(), chain_type="map_reduce", retriever=web_db.as_retriever() ) prophet_qa_chain = RetrievalQA.from_chain_type( llm=load_llm(), chain_type="map_reduce", retriever=prophet_db.as_retriever() ) source_text_tool = Tool( name="The Prophet Source Text QA System", func=prophet_qa_chain.run, description="Useful when asked question related to philosophy or The Prophet." 
) analysis_text_tool = Tool( name="Other philosophy QA System", func=analysis_qa_chain.run, description="Useful when asked questions related to philosophy or Stoicism or Sikhs" ) prefix = """ You're having a conversation with a human. You're helpful and answering questions to your maximum ability.You have access to the following tools: """ suffix = """Let's Go!" {chat_history} Question: {input} {agent_scratchpad} """ tools = [source_text_tool, analysis_text_tool] prompt = ZeroShotAgent.create_prompt(tools, prefix=prefix, suffix=suffix, input_variables=["input", "chat_history","agent_scratchpad"]) #{chat_history} memory = ConversationBufferMemory(memory_key="chat_history") llm_chain = LLMChain(llm=load_llm(), prompt=prompt) agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True) agent_chain_memory = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, memory=memory, #its without memo verbose=True ) ``` ``` agent_chain_memory.run(input="How should I handle my fear of death ?") ``` ### Expected behavior im getting op but error also coming with this ``` > Entering new AgentExecutor chain... --------------------------------------------------------------------------- OutputParserException Traceback (most recent call last) [<ipython-input-31-a44cd2d0a1a6>](https://localhost:8080/#) in <cell line: 1>() ----> 1 agent_chain_memory.run(input="How should I handle my fear of death ?") 7 frames [/usr/local/lib/python3.10/dist-packages/langchain/agents/mrkl/output_parser.py](https://localhost:8080/#) in parse(self, text) 50 51 if not re.search(r"Action\s*\d*\s*:[\s]*(.*?)", text, re.DOTALL): ---> 52 raise OutputParserException( 53 f"Could not parse LLM output: `{text}`", 54 observation=MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE, OutputParserException: Could not parse LLM output: `*Thought:* "I will take a deep breath and reflect on the nature of mortality, considering both the transience of life and the certainty of death." *Action Input:* Choose one of the following options: A) Seek guidance from The Prophet Muhammad's teachings on the afterlife. B) Explore Stoicism or Sikh philosophy for coping mechanisms. C) Utilize a QA system specifically designed to address questions related to philosophy and death. *Observation:* The action taken will determine the final answer provided.` ```
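Small local models fall out of the ReAct format easily; one mitigation (a sketch, not a guaranteed fix) is to let the executor feed the parsing failure back to the model as an observation and to cap the loop:

```python
agent_chain_memory = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    memory=memory,
    verbose=True,
    handle_parsing_errors=True,  # feed the format error back instead of raising
    max_iterations=5,            # keep a runaway ReAct loop bounded
)
```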
OutputParserException: Could not parse LLM output
https://api.github.com/repos/langchain-ai/langchain/issues/10970/comments
2
2023-09-23T05:24:56Z
2023-12-30T16:05:09Z
https://github.com/langchain-ai/langchain/issues/10970
1,909,720,016
10,970
[ "hwchase17", "langchain" ]
### Feature request Currently, the OpenSearch vector store's `from_texts` function doesn't take `ids` as a parameter, so if the code is run multiple times for the same documents, the data gets duplicated, which results in duplicated responses being returned. To fix this we just need to accept `ids` as an optional parameter in the function, which should then be passed to the add_embeddings function. This is similar to what we are doing in the add_texts function. ### Motivation If the code is run multiple times for the same documents, the data gets duplicated, which results in duplicated responses being returned. ### Your contribution @naveentatikonda, who worked on the integration, can do this.
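Until `from_texts` grows an `ids` parameter, a hedged workaround is to construct the store once and rely on `add_texts` with deterministic ids, so re-runs overwrite instead of duplicating; the endpoint, index name, and `documents` variable here are illustrative:

```python
import hashlib

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

docsearch = OpenSearchVectorSearch(
    opensearch_url="http://localhost:9200",  # illustrative endpoint
    index_name="my-index",
    embedding_function=OpenAIEmbeddings(),
)

texts = [d.page_content for d in documents]
metadatas = [d.metadata for d in documents]
# Deterministic ids: the same text always maps to the same document id.
ids = [hashlib.sha256(t.encode()).hexdigest() for t in texts]

docsearch.add_texts(texts, metadatas=metadatas, ids=ids)
```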
add support for passing of ids in from_texts in OpenSearch vector store
https://api.github.com/repos/langchain-ai/langchain/issues/10967/comments
0
2023-09-22T22:44:13Z
2023-10-10T16:29:37Z
https://github.com/langchain-ai/langchain/issues/10967
1,909,594,912
10,967
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. Hello everyone, I have HuggingFaceTextGenInference API running on my server with **authentication token**. I would like to call this API through Langchain. So, I am using the following code: ``` from langchain.llms import HuggingFaceTextGenInference llm = HuggingFaceTextGenInference( inference_server_url="url", max_new_tokens=512, top_k=10, top_p=0.95, typical_p=0.95, temperature=0.01, repetition_penalty=1.03, ) ``` My question is: how can we pass the **authentication token as argument** in order to call the API. The documentation is not clear on how to do that. ### Suggestion: _No response_
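The class itself doesn't document an auth argument, but the underlying `text_generation.Client` accepts custom headers, so one hedged option is to build that client yourself and attach it (newer releases may also expose a `server_kwargs` field for the same purpose; verify against your installed version):

```python
from text_generation import Client
from langchain.llms import HuggingFaceTextGenInference

token = "your-access-token"
llm = HuggingFaceTextGenInference(
    inference_server_url="https://your-server",
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.01,
    repetition_penalty=1.03,
)

# Replace the auto-created client with one that sends the auth header.
llm.client = Client(
    base_url="https://your-server",
    headers={"Authorization": f"Bearer {token}"},
    timeout=120,
)
```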
Authentication to HuggingFaceTextGenInference API running on custom server
https://api.github.com/repos/langchain-ai/langchain/issues/10965/comments
2
2023-09-22T22:39:06Z
2023-12-30T16:05:14Z
https://github.com/langchain-ai/langchain/issues/10965
1,909,592,111
10,965
[ "hwchase17", "langchain" ]
I just got here from a Twitter link that a colleague sent me (https://twitter.com/llm_sec/status/1668711587287375876?s=20). I'm only a causal observer (not a Langchain user or contributor), but I thought it might be good to drop these links in case you're unaware of the ways that attackers can escape from AST-based Python "sandboxes": https://hacktricks.boitatech.com.br/misc/basic-python/bypass-python-sandboxes https://github.com/mahaloz/ctf-wiki-en/blob/master/docs/pwn/linux/sandbox/python-sandbox-escape.md The strategies in these links aren't exhaustive, but hopefully illustrate that this style of sandboxing makes attacks more complex without defeating them entirely. _Originally posted by @qxcv in https://github.com/langchain-ai/langchain/issues/6003#issuecomment-1590176952_
I just got here from a Twitter link that a colleague sent me (https://twitter.com/llm_sec/status/1668711587287375876?s=20). I'm only a causal observer (not a Langchain user or contributor), but I thought it might be good to drop these links in case you're unaware of the ways that attackers can escape from AST-based Python "sandboxes":
https://api.github.com/repos/langchain-ai/langchain/issues/10962/comments
1
2023-09-22T20:51:22Z
2023-12-30T16:05:19Z
https://github.com/langchain-ai/langchain/issues/10962
1,909,505,147
10,962
[ "hwchase17", "langchain" ]
### System Info Hi @harrisoUnable to load the model logged using mlflow ``` import mlflow logged_model = 'runs:/runid/sql_database_chain' # Load model as a PyFuncModel. loaded_model = mlflow.pyfunc.load_model(logged_model) ``` mlflow -> 2.7.2.dev0 langchain ->0.0.297 langchain-experimental -> 0.0.20 Exception: `ValueError Traceback (most recent call last) File <command-594169214443775>:5 2 logged_model = 'runs:/8998b4fd57c743fe8e0dae9a19ca5155/sql_database_chain' 4 # Load model as a PyFuncModel. ----> 5 loaded_model = mlflow.pyfunc.load_model(logged_model) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-0d48d952-9c91-4092-be60-44e0bb77c25a/lib/python3.10/site-packages/mlflow/pyfunc/__init__.py:637, in load_model(model_uri, suppress_warnings, dst_path) 635 data_path = os.path.join(local_path, conf[DATA]) if (DATA in conf) else local_path 636 try: --> 637 model_impl = importlib.import_module(conf[MAIN])._load_pyfunc(data_path) 638 except ModuleNotFoundError as e: 639 if conf[MAIN] == _DATABRICKS_FS_LOADER_MODULE: File /local_disk0/.ephemeral_nfs/envs/pythonEnv-0d48d952-9c91-4092-be60-44e0bb77c25a/lib/python3.10/site-packages/mlflow/langchain/__init__.py:778, in _load_pyfunc(path) 773 """ 774 Load PyFunc implementation for LangChain. Called by ``pyfunc.load_model``. 775 :param path: Local filesystem path to the MLflow Model with the ``langchain`` flavor. 776 """ 777 wrapper_cls = _TestLangChainWrapper if _MLFLOW_TESTING.get() else _LangChainModelWrapper --> 778 return wrapper_cls(_load_model_from_local_fs(path)) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-0d48d952-9c91-4092-be60-44e0bb77c25a/lib/python3.10/site-packages/mlflow/langchain/__init__.py:807, in _load_model_from_local_fs(local_model_path) 804 model_type = flavor_conf.get(_MODEL_TYPE_KEY) 805 loader_arg = flavor_conf.get(_LOADER_ARG_KEY) --> 807 return _load_model( 808 lc_model_path, 809 model_type, 810 loader_arg, 811 agent_model_path, 812 tools_model_path, 813 agent_primitive_path, 814 loader_fn_path, 815 persist_dir, 816 ) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-0d48d952-9c91-4092-be60-44e0bb77c25a/lib/python3.10/site-packages/mlflow/langchain/__init__.py:660, in _load_model(path, model_type, loader_arg, agent_path, tools_path, agent_primitive_path, loader_fn_path, persist_dir) 658 model = _RetrieverChain.load(path, **kwargs).retriever 659 else: --> 660 model = load_chain(path, **kwargs) 661 elif agent_path is None and tools_path is None: 662 model = load_chain(path) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-0d48d952-9c91-4092-be60-44e0bb77c25a/lib/python3.10/site-packages/langchain/chains/loading.py:595, in load_chain(path, **kwargs) 593 return hub_result 594 else: --> 595 return _load_chain_from_file(path, **kwargs) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-0d48d952-9c91-4092-be60-44e0bb77c25a/lib/python3.10/site-packages/langchain/chains/loading.py:622, in _load_chain_from_file(file, **kwargs) 619 config["memory"] = kwargs.pop("memory") 621 # Load the chain from the config now. 
--> 622 return load_chain_from_config(config, **kwargs) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-0d48d952-9c91-4092-be60-44e0bb77c25a/lib/python3.10/site-packages/langchain/chains/loading.py:585, in load_chain_from_config(config, **kwargs) 582 raise ValueError(f"Loading {config_type} chain not supported") 584 chain_loader = type_to_loader_dict[config_type] --> 585 return chain_loader(config, **kwargs) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-0d48d952-9c91-4092-be60-44e0bb77c25a/lib/python3.10/site-packages/langchain/chains/loading.py:369, in _load_sql_database_chain(config, **kwargs) 367 if "llm_chain" in config: 368 llm_chain_config = config.pop("llm_chain") --> 369 chain = load_chain_from_config(llm_chain_config) 370 return SQLDatabaseChain(llm_chain=chain, database=database, **config) 371 if "llm" in config: File /local_disk0/.ephemeral_nfs/envs/pythonEnv-0d48d952-9c91-4092-be60-44e0bb77c25a/lib/python3.10/site-packages/langchain/chains/loading.py:585, in load_chain_from_config(config, **kwargs) 582 raise ValueError(f"Loading {config_type} chain not supported") 584 chain_loader = type_to_loader_dict[config_type] --> 585 return chain_loader(config, **kwargs) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-0d48d952-9c91-4092-be60-44e0bb77c25a/lib/python3.10/site-packages/langchain/chains/loading.py:41, in _load_llm_chain(config, **kwargs) 39 if "llm" in config: 40 llm_config = config.pop("llm") ---> 41 llm = load_llm_from_config(llm_config) 42 elif "llm_path" in config: 43 llm = load_llm(config.pop("llm_path")) File /local_disk0/.ephemeral_nfs/envs/pythonEnv-0d48d952-9c91-4092-be60-44e0bb77c25a/lib/python3.10/site-packages/langchain/llms/loading.py:19, in load_llm_from_config(config) 16 config_type = config.pop("_type") 18 if config_type not in type_to_cls_dict: ---> 19 raise ValueError(f"Loading {config_type} LLM not supported") 21 llm_cls = type_to_cls_dict[config_type] 22 return llm_cls(**config) ValueError: Loading azure-openai-chat LLM not supported ` ### Who can help? na ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction na ### Expected behavior it should able to load the model
SQLDatabaseChain. It's "azure-openai-chat LLM cannot be loaded
https://api.github.com/repos/langchain-ai/langchain/issues/10961/comments
19
2023-09-22T20:50:31Z
2024-07-21T16:05:40Z
https://github.com/langchain-ai/langchain/issues/10961
1,909,504,385
10,961
[ "hwchase17", "langchain" ]
.
.
https://api.github.com/repos/langchain-ai/langchain/issues/10955/comments
1
2023-09-22T19:37:41Z
2023-09-23T17:12:48Z
https://github.com/langchain-ai/langchain/issues/10955
1,909,429,553
10,955
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. Not sure this is the right place to raise this, but I'm having difficulty with the return value of ConversationalRetrievalChain (called "qa" in my code) when return_source_documents=True. The object returned is (I believe) a Python dictionary with two keys, "answer" and "source_documents". When I execute the following line <output = self.qa({"question": question})> I get an error back that reads ValueError(f"One output key expected, got {outputs.keys()}"). Any help on why this is happening and how I can fix it? ### Suggestion: _No response_
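In case it helps others landing here: that ValueError typically comes from an attached memory (or from calling `run`) once there are two output keys; telling the memory which key to store is the usual shape of the fix. A hedged sketch:

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    memory_key="chat_history",
    output_key="answer",      # store only this key, ignoring source_documents
    return_messages=True,
)

qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=vectorstore.as_retriever(),
    memory=memory,
    return_source_documents=True,
)

output = qa({"question": question})  # __call__ rather than run() for multi-output chains
answer = output["answer"]
sources = output["source_documents"]
```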
Issue: getting ValueError(f"One output key expected, got {outputs.keys()}") in a ConversationalRetrievalChain call
https://api.github.com/repos/langchain-ai/langchain/issues/10953/comments
3
2023-09-22T19:04:44Z
2023-10-06T17:19:41Z
https://github.com/langchain-ai/langchain/issues/10953
1,909,392,916
10,953
[ "hwchase17", "langchain" ]
### GPT4AllEmbeddings() throwing GGML Assert Error I am trying to follow this use case https://python.langchain.com/docs/use_cases/question_answering/how_to/local_retrieval_qa This was working until yesterday; today it's not working. ``` from langchain.vectorstores import Chroma from langchain.embeddings import GPT4AllEmbeddings vectorstore = Chroma.from_documents(documents=all_splits, embedding=GPT4AllEmbeddings()) ``` Console error: ``` Found model file at C:\Users\saura\.cache\gpt4all\ggml-all-MiniLM-L6-v2-f16.bin GGML_ASSERT: C:\Users\circleci\project\gpt4all-backend\llama.cpp-mainline\ggml.c:8027: ggml_nelements(dst) == ggml_nelements(src0) ``` ### Suggestion: _No response_
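While the upstream assert is being sorted out, one hedged stopgap is to embed with the sentence-transformers build of the same MiniLM model via HuggingFaceEmbeddings, which keeps the rest of the snippet unchanged:

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

# Same underlying model family as ggml-all-MiniLM-L6-v2, run via sentence-transformers.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

vectorstore = Chroma.from_documents(documents=all_splits, embedding=embeddings)
```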
Issue: GPT4AllEmbeddings() throwing GGML Assert Error
https://api.github.com/repos/langchain-ai/langchain/issues/10949/comments
2
2023-09-22T18:04:19Z
2023-12-30T16:05:24Z
https://github.com/langchain-ai/langchain/issues/10949
1,909,320,931
10,949
[ "hwchase17", "langchain" ]
### System Info Using langchain==0.0.299 in Ubuntu 22.04.1 LTS (GNU/Linux 5.19.0-1028-aws x86_64). I get the following error: pydantic.errors.ConfigError: duplicate validator function "langchain.document_loaders.embaas.BaseEmbaasLoader.validate_environment"; if this is intended, set `allow_reuse=True` I'm not importing pydantic separately, to make sure that I’m not creating this conflict. Nonetheless, I’m unable to resolve this unless this is updated in your code base. Thank you. ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Simply importing LangChain is causing this error. ### Expected behavior The library should import if there is no dependency or conflict with other libraries.
duplicate validator function
https://api.github.com/repos/langchain-ai/langchain/issues/10943/comments
4
2023-09-22T16:18:55Z
2023-09-22T22:10:44Z
https://github.com/langchain-ai/langchain/issues/10943
1,909,193,378
10,943
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. Ruff replaced isort but it's still in the `pyproject.toml` ; we can remove isort. ### Suggestion: I can do it; just leaving this issue as a to-do
Issue: (Chore) remove dependency on `isort`
https://api.github.com/repos/langchain-ai/langchain/issues/10942/comments
2
2023-09-22T16:00:02Z
2023-09-23T14:24:18Z
https://github.com/langchain-ai/langchain/issues/10942
1,909,165,621
10,942
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. When I set up the local environment and try to run `make coverage`, I get this error: ```bash ; make coverage poetry run pytest --cov \ --cov-config=.coveragerc \ --cov-report xml \ --cov-report term-missing:skip-covered ================================================================================================================ test session starts ================================================================================================================ platform darwin -- Python 3.9.17, pytest-7.4.0, pluggy-1.2.0 rootdir: /Users/cjameson/workspace/cjcjameson/langchain/libs/langchain configfile: pyproject.toml plugins: asyncio-0.20.3, cov-4.1.0, vcr-1.0.2, syrupy-4.2.1, mock-3.11.1, anyio-3.7.1, dotenv-0.5.2, socket-0.6.0 asyncio: mode=strict collected 2832 items / 1 error / 4 skipped INTERNALERROR> Traceback (most recent call last): INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/_pytest/config/__init__.py", line 1611, in getoption INTERNALERROR> val = getattr(self.option, name) INTERNALERROR> AttributeError: 'Namespace' object has no attribute 'only_extended' INTERNALERROR> INTERNALERROR> The above exception was the direct cause of the following exception: INTERNALERROR> INTERNALERROR> Traceback (most recent call last): INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/_pytest/main.py", line 270, in wrap_session INTERNALERROR> session.exitstatus = doit(config, session) or 0 INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/_pytest/main.py", line 323, in _main INTERNALERROR> config.hook.pytest_collection(session=session) INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_hooks.py", line 433, in __call__ INTERNALERROR> return self._hookexec(self.name, self._hookimpls, kwargs, firstresult) INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_manager.py", line 112, in _hookexec INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult) INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_callers.py", line 155, in _multicall INTERNALERROR> return outcome.get_result() INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_result.py", line 108, in get_result INTERNALERROR> raise exc.with_traceback(exc.__traceback__) INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_callers.py", line 80, in _multicall INTERNALERROR> res = hook_impl.function(*args) INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/_pytest/main.py", line 334, in pytest_collection INTERNALERROR> session.perform_collect() INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/_pytest/main.py", line 672, in perform_collect INTERNALERROR> hook.pytest_collection_modifyitems( INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_hooks.py", line 433, in __call__ INTERNALERROR> return self._hookexec(self.name, self._hookimpls, kwargs, firstresult) INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_manager.py", line 112, in _hookexec 
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult) INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_callers.py", line 155, in _multicall INTERNALERROR> return outcome.get_result() INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_result.py", line 108, in get_result INTERNALERROR> raise exc.with_traceback(exc.__traceback__) INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_callers.py", line 80, in _multicall INTERNALERROR> res = hook_impl.function(*args) INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/libs/langchain/tests/unit_tests/conftest.py", line 43, in pytest_collection_modifyitems INTERNALERROR> only_extended = config.getoption("--only-extended") or False INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/_pytest/config/__init__.py", line 1622, in getoption INTERNALERROR> raise ValueError(f"no option named {name!r}") from e INTERNALERROR> ValueError: no option named 'only_extended' ====================================================================================================== 4 skipped, 1 warning, 1 error in 3.80s ======================================================================================================= make: *** [coverage] Error 3 ``` ### Suggestion: It looks like the `pytest_addoption` in `tests/unit_tests/conftest.py` is not being found. This stack-overflow attributes it to pytest not being able to find `conftest.py` files in nested directories. https://stackoverflow.com/a/31526934 The recommendations to create a plugin or move the conftest.py files don't seem palatable, but let me know if maybe that's the thing to do Given the re-organization into `libs/langchain`, that could have messed up pytest local development. I'm curious if/how it works in CI ...
Issue: `make coverage` doesn't work locally
https://api.github.com/repos/langchain-ai/langchain/issues/10941/comments
1
2023-09-22T15:58:24Z
2023-09-23T23:03:55Z
https://github.com/langchain-ai/langchain/issues/10941
1,909,163,325
10,941
[ "hwchase17", "langchain" ]
### System Info When I ask the agent what kind of data the pandas dataframe has, it gives me this error. I wanted to know what the rows indicate and what the LLM could make of the data, e.g. whether it is financial data or sales data. ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Load some data and ask what kind of data the dataframe has. ### Expected behavior It should describe the kind of data it was fed.
OutputParserException: Could not parse LLM output: `The dataframe has a mix of integer and object data types.`
https://api.github.com/repos/langchain-ai/langchain/issues/10929/comments
2
2023-09-22T11:15:19Z
2023-12-30T16:05:29Z
https://github.com/langchain-ai/langchain/issues/10929
1,908,712,574
10,929
[ "hwchase17", "langchain" ]
### System Info Langchain version : 0.0.297 ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The _add_texts_ function accepts metadata as an input parameter, even though it is not utilized in the embedding creation process. ### Expected behavior Metadata should be added. SImilar to what is done for PGVECTOR.
MatchingEngine does not store metadata in embeddings
https://api.github.com/repos/langchain-ai/langchain/issues/10924/comments
3
2023-09-22T08:29:48Z
2023-12-30T16:05:34Z
https://github.com/langchain-ai/langchain/issues/10924
1,908,439,944
10,924
[ "hwchase17", "langchain" ]
### System Info `LangChain==0.0.298`, Python==3.11.5 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python from langchain.embeddings import OllamaEmbeddings _ = OllamaEmbeddings(model="llama2") ``` Running `mypy==1.5.1` on this file: ```none a.py:3:5: error: Missing named argument "mirostat" for "OllamaEmbeddings" [call-arg] _ = OllamaEmbeddings(model="llama2") ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ a.py:3:5: error: Missing named argument "mirostat_eta" for "OllamaEmbeddings" [call-arg] _ = OllamaEmbeddings(model="llama2") ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ a.py:3:5: error: Missing named argument "mirostat_tau" for "OllamaEmbeddings" [call-arg] _ = OllamaEmbeddings(model="llama2") ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ a.py:3:5: error: Missing named argument "num_ctx" for "OllamaEmbeddings" [call-arg] _ = OllamaEmbeddings(model="llama2") ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ a.py:3:5: error: Missing named argument "num_gpu" for "OllamaEmbeddings" [call-arg] _ = OllamaEmbeddings(model="llama2") ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ a.py:3:5: error: Missing named argument "num_thread" for "OllamaEmbeddings" [call-arg] _ = OllamaEmbeddings(model="llama2") ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ a.py:3:5: error: Missing named argument "repeat_last_n" for "OllamaEmbeddings" [call-arg] _ = OllamaEmbeddings(model="llama2") ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ a.py:3:5: error: Missing named argument "repeat_penalty" for "OllamaEmbeddings" [call-arg] _ = OllamaEmbeddings(model="llama2") ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ a.py:3:5: error: Missing named argument "temperature" for "OllamaEmbeddings" [call-arg] _ = OllamaEmbeddings(model="llama2") ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ a.py:3:5: error: Missing named argument "stop" for "OllamaEmbeddings" [call-arg] _ = OllamaEmbeddings(model="llama2") ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ a.py:3:5: error: Missing named argument "tfs_z" for "OllamaEmbeddings" [call-arg] _ = OllamaEmbeddings(model="llama2") ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ a.py:3:5: error: Missing named argument "top_k" for "OllamaEmbeddings" [call-arg] _ = OllamaEmbeddings(model="llama2") ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ a.py:3:5: error: Missing named argument "top_p" for "OllamaEmbeddings" [call-arg] _ = OllamaEmbeddings(model="llama2") ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` It can be fixed by using `= None` as the default for the `Optional` fields ### Expected behavior `mypy` to pass, as there are no type errors present
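For reference, the suggested fix is simply giving each optional field an explicit default; an illustrative pydantic sketch (field list abridged, not the real class definition):

```python
from typing import Optional
from pydantic import BaseModel

# Sketch of the suggested change inside OllamaEmbeddings: Optional fields get an
# explicit default so type checkers stop requiring them at call sites.
class OllamaEmbeddingsFixed(BaseModel):
    model: str = "llama2"
    mirostat: Optional[int] = None
    mirostat_eta: Optional[float] = None
    mirostat_tau: Optional[float] = None
    num_ctx: Optional[int] = None
    temperature: Optional[float] = None
    top_k: Optional[int] = None
    top_p: Optional[float] = None
```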
typing errors in OllamaEmbeddings
https://api.github.com/repos/langchain-ai/langchain/issues/10919/comments
1
2023-09-22T04:07:19Z
2023-12-05T22:36:01Z
https://github.com/langchain-ai/langchain/issues/10919
1,908,146,930
10,919
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. The current default prompt template, named CHAT_PROMPT, in `chains/question_answering/stuff_prompt.py` contains a `SystemMessage` and `HumanMessage` combination, in which the `SystemMessage` looks like this:
```python
"""Use the following pieces of context to answer the users question. 
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------
{context}"""
```
As far as I know, a `SystemMessage` is usually intended to be a set of rules which the LLM should obey in the conversation, such as personality, function, usage, etc. It should stay immutable throughout the whole conversation with users, but here it is transformed significantly and continuously, because the retrieved relevant documents are placed at the end of it. Is there any reason to do so? ### Suggestion: We can put the context in the `HumanMessage` as demonstrated below
```python
system_template = """Use the following pieces of context to answer the users question. 
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------
"""
messages = [
    SystemMessagePromptTemplate.from_template(system_template),
    HumanMessagePromptTemplate.from_template("{context}\n\n{question}"),
]
CHAT_PROMPT = ChatPromptTemplate.from_messages(messages)
```
Issue: unreasonable default template in document retrieval task
https://api.github.com/repos/langchain-ai/langchain/issues/10918/comments
3
2023-09-22T03:45:55Z
2023-12-30T16:05:39Z
https://github.com/langchain-ai/langchain/issues/10918
1,908,133,024
10,918
[ "hwchase17", "langchain" ]
### System Info When I run this command `python3 ./libs/langchain/tests/integration_tests/llms/test_minimax.py`, it always reports this error: `ValueError: "Minimax" object has no field "_client"` After I replaced "_client" with "client", it worked. The version is `langchain==0.0.297`. ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction python3 ./libs/langchain/tests/integration_tests/llms/test_minimax.py ### Expected behavior Don't throw errors
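For context, the message itself comes from pydantic v1 (which langchain 0.0.297 relies on): assigning to an attribute that is not a declared field raises exactly this kind of error. A minimal standalone illustration (not the actual `Minimax` source):

```python
from pydantic import BaseModel  # pydantic v1 behaviour


class Demo(BaseModel):
    client: str = ""


m = Demo()
m.client = "ok"      # declared field: assignment succeeds
m._client = "boom"   # undeclared field: ValueError: "Demo" object has no field "_client"
```

This suggests the `Minimax` class assigns to `self._client` without declaring it (e.g. via `PrivateAttr`), which may be why renaming it to the declared `client` attribute makes the test pass.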
"Minimax" object has no field "_client"
https://api.github.com/repos/langchain-ai/langchain/issues/10917/comments
5
2023-09-22T02:36:07Z
2024-04-14T16:16:42Z
https://github.com/langchain-ai/langchain/issues/10917
1,908,088,497
10,917
[ "hwchase17", "langchain" ]
### System Info Langchain v0.0.298 Python 3.11.4 Pop!OS 22.04 ### Who can help? @hwchase17 @agol ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction When trying to stream `chat-bison`:
```
File "/home/hunmonk/.pyenv/versions/lwe/lib/python3.11/site-packages/langchain/chat_models/base.py", line 204, in stream
    for chunk in self._stream(
File "/home/hunmonk/.pyenv/versions/lwe/lib/python3.11/site-packages/langchain/chat_models/vertexai.py", line 195, in _stream
    responses = chat.send_message_streaming(question.content, **params)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'ChatSession' object has no attribute 'send_message_streaming'
```
As far as I can tell, there is no `send_message_streaming()` method declared *anywhere* in Langchain. When trying to stream `codechat-bison`:
```
File "/home/hunmonk/git/llm-workflow-engine/lwe/backends/api/request.py", line 320, in iterate_streaming_response
    for chunk in self.llm.stream(messages):
File "/home/hunmonk/.pyenv/versions/lwe/lib/python3.11/site-packages/langchain/chat_models/base.py", line 215, in stream
    raise e
File "/home/hunmonk/.pyenv/versions/lwe/lib/python3.11/site-packages/langchain/chat_models/base.py", line 204, in stream
    for chunk in self._stream(
File "/home/hunmonk/.pyenv/versions/lwe/lib/python3.11/site-packages/langchain/chat_models/vertexai.py", line 194, in _stream
    chat = self._start_chat(history, params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hunmonk/.pyenv/versions/lwe/lib/python3.11/site-packages/langchain/chat_models/vertexai.py", line 210, in _start_chat
    return self.client.start_chat(message_history=history.history, **params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: CodeChatModel.start_chat() got an unexpected keyword argument 'message_history'
```
That looks like a Pydantic validation error in the class itself? ### Expected behavior Streaming these models should work.
Crashes when trying to stream Chat Vertex AI
https://api.github.com/repos/langchain-ai/langchain/issues/10916/comments
2
2023-09-22T01:39:08Z
2023-09-22T02:53:07Z
https://github.com/langchain-ai/langchain/issues/10916
1,908,042,597
10,916
[ "hwchase17", "langchain" ]
### Feature request [LARGE LANGUAGE MODELS AS OPTIMIZERS](https://arxiv.org/pdf/2309.03409.pdf) is a promising method for optimizing prompts, i.e. for finding the best prompts. Not sure LangChain is the best tool to apply this method, but hey, it is LangChain :exclamation: ### Motivation For workflows that use stable prompts, this method could significantly improve results. So, there is definitely a valid use case for it. ### Your contribution Not sure I can do it. Sorry.
Optimization by PROmpting (OPRO)
https://api.github.com/repos/langchain-ai/langchain/issues/10913/comments
7
2023-09-22T00:33:14Z
2024-02-05T18:14:03Z
https://github.com/langchain-ai/langchain/issues/10913
1,907,963,041
10,913
[ "hwchase17", "langchain" ]
### System Info macOS Ventura 13.5.2, M1 ### Who can help? @mudler ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction https://github.com/langchain-ai/langchain/blob/v0.0.298/libs/langchain/langchain/embeddings/localai.py#L197 ### Expected behavior Why do the LocalAI embeddings require OpenAI? I think LocalAI's embeddings have no need for OpenAI; LocalAI has a whole embeddings suite: https://localai.io/features/embeddings/ I think it should be directly usable with its [`/embeddings` endpoint](https://github.com/go-skynet/LocalAI/blob/v1.25.0/api/api.go#L190)
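For reference, a minimal sketch of hitting that endpoint directly over plain HTTP (the host, port, model name, and response shape below are assumptions, based on LocalAI exposing an OpenAI-compatible API; adjust them to your deployment):

```python
import requests

resp = requests.post(
    "http://localhost:8080/embeddings",  # hypothetical local LocalAI instance
    json={"model": "text-embedding-ada-002", "input": "A long document to embed"},
    timeout=30,
)
resp.raise_for_status()
embedding = resp.json()["data"][0]["embedding"]
print(len(embedding))
```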
LocalAI embeddings shouldn't require OpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/10912/comments
5
2023-09-22T00:17:24Z
2023-09-22T16:57:48Z
https://github.com/langchain-ai/langchain/issues/10912
1,907,951,529
10,912
[ "hwchase17", "langchain" ]
### System Info v0.0.298 ### Who can help? @hwchase17 @ago ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Create a `ChatAnthropic` class instance and note that both `_identifying_params` and `dict()` do not output the proper attributes, even though `_default_params` does. I'd make a PR to fix this, but upon examining the code, it's not clear to me what is causing the bug.
```python
from langchain.chat_models import ChatAnthropic

chat = ChatAnthropic()

chat._default_params
{'max_tokens_to_sample': 256, 'model': 'claude-2'}

chat._identifying_params
{}

chat.dict()
{'_type': 'anthropic-chat'}
```
### Expected behavior `_identifying_params` and `dict()` should output the correct params.
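A possible direction for a fix, sketched below on a stand-in class (an assumption, not the actual `ChatAnthropic` source): have `_identifying_params` reuse the already-correct `_default_params` instead of returning an empty mapping.

```python
from typing import Any, Dict, Mapping


class ChatAnthropicSketch:  # illustrative stand-in for the real class
    @property
    def _default_params(self) -> Dict[str, Any]:
        return {"max_tokens_to_sample": 256, "model": "claude-2"}

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        # Reuse the default params so dict()/serialization can see them too.
        return {**self._default_params}


print(ChatAnthropicSketch()._identifying_params)
# {'max_tokens_to_sample': 256, 'model': 'claude-2'}
```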
`_identifying_params` broken in `ChatAnthropic` class
https://api.github.com/repos/langchain-ai/langchain/issues/10909/comments
4
2023-09-21T23:28:35Z
2024-03-30T16:05:06Z
https://github.com/langchain-ai/langchain/issues/10909
1,907,914,039
10,909
[ "hwchase17", "langchain" ]
### System Info langchain==0.0.198 MacOS 13.5.2 Ventura Python 3.11 ### Who can help? @agola11 @hwchase17 ### Information - [ ] The official example notebooks/scripts - [x] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [X] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce the behavior: 1. Finetune a model using OpenAI's API 2. Retrieve your model tag in the form: `ft:gpt-3.5-turbo-0613:{your_org}::{snapshot_id}` 3. Instantiate a ChatOpenAI object with the finetuned model:
```
from langchain.chat_models import ChatOpenAI
openai = ChatOpenAI(model_name="ft:gpt-3.5-turbo-0613:{your_org}::{snapshot_id}")
```
4. Load an AgentExecutor from Agent, llm, and tools 5. Execute a query, eg:
```
response = executor.run(input="hello, where is my hat?")
```
The query will execute against the LLM at first, then produce an output like this:
```
WARNING:langchain.chat_models.openai:Warning: model not found. Using cl100k_base encoding.
Traceback (most recent call last):
  File "/Users/wlane/projects/api-navgpt/demo/mapgpt-agent-demo/cli.py", line 59, in <module>
    executor.run(input=query)
  File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 261, in run
    return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 149, in __call__
    final_outputs: Dict[str, Any] = self.prep_outputs(
                                    ^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/base.py", line 213, in prep_outputs
    self.memory.save_context(inputs, outputs)
  File "/opt/homebrew/lib/python3.11/site-packages/langchain/memory/summary_buffer.py", line 60, in save_context
    self.prune()
  File "/opt/homebrew/lib/python3.11/site-packages/langchain/memory/summary_buffer.py", line 65, in prune
    curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 458, in get_num_tokens_from_messages
    raise NotImplementedError(
NotImplementedError: get_num_tokens_from_messages() is not presently implemented for model gpt-3.5-turbo.See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens.
```
### Expected behavior It is expected that the Agent handles the utterance without throwing an error about being unable to map to the right tokenizer. The issue is in `openai.py`, which acknowledges that gpt-3.5-turbo may change over time, and maps the model str to a specific snapshot: "gpt-3.5-turbo-0301". However, the logic does not consider that `model` may be some arbitrary handle containing `...gpt-3.5-turbo-...`, as is the case when you fine tune a model on OpenAI:
```
def _get_encoding_model(self) -> Tuple[str, tiktoken.Encoding]:
    tiktoken_ = _import_tiktoken()
    model = self.model_name
    if model == "gpt-3.5-turbo":
        # gpt-3.5-turbo may change over time.
        # Returning num tokens assuming gpt-3.5-turbo-0301.
model = "gpt-3.5-turbo-0301" ``` When I alter the funtion like this, it works: ``` def _get_encoding_model(self) -> Tuple[str, tiktoken.Encoding]: tiktoken_ = _import_tiktoken() model = self.model_name if "gpt-3.5-turbo" in model: <---------------------------- # gpt-3.5-turbo may change over time. # Returning num tokens assuming gpt-3.5-turbo-0301. model = "gpt-3.5-turbo-0301" ```
Support for openai finetuned models
https://api.github.com/repos/langchain-ai/langchain/issues/10900/comments
3
2023-09-21T18:22:47Z
2024-01-30T00:37:10Z
https://github.com/langchain-ai/langchain/issues/10900
1,907,528,565
10,900
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. https://github.com/langchain-ai/langchain/blob/ebe08412ad1649c33501cc80edffbb908cc0157e/libs/langchain/langchain/chains/combine_documents/reduce.py#L247 In this loop:
```
while num_tokens is not None and num_tokens > _token_max:
    new_result_doc_list = _split_list_of_docs(
        result_docs, length_func, _token_max, **kwargs
    )
    result_docs = []
    for docs in new_result_doc_list:
        new_doc = _collapse_docs(docs, _collapse_docs_func, **kwargs)
        result_docs.append(new_doc)
    num_tokens = length_func(result_docs, **kwargs)
```
If num_tokens does not decrease, or decreases very slowly, the loop will run for many iterations, which takes too much time and incurs a high LLM API cost. Besides, the comment "Combine multiple documents recursively" is misleading: the method actually iterates, it does not recurse. ### Suggestion: I have come up with one solution: if num_tokens decreases by less than 10% after one iteration, truncate each doc in result_docs to 90% of its original length. If you agree with my proposal, I would like to implement this and submit a pull request.
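To make the proposal concrete, here is a sketch of the guard I have in mind (it reuses the helper names from the loop quoted above, so it is not standalone; `Document` is `langchain.schema.Document`, and the 10%/90% thresholds could become parameters):

```python
prev_num_tokens = None
while num_tokens is not None and num_tokens > _token_max:
    new_result_doc_list = _split_list_of_docs(
        result_docs, length_func, _token_max, **kwargs
    )
    result_docs = []
    for docs in new_result_doc_list:
        new_doc = _collapse_docs(docs, _collapse_docs_func, **kwargs)
        result_docs.append(new_doc)
    num_tokens = length_func(result_docs, **kwargs)

    # If the collapse pass shrank the text by less than 10%, force progress
    # by truncating each intermediate document to 90% of its length.
    if prev_num_tokens is not None and num_tokens > 0.9 * prev_num_tokens:
        result_docs = [
            Document(
                page_content=doc.page_content[: int(len(doc.page_content) * 0.9)],
                metadata=doc.metadata,
            )
            for doc in result_docs
        ]
        num_tokens = length_func(result_docs, **kwargs)
    prev_num_tokens = num_tokens
```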
Issue: ReduceDocumentsChain may loop for many iterations
https://api.github.com/repos/langchain-ai/langchain/issues/10888/comments
2
2023-09-21T13:18:28Z
2023-12-28T16:04:52Z
https://github.com/langchain-ai/langchain/issues/10888
1,906,968,615
10,888
[ "hwchase17", "langchain" ]
### System Info Langchain: 0.0.81 Python: 3.11.5 System: Ubuntu Trying to install Langchain and run the basic Quick Start sample code. Importing langchain.chat_models throws a ModuleNotFoundError.
```
$ python3
Python 3.11.5 (main, Aug 25 2023, 07:43:52) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from langchain.llms import OpenAI
>>> from langchain.chat_models import ChatOpenAI
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'langchain.chat_models'
```
Any suggestions? ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Steps to reproduce: 1. pip install langchain[all] 2. run python3 REPL 3. try to import langchain.chat_models ### Expected behavior chat_models import succeeds.
langchain.chat_models import fails
https://api.github.com/repos/langchain-ai/langchain/issues/10887/comments
3
2023-09-21T12:58:28Z
2023-09-22T00:42:08Z
https://github.com/langchain-ai/langchain/issues/10887
1,906,918,594
10,887
[ "hwchase17", "langchain" ]
### System Info I am getting errors while using the GooglePalm class. Have a look at the error below.

File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script exec(code, module.__dict__)
File "/home/jupyter/vertexai-pocs-in-python/contact-centre/palm2-contact-centre-automation.py", line 114, in <module> result = display_row(selected_enquery)
File "/home/jupyter/vertexai-pocs-in-python/contact-centre/palm2-contact-centre-automation.py", line 100, in display_row return overall_chain(customer_enquery)
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 243, in __call__ raise e
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 237, in __call__ self._call(inputs, run_manager=run_manager)
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/sequential.py", line 105, in _call outputs = chain(known_values, return_only_outputs=True, callbacks=callbacks)
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 243, in __call__ raise e
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/base.py", line 237, in __call__ self._call(inputs, run_manager=run_manager)
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/llm.py", line 93, in _call return self.create_outputs(response)[0]
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/llm.py", line 217, in create_outputs result = [
File "/opt/conda/lib/python3.10/site-packages/langchain/chains/llm.py", line 220, in <listcomp> self.output_key: self.output_parser.parse_result(generation),
File "/opt/conda/lib/python3.10/site-packages/langchain/schema/output_parser.py", line 69, in parse_result return self.parse(result[0].text)
IndexError: list index out of range

### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Using an LLM chain. ### Expected behavior The error does not occur with other LLM classes.
Issue with GooglePalm class.
https://api.github.com/repos/langchain-ai/langchain/issues/10886/comments
3
2023-09-21T11:39:15Z
2024-03-26T18:27:17Z
https://github.com/langchain-ai/langchain/issues/10886
1,906,770,938
10,886
[ "hwchase17", "langchain" ]
### System Info langchain==0.0.285 python==3.10.12 Using VSCode Dev Container ### Who can help? @agola11 @hwchase17 ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction When using `FinalStreamingStdOutCallbackHandler` with `AgentType.ZERO_SHOT_REACT_DESCRIPTION`, I can successfully get streaming outputs. However, when switching to `AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION`, the output is returned all at once, and streaming does not seem to work as expected. To Reproduce: Use the following code setup for `AgentType.ZERO_SHOT_REACT_DESCRIPTION`: ``` from langchain.agents import load_tools from langchain.agents import initialize_agent from langchain.agents import AgentType from langchain.callbacks.streaming_stdout_final_only import ( FinalStreamingStdOutCallbackHandler, ) from langchain.llms import OpenAI llm = OpenAI( streaming=True, callbacks=[FinalStreamingStdOutCallbackHandler()], temperature=0 ) tools = load_tools(["wikipedia", "llm-math"], llm=llm) agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False ) agent.run( "It's 2023 now. How many years ago did Konrad Adenauer become Chancellor of Germany." ) ``` This streams the output as expected. Use the following code setup for `AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION`: ``` from langchain.agents import load_tools from langchain.agents import initialize_agent from langchain.agents import AgentType from langchain.memory import ConversationBufferMemory,RedisChatMessageHistory from langchain.callbacks.streaming_stdout_final_only import ( FinalStreamingStdOutCallbackHandler, ) from langchain.llms import OpenAI from loguru import logger import redis import os from dotenv import find_dotenv, load_dotenv # Load environment variables from .env file load_dotenv(find_dotenv("../app/.env")) # Initialize Redis connection redis_host = os.environ.get("REDIS_HOST") redis_port = os.environ.get("REDIS_PORT") if redis_host and redis_port: redis_client = redis.Redis(host=redis_host, port=redis_port) else: logger.error("Redis host or port not set in environment variables") # Function to get or initialize chat history def get_or_init_chat_history(session_id: str, redis_url: str) -> RedisChatMessageHistory: chat_history_data = redis_client.get(session_id) if chat_history_data is None: return RedisChatMessageHistory(session_id, url=redis_url) else: return RedisChatMessageHistory.deserialize(chat_history_data, url=redis_url) # Initialize the chat history for a sample session sample_session_id = '21112' redis_url = f"redis://{redis_host}:{redis_port}" chat_history = get_or_init_chat_history(sample_session_id, redis_url) llm = OpenAI( streaming=True, callbacks=[FinalStreamingStdOutCallbackHandler()], temperature=0 ) # Initialize the memory memory = ConversationBufferMemory( memory_key="chat_history", return_messages=True, chat_memory=chat_history ) tools = load_tools(["wikipedia", "llm-math"], llm=llm) agent = initialize_agent( tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=False, memory=memory ) agent.run( "It's 2023 now. How many years ago did Konrad Adenauer become Chancellor of Germany." 
) ``` This returns the output all at once, without streaming. Expected Behavior: The output should be streamed when using `CHAT_CONVERSATIONAL_REACT_DESCRIPTION` just like it is streamed for `ZERO_SHOT_REACT_DESCRIPTION`. ### Expected behavior The output should be streamed when using `CHAT_CONVERSATIONAL_REACT_DESCRIPTION` just like it is streamed for `ZERO_SHOT_REACT_DESCRIPTION`.
Streaming output doesn't work for `CHAT_CONVERSATIONAL_REACT_DESCRIPTION` with `FinalStreamingStdOutCallbackHandler`
https://api.github.com/repos/langchain-ai/langchain/issues/10885/comments
2
2023-09-21T11:33:30Z
2023-12-28T16:05:02Z
https://github.com/langchain-ai/langchain/issues/10885
1,906,761,048
10,885
[ "hwchase17", "langchain" ]
### System Info Windows (server on Linux), Python, Poetry ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [X] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Langchain has problems working with model snapshots such as gpt-3.5-turbo-0613:
```
File "C:\Users\zabre\PycharmProjects\sales-bot\bot\src\sales_bot\text_handler.py", line 445, in get_response
    memory.prune()
File "C:\Users\zabre\AppData\Local\pypoetry\Cache\virtualenvs\sales-bot-f2_0xw7_-py3.11\Lib\site-packages\langchain\memory\summary_buffer.py", line 65, in prune
    curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zabre\AppData\Local\pypoetry\Cache\virtualenvs\sales-bot-f2_0xw7_-py3.11\Lib\site-packages\langchain\chat_models\openai.py", line 420, in get_num_tokens_from_messages
    raise NotImplementedError(
NotImplementedError: get_num_tokens_from_messages() is not presently implemented for model gpt-3.5-turbo-0613.See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens.
```
For example, when working with gpt-3.5-turbo-0613, it is not able to count the number of tokens, because the model-name check is hard-coded. Please correct the following in `get_num_tokens_from_messages` (langchain\chat_models\openai.py):
```
if model == "gpt-3.5-turbo":
    # gpt-3.5-turbo may change over time.
    # Returning num tokens assuming gpt-3.5-turbo-0301.
    model = "gpt-3.5-turbo-0301"
elif model == "gpt-4":
    # gpt-4 may change over time.
    # Returning num tokens assuming gpt-4-0314.
    model = "gpt-4-0314"
```
Replace it with:
```
if "gpt-3.5-turbo" in model:
    # gpt-3.5-turbo may change over time.
    # Returning num tokens assuming gpt-3.5-turbo-0301.
    model = "gpt-3.5-turbo-0301"
elif "gpt-4" in model:
    # gpt-4 may change over time.
    # Returning num tokens assuming gpt-4-0314.
    model = "gpt-4-0314"
```
### Expected behavior Sorry, because the project is closed source, there is no way to provide the sources, but the problem arises when using such models and when calculating the number of tokens:
```
memory = ConversationSummaryBufferMemory(
    llm=self.analyse_llm,
    ai_prefix="1",
    human_prefix="2",
    max_token_limit=self.max_memory_token_limit,
)
# loaded old memory; checking its length with prune causes an error
memory.prune()
```
Error get_num_tokens_from_messages() with gpt-3.5-turbo-0613
https://api.github.com/repos/langchain-ai/langchain/issues/10884/comments
2
2023-09-21T11:29:34Z
2023-09-21T11:49:45Z
https://github.com/langchain-ai/langchain/issues/10884
1,906,754,123
10,884
[ "hwchase17", "langchain" ]
### Feature request Hi Team, I'm not sure if this is possible: ReAct agent support using Google PaLM 2. Right now, when using PaLM 2 as the supporting model for a ReAct agent with the following code:
```python
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI

# First, let's load the language model we're going to use to control the agent.
llm = ChatOpenAI(model_name='google/palm-2-chat-bison', temperature=0, openai_api_key='...', openai_api_base='https://...')

# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
tools = load_tools(["llm-math"], llm=llm)

# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

# Now let's test it out!
agent.run("What is the result of 1+3")
```
an error will occur:
```
OutputParserException: Parsing LLM output produced both a final answer and a parse-able action:: I need to use the calculator to answer this question
Action: Calculator
Action Input: 1+3
Observation: 4
Thought: I now know the final answer
Final Answer: 4
```
It looks like PaLM doesn't understand the prompt's requirement to use the tool and wait for the result. Are there any solutions to this problem so far? Or is PaLM unable to support this? ### Motivation We need to build a smart scheduler based on PaLM 2. ### Your contribution If there is no ready-made solution in LangChain, I will try to see whether I can use PaLM's dialogue capabilities to achieve this requirement. I also hope to get your feedback and judge whether this is feasible based on experience.
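For what it's worth, one mitigation I may try first (just a sketch; `handle_parsing_errors` is a generic `initialize_agent` option, not something PaLM-specific, and it only lets the agent recover from the malformed output rather than preventing it):

```python
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,  # feed the parsing error back to the model and retry
)
```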
ReAct agent support using Google PaLM 2
https://api.github.com/repos/langchain-ai/langchain/issues/10883/comments
3
2023-09-21T10:41:49Z
2023-12-28T16:05:07Z
https://github.com/langchain-ai/langchain/issues/10883
1,906,662,893
10,883
[ "hwchase17", "langchain" ]
### Feature request It should be possible to modify input keys, output keys and memory keys for all chains. Currently, this does not seem to be the case. For instance, while all these keys can be modified for the ConversationChain, not all of them can be modified for the ConversationalRetrievalChain. **ConversationChain** In the ConversationChain, I can freely rename the memory_key, input_key and output_key as the example below shows. This is as expected. ``` from langchain.chat_models import ChatOpenAI from langchain.chains import ConversationChain from langchain.memory import ConversationBufferMemory from langchain.prompts.prompt import PromptTemplate # Keys can be freely adjusted memory_key = "foo" input_key = "bar" output_key = "baz" # Initialize the context with a prompt template template = r"""The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: {} Human: {} AI:""".format("{" + memory_key + "}", "{" + input_key + "}") prompt = PromptTemplate( input_variables=[memory_key, input_key], template=template ) # Initialize memory to store conversation history memory = ConversationBufferMemory( memory_key=memory_key, input_key=input_key, output_key=output_key ) # Initialize large language model model_kwargs = {"engine": "gpt-35-turbo-0613", "headers": {"x-api-key": os.environ["OPENAI_API_KEY"]}} llm = ChatOpenAI(model_kwargs=model_kwargs, temperature=0.0) # Initialize and return conversation chain chain = ConversationChain( llm=llm, memory=memory, prompt=prompt, verbose=True, input_key=input_key, output_key=output_key ) # Example call user_input = "What is GPT in 5 words?" chain({input_key: user_input})[output_key] ``` **ConversationalRetrievalChain** If I try to pass an `input_key` (or `input_keys`) argument to `ConversationalRetrievalChain.from_llm`, then I get the following error: ``` ValidationError: 1 validation error for ConversationalRetrievalChain input_key extra fields not permitted (type=value_error.extra) ``` It seems that with the current implementation, one has to use the input keys ["question", "chat_history"] (see [code here](https://github.com/langchain-ai/langchain/blob/777b33b873b50339659963f2b5aba17b69412f5a/libs/langchain/langchain/chains/conversational_retrieval/base.py#L86)). However, the ConversationalRetrievalChain does, thankfully, accept the input argument `output_key`. I.e., the output_key can be modified but the input_keys cannot. ### Motivation Provide consistency and ease of use across all chains. Currently, users may face difficulty in modifying input keys, output keys and memory keys for the ConversationalRetrievalChain which could result in inconsistency across the chains. By allowing modifications to these keys for all chains, users can have a consistent and standardized approach to building chatbots. ### Your contribution TBD
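In the meantime, a thin wrapper is one possible workaround (a sketch only, based on the fixed "question"/"chat_history" keys noted above and assuming the default "answer" output key; the custom key names are just examples):

```python
from typing import Any, Dict


def run_retrieval_chain(chain, inputs: Dict[str, Any],
                        input_key: str = "bar", memory_key: str = "foo") -> str:
    """Map custom key names onto the keys ConversationalRetrievalChain expects."""
    mapped = {"question": inputs[input_key], "chat_history": inputs[memory_key]}
    return chain(mapped)["answer"]
```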
Possibility to rename input keys, output keys and memory keys for all chains consistently
https://api.github.com/repos/langchain-ai/langchain/issues/10882/comments
1
2023-09-21T10:22:11Z
2023-12-28T16:05:12Z
https://github.com/langchain-ai/langchain/issues/10882
1,906,629,969
10,882
[ "hwchase17", "langchain" ]
### System Info BigQuery table is not able to load. Here is the code: ```python from langchain.document_loaders import BigQueryLoader BASE_QUERY = """ SELECT * FROM arindam-2banerjee-0525-pimy.sf_transport.bikeshare_trips """ loader = BigQueryLoader(BASE_QUERY) loader.load() ``` Error message: ``` --------------------------------------------------------------------------- NotFound Traceback (most recent call last) [<ipython-input-24-4ebe0f70b252>](https://localhost:8080/#) in <cell line: 9>() 7 loader = BigQueryLoader(BASE_QUERY) 8 ----> 9 loader.load() 9 frames [~/.local/lib/python3.10/site-packages/langchain/document_loaders/bigquery.py](https://localhost:8080/#) in load(self) 59 60 bq_client = bigquery.Client(credentials=self.credentials, project=self.project) ---> 61 query_result = bq_client.query(self.query).result() 62 docs: List[Document] = [] 63 [/usr/local/lib/python3.10/dist-packages/google/cloud/bigquery/client.py](https://localhost:8080/#) in query(self, query, job_config, job_id, job_id_prefix, location, project, retry, timeout, job_retry, api_method) 3401 ) 3402 elif api_method == enums.QueryApiMethod.INSERT: -> 3403 return _job_helpers.query_jobs_insert( 3404 self, 3405 query, [/usr/local/lib/python3.10/dist-packages/google/cloud/bigquery/_job_helpers.py](https://localhost:8080/#) in query_jobs_insert(client, query, job_config, job_id, job_id_prefix, location, project, retry, timeout, job_retry) 112 return query_job 113 --> 114 future = do_query() 115 # The future might be in a failed state now, but if it's 116 # unrecoverable, we'll find out when we ask for it's result, at which [/usr/local/lib/python3.10/dist-packages/google/cloud/bigquery/_job_helpers.py](https://localhost:8080/#) in do_query() 89 90 try: ---> 91 query_job._begin(retry=retry, timeout=timeout) 92 except core_exceptions.Conflict as create_exc: 93 # The thought is if someone is providing their own job IDs and they get [/usr/local/lib/python3.10/dist-packages/google/cloud/bigquery/job/query.py](https://localhost:8080/#) in _begin(self, client, retry, timeout) 1308 1309 try: -> 1310 super(QueryJob, self)._begin(client=client, retry=retry, timeout=timeout) 1311 except exceptions.GoogleAPICallError as exc: 1312 exc.message = _EXCEPTION_FOOTER_TEMPLATE.format( [/usr/local/lib/python3.10/dist-packages/google/cloud/bigquery/job/base.py](https://localhost:8080/#) in _begin(self, client, retry, timeout) 691 # job has an ID. 
692 span_attributes = {"path": path} --> 693 api_response = client._call_api( 694 retry, 695 span_name="BigQuery.job.begin", [/usr/local/lib/python3.10/dist-packages/google/cloud/bigquery/client.py](https://localhost:8080/#) in _call_api(self, retry, span_name, span_attributes, job_ref, headers, **kwargs) 811 name=span_name, attributes=span_attributes, client=self, job_ref=job_ref 812 ): --> 813 return call() 814 815 return call() [/usr/local/lib/python3.10/dist-packages/google/api_core/retry.py](https://localhost:8080/#) in retry_wrapped_func(*args, **kwargs) 347 self._initial, self._maximum, multiplier=self._multiplier 348 ) --> 349 return retry_target( 350 target, 351 self._predicate, [/usr/local/lib/python3.10/dist-packages/google/api_core/retry.py](https://localhost:8080/#) in retry_target(target, predicate, sleep_generator, timeout, on_error, **kwargs) 189 for sleep in sleep_generator: 190 try: --> 191 return target() 192 193 # pylint: disable=broad-except [/usr/local/lib/python3.10/dist-packages/google/cloud/_http/__init__.py](https://localhost:8080/#) in api_request(self, method, path, query_params, data, content_type, headers, api_base_url, api_version, expect_json, _target_object, timeout, extra_api_info) 492 493 if not 200 <= response.status_code < 300: --> 494 raise exceptions.from_http_response(response) 495 496 if expect_json and response.content: NotFound: 404 POST https://bigquery.googleapis.com/bigquery/v2/projects//jobs?prettyPrint=false: Request couldn't be served. Location: None Job ID: 77604d22-bf1b-4dca-b6f7-ab553ac6aab8 ``` ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The code mentioned above. ### Expected behavior Load a BigQuery query with one document per row.
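A side note on the trace: the failing request URL contains an empty project segment (`.../projects//jobs`), and the loader builds its client as `bigquery.Client(credentials=self.credentials, project=self.project)`, so a likely workaround is to pass the project explicitly (the project ID below is taken from the query above; adjust as needed):

```python
from langchain.document_loaders import BigQueryLoader

BASE_QUERY = """
SELECT * FROM `arindam-2banerjee-0525-pimy.sf_transport.bikeshare_trips`
"""

# Passing the billing project explicitly avoids the empty project in the job URL.
loader = BigQueryLoader(BASE_QUERY, project="arindam-2banerjee-0525-pimy")
docs = loader.load()
```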
BigQuery - document loader, load() function throws error
https://api.github.com/repos/langchain-ai/langchain/issues/10879/comments
2
2023-09-21T08:45:23Z
2023-10-13T15:24:30Z
https://github.com/langchain-ai/langchain/issues/10879
1,906,441,411
10,879
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I am using langchain.document_loaders.image.UnstructuredImageLoader to load data from an image with a table on it. I chose PaddleOCR by setting os.environ["TABLE_OCR"] = "paddle". However, I couldn't find a place to change the detection language (the default is English) to another language. Could you please let me know how to do it? I have been stuck on this for a while. ### Suggestion: _No response_
Issue: How to set the language (default is English) when using UnstructuredImageLoader with PaddleOCR
https://api.github.com/repos/langchain-ai/langchain/issues/10876/comments
4
2023-09-21T07:05:38Z
2023-12-30T16:05:44Z
https://github.com/langchain-ai/langchain/issues/10876
1,906,261,175
10,876