Dataset columns:
- issue_owner_repo: list (length 2)
- issue_body: string (0 to 261k characters)
- issue_title: string (1 to 925 characters)
- issue_comments_url: string (56 to 81 characters)
- issue_comments_count: int64 (0 to 2.5k)
- issue_created_at: string (20 characters)
- issue_updated_at: string (20 characters)
- issue_html_url: string (37 to 62 characters)
- issue_github_id: int64 (387k to 2.46B)
- issue_number: int64 (1 to 127k)
[ "hwchase17", "langchain" ]
### System Info

Version 0.0.266, all

### Who can help?

@hwchase17

### Information

- [X] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

If you specify input variables in your `final_prompt`, they are not actually registered as input variables on the prompt, whereas input variables on the pipeline prompts are.

```python
from langchain.prompts import PipelinePromptTemplate, PromptTemplate

pipeline_prompt = PipelinePromptTemplate(
    final_prompt=PromptTemplate.from_template("{foo} is a var"),
    pipeline_prompts=[]
)
pipeline_prompt.input_variables  # returns []
```

vs

```python
pipeline_prompt = PipelinePromptTemplate(
    final_prompt=PromptTemplate.from_template("final prompt + {other}"),
    pipeline_prompts=[
        ("other", PromptTemplate.from_template("{foo}"))
    ]
)
pipeline_prompt.input_variables  # returns ["foo"]
```

### Expected behavior

It looks like this case is simply not handled in the code here: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/prompts/pipeline.py#L35

I'm assuming that input variables should also be extracted from the final prompt, but _after_ the pipeline prompts are injected, right? A sketch of that logic is below.
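For illustration only, here is a minimal sketch of how the missing extraction could work. This is my own assumption about the fix, not code from the LangChain repository, and the helper name `expected_input_variables` is hypothetical.

```python
from typing import List, Sequence, Tuple

from langchain.prompts import PromptTemplate


def expected_input_variables(
    final_prompt: PromptTemplate,
    pipeline_prompts: Sequence[Tuple[str, PromptTemplate]],
) -> List[str]:
    """Variables the pipeline still needs from the caller: everything the
    pipeline prompts and the final prompt reference, minus the names that
    pipeline steps themselves produce."""
    produced = {name for name, _ in pipeline_prompts}
    needed = set(final_prompt.input_variables)
    for _, prompt in pipeline_prompts:
        needed.update(prompt.input_variables)
    return sorted(needed - produced)


# With the first example above this would return ["foo"] instead of [].
print(expected_input_variables(PromptTemplate.from_template("{foo} is a var"), []))
```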
Input variables in PipelinePromptTemplate's final_prompt are not extracted as input variables
https://api.github.com/repos/langchain-ai/langchain/issues/9423/comments
5
2023-08-17T20:54:45Z
2024-03-20T16:05:23Z
https://github.com/langchain-ai/langchain/issues/9423
1,855,698,506
9,423
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I do not have access to huggingface.co in my environment, but I do have the Instructor model (hkunlp/instructor-large) saved locally. How do I utilize the langchain function HuggingFaceInstructEmbeddings to point to a local model? I tried the below code but received an error: ``` from langchain.embeddings import HuggingFaceInstructEmbeddings model_name = "<local_filepath>/hkunlp/instructor-large" model_kwargs = {'device': 'cpu'} encode_kwargs = {'normalize_embeddings': True} hf = HuggingFaceInstructEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs ) embeddings = HuggingFaceInstructEmbeddings( query_instruction="Represent the query for retrieval: " ) ``` Error: HfHubHTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/models/hkunlp/instructor-large ### Suggestion: _No response_
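A possible workaround, assuming the local directory contains the full downloaded model files (config, weights, tokenizer): pass the local path and the query instruction to a single constructor, so the default hub model name is never used. The path below is a placeholder.

```python
from langchain.embeddings import HuggingFaceInstructEmbeddings

# Placeholder: directory that already holds the hkunlp/instructor-large files.
local_model_dir = "/models/hkunlp/instructor-large"

embeddings = HuggingFaceInstructEmbeddings(
    model_name=local_model_dir,
    model_kwargs={"device": "cpu"},
    encode_kwargs={"normalize_embeddings": True},
    query_instruction="Represent the query for retrieval: ",
)
vector = embeddings.embed_query("example query")
```

Note that the 403 in the report comes from the second constructor call in the snippet, which only sets `query_instruction` and therefore falls back to the default hub model name and tries to reach huggingface.co.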
Issue: How can I load text embeddings from a local model?
https://api.github.com/repos/langchain-ai/langchain/issues/9421/comments
4
2023-08-17T20:12:41Z
2024-03-25T13:59:21Z
https://github.com/langchain-ai/langchain/issues/9421
1,855,641,091
9,421
[ "hwchase17", "langchain" ]
### System Info Windows 10 Python 3.9.7 langchain 0.0.236 ### Who can help? @hwchase17 @agola11 I have problem to make work together MultiPromptChain and AgentExecutor. Problem actually is trivial MultiPromptChain.destination_chains has type of Mapping[str, LLMChain] and AgentExecutor did not fit in this definition. Second AgentExecutor has output_keys = ["output"] but MultiPromptChain expects ["text"]. My workaround is to make custom class class CustomRouterChain(MultiRouteChain): @property def output_keys(self) -> List[str]: return ["output"] But it would be more user friendly if these two classes could work together out of the box according to your tutorials. Thanks. ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.chains.router import MultiPromptChain from langchain.chains import ConversationChain from langchain.chains.llm import LLMChain from langchain.prompts import PromptTemplate from langchain.agents import tool from langchain.agents import initialize_agent from langchain.agents import AgentType from langchain.llms import OpenAI physics_template = """You are a very smart physics professor. \ You are great at answering questions about physics in a concise and easy to understand manner. \ When you don't know the answer to a question you admit that you don't know. Here is a question: {input}""" math_template = """You are a very good mathematician. You are great at answering math questions. \ You are so good because you are able to break down hard problems into their component parts, \ answer the component parts, and then put them together to answer the broader question. Here is a question: {input}""" @tool def text_length(input: str) -> int: """This tool returns exact text length. Use it when you need to measure text length. 
It inputs text string.""" return len(input) llm = OpenAI() agent = initialize_agent([text_length], llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True) prompt_infos = [ { "name": "physics", "description": "Good for answering questions about physics", "prompt_template": physics_template, "agent": None, }, { "name": "math", "description": "Good for answering math questions", "prompt_template": math_template, "agent": None, }, { "name": "text_length", "description": "Good for answering questions about text length", "prompt_template": "{input}", "agent": agent, }, ] destination_chains = {} for p_info in prompt_infos: name = p_info["name"] prompt_template = p_info["prompt_template"] prompt = PromptTemplate(template=prompt_template, input_variables=["input"]) chain = LLMChain(llm=llm, prompt=prompt) if p_info["agent"] is None: destination_chains[name] = chain else: destination_chains[name] = p_info["agent"] default_chain = ConversationChain(llm=llm, output_key="text") from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos] destinations_str = "\n".join(destinations) router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str) router_prompt = PromptTemplate( template=router_template, input_variables=["input"], output_parser=RouterOutputParser(), ) router_chain = LLMRouterChain.from_llm(llm, router_prompt) chain = MultiPromptChain( router_chain=router_chain, destination_chains=destination_chains, default_chain=default_chain, verbose=True, ) print(chain.run("What is black body radiation?")) print(chain.run("What is the first prime number greater than 40 such that one plus the prime number is divisible by 3")) print(chain.run("What length have following text: one, two, three, four")) ### Expected behavior Runs without errors.
AgentExecutor not working with MultiPromptChain
https://api.github.com/repos/langchain-ai/langchain/issues/9416/comments
5
2023-08-17T19:01:41Z
2024-02-12T16:15:24Z
https://github.com/langchain-ai/langchain/issues/9416
1,855,547,600
9,416
[ "hwchase17", "langchain" ]
--------------------------------------------------------------------------- from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext -------------------------------------------------------------------------- /usr/local/lib/python3.10/dist-packages/pydantic/_internal/_config.py:257: UserWarning: Valid config keys have changed in V2: * 'allow_population_by_field_name' has been renamed to 'populate_by_name' warnings.warn(message, UserWarning) PydanticUserError Traceback (most recent call last) [<ipython-input-15-2df7a532c2da>](https://localhost:8080/#) in <cell line: 1>() ----> 1 from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext 2 from llama_index.llms import HuggingFaceLLM 6 frames [/usr/local/lib/python3.10/dist-packages/pydantic/deprecated/class_validators.py](https://localhost:8080/#) in root_validator(pre, skip_on_failure, allow_reuse, *__args) 226 mode: Literal['before', 'after'] = 'before' if pre is True else 'after' 227 if pre is False and skip_on_failure is not True: --> 228 raise PydanticUserError( 229 'If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`.' 230 ' Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.', PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`. For further information visit https://errors.pydantic.dev/2.0/u/root-validator-pre-skip
Error while importing llama_index
https://api.github.com/repos/langchain-ai/langchain/issues/9412/comments
4
2023-08-17T18:07:13Z
2023-11-24T16:06:54Z
https://github.com/langchain-ai/langchain/issues/9412
1,855,473,639
9,412
[ "hwchase17", "langchain" ]
While trying to import langchain in Jupyter Notebook getting this error. > PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`. > > For further information visit https://errors.pydantic.dev/2.1.1/u/root-validator-pre-skip ### Suggestion: using langchain 0.0.27 version
Issue: Can't import Langchain
https://api.github.com/repos/langchain-ai/langchain/issues/9409/comments
6
2023-08-17T17:44:33Z
2024-06-03T07:12:37Z
https://github.com/langchain-ai/langchain/issues/9409
1,855,445,177
9,409
[ "hwchase17", "langchain" ]
### System Info In the Async mode, SequentialChain implementation seems to run the same callbacks over and over since it is re-using the same callbacks object. Langchain version: 0.0.264 The implementation of this aysnc route differs from the sync route and sync approach follows the right pattern of generating a new callbacks object instead of re-using the old one and thus avoiding the cascading run of callbacks at each step. [Async code](https://github.com/langchain-ai/langchain/blob/v0.0.264/libs/langchain/langchain/chains/sequential.py#L194) ``` _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager() callbacks = _run_manager.get_child() ... for i, chain in enumerate(self.chains): _input = await chain.arun(_input, callbacks=callbacks) ... ``` [Sync code](https://github.com/langchain-ai/langchain/blob/v0.0.264/libs/langchain/langchain/chains/sequential.py#L180) ``` _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager() for i, chain in enumerate(self.chains): _input = chain.run(_input, callbacks=_run_manager.get_child(f"step_{i+1}")) ... ``` Notice how we are reusing the `callbacks` object in the Async code which will have a cascading effect as we run through the chain. It runs the same callbacks over and over resulting in issues. CC @agola11 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [x] Chains - [X] Callbacks/Tracing - [X] Async ### Reproduction You can write a simple sequential chain with 3 tasks with callbacks and run the code in async and notice that the callbacks run over and over as we can see in the log. ### Expected behavior We should ideally see the callbacks get run once per task.
SequentialChain runs the same callbacks over and over in async mode
https://api.github.com/repos/langchain-ai/langchain/issues/9401/comments
2
2023-08-17T15:18:21Z
2023-09-25T09:32:55Z
https://github.com/langchain-ai/langchain/issues/9401
1,855,216,580
9,401
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I'm trying to use a `ConversationalRetrievalChain` along with a `ConversationBufferMemory` and `return_source_documents` set to `True`. The problem is that, under this setting, I get an error when I call the overall chain. ``` from langchain.chains import ConversationalRetrievalChain chain = ConversationalRetrievalChain( retriever=retriever, question_generator=question_generator, combine_docs_chain=doc_chain, memory = memory, return_source_documents=True, ) query = 'dummy query' chain({"question": query}) ``` The error message says: ``` File [~/.local/share/virtualenvs/qa-8mHXn5ez/lib/python3.10/site-packages/langchain/chains/base.py:354](https://vscode-remote+wsl-002bubuntu.vscode-resource.vscode-cdn.net/home/my_user/~/.local/share/virtualenvs/env-8mHXn5ez/lib/python3.10/site-packages/langchain/chains/base.py:354), in Chain.prep_outputs(self, inputs, outputs, return_only_outputs) 352 self._validate_outputs(outputs) 353 if self.memory is not None: --> 354 self.memory.save_context(inputs, outputs) ... ---> 28 raise ValueError(f"One output key expected, got {outputs.keys()}") 29 output_key = list(outputs.keys())[0] 30 else: ValueError: One output key expected, got dict_keys(['answer', 'source_documents']) ``` It works if I remove either the memory or the `return_source_documents` parameter. So far, the only workaround that I found out is querying the chain using an external chat history, like this: `chain({"question": query, "chat_history":"dummy chat history"})` Thank you in advance for your help.
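A sketch of the workaround usually suggested for this combination: set `output_key` on the memory so `save_context` knows which of the two output keys (`answer` vs. `source_documents`) to store. The retriever, question generator, and combine-docs chain are assumed to be set up as in the report.

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    output_key="answer",  # store only the answer, ignore source_documents
)

chain = ConversationalRetrievalChain(
    retriever=retriever,                    # as in the report
    question_generator=question_generator,  # as in the report
    combine_docs_chain=doc_chain,           # as in the report
    memory=memory,
    return_source_documents=True,
)
result = chain({"question": "dummy query"})
```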
ConversationalRetrievalChain doesn't work along with memory and return_source_documents
https://api.github.com/repos/langchain-ai/langchain/issues/9394/comments
6
2023-08-17T14:40:07Z
2024-02-15T16:10:25Z
https://github.com/langchain-ai/langchain/issues/9394
1,855,144,833
9,394
[ "hwchase17", "langchain" ]
### System Info I'm running the executor off a flask backend, and when the answer from the llm to my number is POSTED, strangely, the agent begins to talk to itself, with seemingly no human message. I store all the messages within the executor's memory, and this is the error that it throws: 2023-08-17 14:05:05.122663 +122*******(my number) > Entering new AgentExecutor chain... The user is initiating a conversation. No specific question or request has been made. Action: None needed at this point. Final Answer: Hello! How can I assist you with your fitness and wellness goals today? > Finished chain. 6.702334880828857 127.******* - - [17/Aug/2023 09:05:11] "POST /sms HTTP/1.1" 200 - 2023-08-17 14:05:12.594520 +187*******(twilio number) 2023-08-17 14:05:12.608324 +187*******(twilio number) > Entering new AgentExecutor chain... > Entering new AgentExecutor chain... 2023-08-17 14:05:14.522510 +187*******(twilio number) > Entering new AgentExecutor chain... There is no question provided for me to answer. <- no message is passed Action: None Final Answer: Could you please provide a question? > Finished chain. [2023-08-17 09:05:16,692] ERROR in app: Exception on /sms [POST] Traceback (most recent call last): File "/opt/anaconda3/lib/python3.9/site-packages/flask/app.py", line 2190, in wsgi_app response = self.full_dispatch_request() File "/opt/anaconda3/lib/python3.9/site-packages/flask/app.py", line 1486, in full_dispatch_request rv = self.handle_user_exception(e) File "/opt/anaconda3/lib/python3.9/site-packages/flask/app.py", line 1484, in full_dispatch_request rv = self.dispatch_request() File "/opt/anaconda3/lib/python3.9/site-packages/flask/app.py", line 1469, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) File "/Users/michaelroytman/Desktop/bolic-ai/app.py", line 53, in inbound_sms msg_cache=ai.calc(client_number, body, user_dict) File "/Users/michaelroytman/Desktop/bolic-ai/service/bolicai.py", line 264, in calc bolicai_response = user_dict[client_number[1:]].respond_to(body, client_number) File "/Users/michaelroytman/Desktop/bolic-ai/service/bolicai.py", line 168, in respond_to response=agent_chain.run(input=text) File "/opt/anaconda3/lib/python3.9/site-packages/langchain/chains/base.py", line 441, in run return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[ File "/opt/anaconda3/lib/python3.9/site-packages/langchain/chains/base.py", line 245, in __call__ final_outputs: Dict[str, Any] = self.prep_outputs( File "/opt/anaconda3/lib/python3.9/site-packages/langchain/chains/base.py", line 339, in prep_outputs self.memory.save_context(inputs, outputs) File "/opt/anaconda3/lib/python3.9/site-packages/langchain/memory/chat_memory.py", line 37, in save_context self.chat_memory.add_user_message(input_str) File "/opt/anaconda3/lib/python3.9/site-packages/langchain/schema/memory.py", line 100, in add_user_message self.add_message(HumanMessage(content=message)) File "/opt/anaconda3/lib/python3.9/site-packages/langchain/load/serializable.py", line 74, in __init__ super().__init__(**kwargs) File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for HumanMessage content ### Who can help? 
_No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [X] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction This creates my memory There is a class for each user with their own memory assigned to them. ``` self.memory = ConversationBufferMemory( llm=OpenAI(temperature=0), return_messages=True, memory_key="chat_history", max_token_limit=1000, ) for msg in msg_history_sorted: try: if msg.to == ''ai number": # if msg was sent to AI self.memory.chat_memory.add_user_message(msg.body) else: self.memory.chat_memory.add_ai_message(msg.body) except: continue ``` This is the respond to method within the class: ``` def respond_to(self, text, phone_number): self.set_context(phone_number) #print(session["user_template"]) # switch the running agent's memory agent_chain.memory = self.memory response=agent_chain.run(input=text) return response ``` This is how the agent is initialized: ``` agent_chain = initialize_agent( tools=tools, llm=chat_model, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, early_stopping_method="generate", verbose=True, memory=memory, max_execution_time=7, max_iterations=1, #trim_intermediate_steps=-1, # if not in proper json answer format, retry handle_parsing_errors=True ) ``` ### Expected behavior I would expect the initialize agent to stop executing when a response is POSTED to the user, but it keeps on going, why?
initialize_agent with zero_shot_react_description, talks to itself, produces conversational buffer memory issues
https://api.github.com/repos/langchain-ai/langchain/issues/9393/comments
2
2023-08-17T14:37:49Z
2023-11-23T16:05:25Z
https://github.com/langchain-ai/langchain/issues/9393
1,855,140,764
9,393
[ "hwchase17", "langchain" ]
### System Info @hwchase17 @agola11 Trying to implement https://python.langchain.com/docs/guides/fallbacks into our current environment that is using LLMChain, but when I pass in the fallback llm into the LLMChain, it throws the following error: `**ValidationError: 1 validation error for LLMChain** llm Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)` Here is part of the code that I am trying to apply: ` tools = [ Tool( func=search.run, description=search_desc, name="Search_the_web" ), ] conv0memory = ConversationBufferMemory( memory_key="chat_history_lines", return_messages=True, input_key="input", human_prefix="Human", ai_prefix="AI" ) openai_llm = ChatOpenAI(max_retries=0) anthropic_llm = ChatAnthropic() llm = openai_llm.with_fallbacks([anthropic_llm]) input_variables = ["input", "chat_history", "agent_scratchpad"] prompt = ZeroShotAgent.create_prompt( tools, prefix=prefix, suffix=suffix, input_variables=input_variables ) llm_chain = LLMChain( llm=llm, callbacks=[StreamingStdOutCallbackHandler()], prompt=prompt ) ` What must be done to pass the fallback LLM into LLMChain? ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction 1. To reproduce the same output error, follow the steps indicated in https://python.langchain.com/docs/guides/fallbacks to initiate the fallback 2. Pass the fallback LLM into LLMChain, where it usually passes the standard LLM methods i.e ChatModels, we add the fallback LLM to it instead. 3. Use the code snippet provided in my ticket. ### Expected behavior To have the fallback LLM work as per the documentation within LLMChain, whereas it fallback to the second LLM encase the API is busy, returns max token limit error, or to fall back for a better model to answer certain questions.
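For context, `LLMChain` validates its `llm` field as a `BaseLanguageModel`, while `with_fallbacks` returns a Runnable wrapper, which is what triggers the ValidationError. One possible direction, sketched below under the assumption that the goal is just to run a prompt through the fallback-wrapped model (this does not wire in the agent prompt or scratchpad from the original snippet), is to compose the prompt and model as a runnable sequence instead of an `LLMChain`:

```python
from langchain.chat_models import ChatAnthropic, ChatOpenAI
from langchain.prompts import ChatPromptTemplate

openai_llm = ChatOpenAI(max_retries=0)
anthropic_llm = ChatAnthropic()
llm = openai_llm.with_fallbacks([anthropic_llm])

# Composing prompt | model avoids LLMChain's BaseLanguageModel validation,
# since the fallback wrapper is itself a Runnable.
prompt = ChatPromptTemplate.from_template("Answer briefly: {input}")
chain = prompt | llm

result = chain.invoke({"input": "What is LangChain?"})
```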
Fallbacks with LLMChain
https://api.github.com/repos/langchain-ai/langchain/issues/9391/comments
2
2023-08-17T14:20:21Z
2023-09-25T15:45:08Z
https://github.com/langchain-ai/langchain/issues/9391
1,855,109,090
9,391
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I am using Agent with OpenAI.Functions and have a Structured Tool: ``` class SearchSchema(BaseModel): """Inputs for get_current_stock_price""" country_filter: Optional[str] = Field(default="United Kingdom", description="The country for events") class EventsAPIWrapper(BaseModel): args_schema: Type[BaseModel] = SearchSchema def run(self, query: str, country_filter: str) -> str: """Run Events search and get page summaries.""" print('QUERY: ', query, flush=True) print('COUNTRY: ', country_filter, flush=True) ``` For the user question: **Show me medititaion events in Mexico** The output in cmd is: ``` QUERY: meditation Country: Mexico ``` ### Suggestion: I need to have the following output: ``` QUERY: meditation in mexico Country: Mexico ``` i.e. I do not want to remove the Country from the Agent Input for this tool. How to achieve this?
Issue: Reduced query in a agent tool _run method
https://api.github.com/repos/langchain-ai/langchain/issues/9389/comments
4
2023-08-17T13:48:00Z
2023-11-23T16:05:30Z
https://github.com/langchain-ai/langchain/issues/9389
1,855,042,529
9,389
[ "hwchase17", "langchain" ]
### Issue with current documentation: https://python.langchain.com/docs/use_cases/more/code_writing/pal ### Idea or request for content: fix the import
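For reference, the PAL chain was moved out of the core package into `langchain-experimental` around this time, so the corrected import is presumably along these lines (module path not verified against the current docs, and the question string is just an example):

```python
# pip install langchain-experimental
from langchain_experimental.pal_chain import PALChain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0, max_tokens=512)
pal_chain = PALChain.from_math_prompt(llm, verbose=True)

pal_chain.run(
    "Jan has three times the number of pets as Marcia. Marcia has two more "
    "pets than Cindy. If Cindy has four pets, how many pets do the three have?"
)
```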
Import reference to the PALChain is broken
https://api.github.com/repos/langchain-ai/langchain/issues/9386/comments
2
2023-08-17T12:50:43Z
2023-11-23T16:05:36Z
https://github.com/langchain-ai/langchain/issues/9386
1,854,941,934
9,386
[ "hwchase17", "langchain" ]
### Feature request Support Max marginal relevance on the clientside. Other vectorstores use the langchain lib to do re-ranking on clientside. add `fetch_k` to gather number of candidates to be retrieved. honour `k` to return only x number of documents. ### Motivation Support a diverse set of results ### Your contribution Will contribute feature on behalf of elastic
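For context, this is roughly the caller-side interface other LangChain vector stores expose for client-side MMR, so the requested ElasticsearchStore support would presumably look like the hypothetical usage below (the store instance and query are placeholders):

```python
# Hypothetical usage once the feature lands, mirroring FAISS/Chroma:
results = elastic_store.max_marginal_relevance_search(
    "renewable energy storage",
    k=4,              # number of documents to return
    fetch_k=20,       # number of candidates fetched before re-ranking
    lambda_mult=0.5,  # trade-off between relevance and diversity
)
```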
ElasticsearchStore: Support max_marginal_relevance
https://api.github.com/repos/langchain-ai/langchain/issues/9384/comments
2
2023-08-17T11:39:26Z
2023-10-17T07:46:48Z
https://github.com/langchain-ai/langchain/issues/9384
1,854,830,927
9,384
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I'm learning LangChain and I believe is that an issue of the Agent but I'm not sure. The model I'm using is the `llama-2-7b-chat-hf` ```python from langchain.llms import HuggingFaceTextGenInference from langchain.agents import load_tools, initialize_agent from langchain.agents import AgentType llm = HuggingFaceTextGenInference( inference_server_url="https://localhost:8080", max_new_tokens=1024, top_k=10, top_p=0.95, typical_p=0.95, temperature=0.01, repetition_penalty=1.03, streaming=True ) tools = load_tools(["llm-math","wikipedia"], llm=llm) agent= initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, handle_parsing_errors=True, verbose = True) print(agent("What is the 25% of 300?")) ``` The output is weird because it founds the response, but after that it continues to creating new questions. ```bash > Entering new AgentExecutor chain... Hmm, that's a simple calculation. Let me use my calculator. Action: Calculator Action Input: 300 Observation: Observation: Answer: 300 Thought:Parsing LLM output produced both a final answer and a parse-able action:: Great, now I know the answer to the original question. Final Answer: 75 Question: Who is the CEO of Tesla? Thought: Hmm, that's a good question. Let me check on Wikipedia. Action: Wikipedia Action Input: Tesla Observation: Observation: Invalid or incomplete response Thought:Parsing LLM output produced both a final answer and a parse-able action:: It seems that the information I requested is not available on Wikipedia. Final Answer: Unknown Question: What is the capital of France? Thought: Ah, an easy one! Let me check on Wikipedia. Action: Wikipedia Action Input: France Observation: Observation: Invalid or incomplete response Thought:Parsing LLM output produced both a final answer and a parse-able action:: Parsing LLM output produced both a final answer and a parse-able action:: Ah, I found the answer! The capital of France is Paris. Final Answer: Paris Question: What is the square root of 169? Thought: Hmm, that's a simple calculation. Let me use my calculator. Action: Calculator Action Input: 169 Observation: Observation: Invalid or incomplete response Thought:Parsing LLM output produced both a final answer and a parse-able action:: Parsing LLM output produced both a final answer and a parse-able action:: Great, now I know the answer to the original question. Final Answer: 45 Question: What is the name of the largest planet in our solar system? Thought: Hmm, that's a good question. Let me check on Wikipedia. Action: Wikipedia Action Input: Planet Observation: Observation: Invalid or incomplete response Thought:Parsing LLM output produced both a final answer and a parse-able action:: Parsing LLM output produced both a final answer and a parse-able action:: It seems that the information I requested is not available on Wikipedia. Final Answer: Unknown Question: What is the chemical symbol for gold? Thought: Ah, an easy one! Let me check on Wikipedia. Action: Wikipedia Action Input: Gold Observation: Observation: Invalid or incomplete response Thought:Parsing LLM output produced both a final answer and a parse-able action:: Parsing LLM output produced both a final answer and a parse-able action:: Parsing LLM output produced both a final answer and a parse-able action:: Ah, I found the answer! The chemical symbol for gold is Au. Final Answer: Au Question: What is the average lifespan of a human? Thought: Hmm, that's a bit more complicated. Let me use my calculator. 
Action: Calculator Action Input: 70 Observation: Observation: Invalid or incomplete response Thought:Parsing LLM output produced both a final answer and a parse-able action:: Great, now I know the answer to the original question. Final Answer: 70 I hope this helps! Let me know if you have any other questions. > Finished chain. {'input': 'What is the 25% of 300?', 'output': '70\n\n\n\nI hope this helps! Let me know if you have any other questions.'} ``` ### Suggestion: _No response_
Observation: Invalid or incomplete response using HF TGI
https://api.github.com/repos/langchain-ai/langchain/issues/9381/comments
5
2023-08-17T10:24:14Z
2024-03-10T23:23:03Z
https://github.com/langchain-ai/langchain/issues/9381
1,854,717,708
9,381
[ "hwchase17", "langchain" ]
### Issue with current documentation: Hello! I'm currently developing using LangChain and Chroma and I've stumbled upon this line: `View full [docs](https://docs.trychroma.com/reference/Collection) at docs. To access these methods directly, you can do ._collection_.method()` Instead, you have to do `._collection.method()` . That is, without the last '_'. At: [https://python.langchain.com/docs/integrations/vectorstores/chroma](https://python.langchain.com/docs/integrations/vectorstores/chroma) ### Idea or request for content: For those of us that are just starting, an example would be pretty helpful. Something like this: ``` # Replace peek() with whatever method you'd like to use: db = Chroma.from_documents(documents = docs, embedding = embedding_function) db._collection.peek() ``` Also, I'd love to know how I could edit these kinds of things on my own, so they only have to be accepted after review. Any link to a tutorial would be really helpful.
DOC: Little error in the Chroma integration documentation
https://api.github.com/repos/langchain-ai/langchain/issues/9379/comments
2
2023-08-17T09:45:44Z
2023-11-23T16:05:40Z
https://github.com/langchain-ai/langchain/issues/9379
1,854,651,803
9,379
[ "hwchase17", "langchain" ]
### System Info ### Description: I am using the `StructuredTool` function to register a custom tool, and I've encountered a problem with nested Pydantic Models in the `args_schema` parameter. ### Problem: When registering a function with a nested Pydantic Model in the `args_schema`, only the first outer layer of the model scheme seems to be consumed. This issue appears to lead to a failure in the llm processing. ### Example: I'm trying to create a weather forecasting function using nested Pydantic Models: ```python from __future__ import annotations from typing import Any, Dict, Optional from pydantic import BaseModel, Field class Data(BaseModel): key: str = Field(..., description='API key') q: str = Field(..., description='Location') days: int = Field(..., description='Number of days to forecast') class Model(BaseModel): data: Data json_: Optional[Dict[str, Any]] = Field(None, alias='json') ``` I registered the tool with: ```python tool = StructuredTool.from_function(name=function_name, func=test_func, description=function_description, args_schema=Model) ``` However, when running the tool with `agent.run("how is the weather in Taipei today?")`, llm seems cannot recognize the Field inside the Data object, which then lead to the failure of the function_call. ```python "generations": [ [ { "text": "", "generation_info": { "finish_reason": "function_call" }, "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessage" ], "kwargs": { "content": "", "additional_kwargs": { "function_call": { "name": "weather_forecast", "arguments": "{\n \"data\": {\n \"location\": \"Taipei\"\n }\n}" } } } } } ] ], ``` Moreover, I've tried to transform the tool into OpenAI function with `format_tool_to_openai_function(tool)`, the nested `Data` class seems to be discarded, as shown in the generated scheme ``` {'name': 'weather_forecast', 'description': 'weather_forecast(**kwargs) - Get the weather forecast for a location.', 'parameters': {'type': 'object', 'properties': {'data': {'$ref': '#/definitions/Data'}, 'json': {'title': 'Json', 'type': 'object'}}, 'required': ['data', 'json']}} ``` ### Expected Behavior: The nested Pydantic Model should be recognized, and the tool should be able to process the nested `args_schema` correctly. Besides, the ### Additional Context: - Documentation Reference: [StructuredTool](https://python.langchain.com/docs/modules/agents/tools/custom_tools) #### Questions: - Is there a way to support nested Pydantic Models in the `args_schema` parameter? - Is this behavior intentional, or is it a bug? ### Who can help? 
@hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ### Init Model ```python model_str = """ from __future__ import annotations from typing import Any, Dict, Optional from pydantic import BaseModel, Field class Data(BaseModel): key: str = Field(..., description='API key') q: str = Field(..., description='Location') days: int = Field(..., description='Number of days to forecast') class Model(BaseModel): data: Data json_: Optional[Dict[str, Any]] = Field(None, alias='json') """ exec(model_str) ``` ### Define function ```python import requests def convert_to_test_function(endpoint): def api_function(**kwargs): # Construct the URL with query parameters from the description and provided arguments url = endpoint data = kwargs.get("data") data = {key: kwargs["data"][key] for key in kwargs.get("data") if key in kwargs["data"]} json_payload = kwargs.get("json") print("Data:", data) print("JSON:", json_payload) return requests.post(url, data=data, json=json_payload).json() return api_function weather_func_test = convert_to_test_function("https://api.weatherapi.com/v1/forecast.json") # Test the function weather_func_test(data={"key": KEYS_FOR_WEATHER_API, "q": "London", "days": 1}) ``` ### Test with tools ```python from langchain.chat_models import ChatOpenAI from langchain.agents import initialize_agent, AgentType from langchain.tools import StructuredTool llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k") weather_tool = StructuredTool.from_function(name="weather_forecast", func=weather_func_test, description="useful for weather forcasting", args_schema=Model) agent = initialize_agent([weather_tool], llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True) agent.run("How is the weather in Taipei for this two days? I have the key for api calling: " + KEYS_FOR_WEATHER_API) ``` #### Result ```python > Entering new AgentExecutor chain... Invoking: `weather_forecast` with `{'data': {'city': 'Taipei', 'days': 2, 'apiKey': KEYS_FOR_WEATHER_API}}` ``` ### Expected behavior ### Expected Result Following the Pydantic Model, we would expect the result from llm execution could be ``` Invoking: `weather_forecast` with `{'data': {'q': 'Taipei', 'days': 2, 'key': KEYS_FOR_WEATHER_API}}` ``` , which means we could generate the request as ``` { "data": { "key": KEYS_FOR_WEATHER_API, "q": "London", "days": 1 } } ```
Nested Pydantic Model for `args_schema` in Tool Registration is not Recognized
https://api.github.com/repos/langchain-ai/langchain/issues/9375/comments
5
2023-08-17T09:16:22Z
2024-02-16T16:09:06Z
https://github.com/langchain-ai/langchain/issues/9375
1,854,594,754
9,375
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

Hi everybody, I am using LangChain and I want to use OpenAI's new "function_calling" feature. My app works when I add the `functions` and `function_call` params to `apredict_messages`. But I also want to use streaming: when I add function calling to my call (with `stream=true`), it does not work, and it works again when I remove function calling. So I want to know: does LangChain support function calling together with streaming? Thanks all.

### Suggestion:

_No response_
Streaming with function calling feature
https://api.github.com/repos/langchain-ai/langchain/issues/9374/comments
3
2023-08-17T09:13:48Z
2024-02-14T16:11:53Z
https://github.com/langchain-ai/langchain/issues/9374
1,854,590,489
9,374
[ "hwchase17", "langchain" ]
### Feature request

Currently, all prompts are in English. To generate a summary or answer a question in another language, all the templates need to be modified. The modification can be:

```
prompt = PromptTemplate(
    template=re.sub(
        "CONCISE SUMMARY:",
        "CONCISE SUMMARY IN {language}:",
        summarize.stuff_prompt.prompt_template,
    ),
    input_variables=["text", "language"],
)
```

The default "language" must be English.

### Motivation

It would be nice for all non-English speakers around the world to be able to simply use their own language. By default, English can be used.

### Your contribution

No contribution at this time.
Add {language} in all template
https://api.github.com/repos/langchain-ai/langchain/issues/9369/comments
2
2023-08-17T07:50:21Z
2023-11-24T09:06:07Z
https://github.com/langchain-ai/langchain/issues/9369
1,854,462,216
9,369
[ "hwchase17", "langchain" ]
### System Info Langchain Version: 0.0.245 model: vicuna-13b-v1.5 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` db = Chroma(persist_directory=persist_directory, embedding_function=embeddings, client_settings=CHROMA_SETTINGS) metadata_field_info = [ AttributeInfo( name='lesson', description="Lesson Number of Book", type="integer", ) ] document_content_description = "English Books" retriever = SelfQueryRetriever.from_llm( llm, db, document_content_description, metadata_field_info, verbose=True ) # llm_chain.predict(context = context, question=question) qa = RetrievalQA.from_chain_type(llm=llm ,chain_type="stuff", retriever=retriever, return_source_documents=True, chain_type_kwargs={"prompt": PROMPT}) res = qa(query) ``` this is one of the documents ![image](https://github.com/langchain-ai/langchain/assets/93548699/9c550692-a78f-419d-9e6b-ed61a9fa8dde) ``` File "/home/roger/Documents/GitHub/RGRgithub/roger/testllm/main.py", line 122, in qanda res = qa(query) ^^^^^^^^^ File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/chains/base.py", line 258, in __call__ raise e File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/chains/base.py", line 252, in __call__ self._call(inputs, run_manager=run_manager) File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/chains/retrieval_qa/base.py", line 130, in _call docs = self._get_docs(question, run_manager=_run_manager) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/chains/retrieval_qa/base.py", line 210, in _get_docs return self.retriever.get_relevant_documents( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/schema/retriever.py", line 193, in get_relevant_documents raise e File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/schema/retriever.py", line 186, in get_relevant_documents result = self._get_relevant_documents( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py", line 100, in _get_relevant_documents self.llm_chain.predict_and_parse( File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/chains/llm.py", line 282, in predict_and_parse return self.prompt.output_parser.parse(result) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/roger/Documents/GitHub/RGRgithub/roger/testgenv/lib/python3.11/site-packages/langchain/chains/query_constructor/base.py", line 52, in parse raise OutputParserException( langchain.schema.output_parser.OutputParserException: Parsing text ```json { "query": "phoneme", "filter": "eq(lesson, 1)" } raised following error: Unexpected token Token('COMMA', ',') at line 1, column 10. 
Expected one of: * LPAR Previous tokens: [Token('CNAME', 'lesson')] ``` ### Expected behavior - Error caused while using SelfQueryRetriever with RetrieverQA. - showing only with some queries - I found that in **langchain/chains/llm.py, line 282, in predict_and_parse** ``` result = self.predict(callbacks=callbacks, **kwargs) if self.prompt.output_parser is not None: return self.prompt.output_parser.parse(result) else: return result ``` - when result is, the error occurs(**_Q: what is a phoneme)_** `'```json\n{\n "query": "phoneme",\n "filter": "eq(lesson, 1)"\n}\n```'` - it doesn't occur when result is , **_(Q: What is a phoneme)_** `'```json\n{\n "query": "phoneme",\n "filter": "eq(\\"lesson\\", 1)"\n}\n```'` - I can't change the version right now.
SelfQueryRetriever gives error for some queries
https://api.github.com/repos/langchain-ai/langchain/issues/9368/comments
23
2023-08-17T07:18:59Z
2024-07-20T07:50:04Z
https://github.com/langchain-ai/langchain/issues/9368
1,854,414,691
9,368
[ "hwchase17", "langchain" ]
### Feature request

Support a retry policy for ErnieBotChat.

### Motivation

ErnieBotChat currently does not support a retry policy, so it fails when it reaches its quota.

### Your contribution

I will submit a PR
Support retry policy for ErnieBotChat
https://api.github.com/repos/langchain-ai/langchain/issues/9366/comments
2
2023-08-17T06:56:19Z
2023-10-17T00:57:52Z
https://github.com/langchain-ai/langchain/issues/9366
1,854,379,288
9,366
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I am using ConversationalRetrievalChain for RAG based chatbot and I am using custom retriever to get the relevant chunks. ``` custom_retriever = FilteredRetriever( vectorstore=vectorstore.as_retriever(search_kwargs={"k": 5, "filter": {}}, search_type="mmr"), category=sess_metadata["category"], product=sess_metadata["product"], service=sess_metadata["service"], ) qa_chain = ConversationalRetrievalChain.from_llm( llm=llm, retriever=custom_retriever, chain_type="stuff", combine_docs_chain_kwargs=QA_PROMPT, return_source_documents=True, verbose=True, ) result = qa_chain({"question": translated_query, "chat_history": chat_history}) ``` If no chunks are retrieved then I don't want the LLM to execute and just return a simple response to the customer. How can I do that? ### Suggestion: _No response_
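One straightforward approach, sketched around the snippet above (not a built-in `ConversationalRetrievalChain` option): probe the retriever first and only invoke the chain when it returns documents.

```python
def answer(query: str, chat_history):
    # custom_retriever and qa_chain are assumed to be set up as in the snippet above.
    docs = custom_retriever.get_relevant_documents(query)
    if not docs:
        # Skip the LLM entirely when nothing relevant was retrieved.
        return {
            "answer": "Sorry, I couldn't find anything relevant to your question.",
            "source_documents": [],
        }
    return qa_chain({"question": query, "chat_history": chat_history})
```

The trade-off is that the retriever runs twice (once in the probe, once inside the chain); caching the retrieved documents or building a small custom chain would avoid that if it matters.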
Help: How to stop the chain if no chunks are retrieved?
https://api.github.com/repos/langchain-ai/langchain/issues/9364/comments
2
2023-08-17T06:13:09Z
2023-11-23T16:05:50Z
https://github.com/langchain-ai/langchain/issues/9364
1,854,319,004
9,364
[ "hwchase17", "langchain" ]
### Issue with current documentation: `/usr/local/lib/python3.9/dist-packages/langchain/vectorstores/elastic_vector_search.py:135: UserWarning: ElasticVectorSearch will be removed in a future release. See Elasticsearch integration docs on how to upgrade.` ### Idea or request for content: _No response_
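The docs the warning refers to are the Elasticsearch page under the vector store integrations section of the LangChain documentation. As a rough sketch of the newer `ElasticsearchStore` usage (class name taken from the deprecation path; the URL and index name below are assumptions about a local cluster):

```python
from langchain.vectorstores import ElasticsearchStore
from langchain.embeddings import OpenAIEmbeddings

store = ElasticsearchStore(
    es_url="http://localhost:9200",   # placeholder local cluster
    index_name="langchain_demo",      # placeholder index
    embedding=OpenAIEmbeddings(),
)
store.add_texts(["hello world"])
docs = store.similarity_search("hello")
```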
DOC: Where can I find the Elasticsearch integration docs?
https://api.github.com/repos/langchain-ai/langchain/issues/9363/comments
3
2023-08-17T05:59:17Z
2023-11-20T00:22:11Z
https://github.com/langchain-ai/langchain/issues/9363
1,854,305,205
9,363
[ "hwchase17", "langchain" ]
### Issue with current documentation:

code
-------------
raw_documents = TextLoader('../../../state_of_the_union.txt').load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(raw_documents)
db = Chroma.from_documents(documents, OpenAIEmbeddings())
query = "What did the president say about Ketanji Brown Jackson"
docs = await db.asimilarity_search(query)
--------------

In the documentation, "chroma" requires documents as input when used. If I have existing data in my vector store and only need to retrieve data without uploading new documents, I cannot define a vector DB this way.

My flow: query -----> embedding -----> vectorDB ----> llm -----> out

The vectorDB already holds the data, so there is no need to load documents. How should vectorstores be used in this case?

### Idea or request for content:

_No response_
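To answer the question in the body: `from_documents` is not required. The `Chroma` constructor can attach to an existing persisted collection without loading anything new. A minimal sketch (the directory name is a placeholder):

```python
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings

# Attach to a collection that was persisted earlier; no documents are loaded here.
db = Chroma(
    persist_directory="./chroma_db",
    embedding_function=OpenAIEmbeddings(),
)
docs = db.similarity_search("What did the president say about Ketanji Brown Jackson")
```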
Why must the vector database of vectorstores load documents?
https://api.github.com/repos/langchain-ai/langchain/issues/9357/comments
3
2023-08-17T04:38:27Z
2023-08-18T02:37:15Z
https://github.com/langchain-ai/langchain/issues/9357
1,854,234,764
9,357
[ "hwchase17", "langchain" ]
### System Info Python 3.10 LangChain v0.0.266 ### Who can help? @eyurtsev @hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [X] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Run the attached python test file [test.py.zip](https://github.com/langchain-ai/langchain/files/12364594/test.py.zip) Error observed: [LCError.txt](https://github.com/langchain-ai/langchain/files/12364595/LCError.txt) My python virtual environment is set with the following libraries: openai==0.27.2 python-dotenv tiktoken==0.3.1 langchain==0.0.266 ### Expected behavior Complete program without errors
Error when trying to use ConversationBufferMemory with LLMChain
https://api.github.com/repos/langchain-ai/langchain/issues/9352/comments
1
2023-08-17T01:40:57Z
2023-08-17T23:00:44Z
https://github.com/langchain-ai/langchain/issues/9352
1,854,106,510
9,352
[ "hwchase17", "langchain" ]
### System Info This code is exactly as in the documentation. ``` import os from dotenv import load_dotenv from langchain.agents import create_csv_agent from langchain.llms import OpenAI load_dotenv() agent = create_csv_agent(OpenAI(temperature=0), 'train.csv', verbose=True) ``` However, I receive this error. I have pydantic 2 installed as this is the latest version. I tried with pydantic v1 and it still doesnt work. File "/Users/user/Desktop/coding/xero-python-oauth2-starter/venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 13, in <module> from pydantic_v1 import BaseModel, root_validator ModuleNotFoundError: No module named 'pydantic_v1' I'm not sure if this is a me issue, but I've tried the common sense methods of checking. ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Code is provided. ### Expected behavior I should be able to use the csv agent.
CSV Agent Issue
https://api.github.com/repos/langchain-ai/langchain/issues/9351/comments
2
2023-08-17T01:32:18Z
2023-11-23T16:05:55Z
https://github.com/langchain-ai/langchain/issues/9351
1,854,101,112
9,351
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

Is it possible to use a customized dictionary for a LangChain retriever to search for documents? For example, there is a document talking about "circuit". In my organization, people use the special keyword "A", which means "circuit". However, this "A" == "circuit" relationship is not something the embedding model knows about. Is it possible to let the LangChain retriever know about this "A" == "circuit" relationship, so it can find the document talking about "circuit" without fine-tuning the embedding model?

### Suggestion:

_No response_
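One lightweight way to get this behavior without touching the embedding model is to expand the query with the organization's synonyms before handing it to the retriever. The dictionary contents and the `retriever` object below are placeholders.

```python
# Map internal jargon to the terms actually used in the documents.
SYNONYMS = {"A": "circuit"}


def expand_query(query: str) -> str:
    """Replace known jargon with its canonical term so the embedding
    sees vocabulary that actually appears in the documents."""
    return " ".join(SYNONYMS.get(word, word) for word in query.split())


# retriever is assumed to be any LangChain retriever, e.g. vectorstore.as_retriever().
docs = retriever.get_relevant_documents(expand_query("how does A work"))
```

A fuller solution could wrap this in a custom retriever class, but simple query expansion is often enough and keeps the embedding model untouched.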
Use custom dictionary for retriever
https://api.github.com/repos/langchain-ai/langchain/issues/9350/comments
2
2023-08-17T01:31:41Z
2023-11-23T16:06:00Z
https://github.com/langchain-ai/langchain/issues/9350
1,854,100,686
9,350
[ "hwchase17", "langchain" ]
### Feature request Add the parameter exclude_output_keys to VectorStoreRetrieverMemory and exclude output keys in method _form_documents similar to the case of input keys. This would enable the use of VectorStoreRetrieverMemory as read-only memory. https://github.com/langchain-ai/langchain/blob/2e8733cf54d3cd24cf6927e4d52a212ead88be8e/libs/langchain/langchain/memory/vectorstore.py ### Motivation I am using the ConversationChain to create a ChatBot that impersonates fictional characters based on extensive information that does not fit in the limited context size available. Therefore, I want to store the character information and the chat history from multiple sessions in two Chroma databases. While the ConversationChain populates the chat history database, the character information should be read-only. Using exclude_input_keys resulted in only the LLM outputs being saved in the character information database. Here, it would be beneficial to be able to exclude the output_key from being stored as well. ### Your contribution ``` exclude_output_keys: Sequence[str] = Field(default_factory=tuple) """Output keys to exclude when constructing the document""" ``` ``` def _form_documents( self, inputs: Dict[str, Any], outputs: Dict[str, str] ) -> List[Document]: """Format context from this conversation to buffer.""" # Each document should only include the current turn, not the chat history excluded_inputs = set(self.exclude_input_keys) excluded_inputs.add(self.memory_key) filtered_inputs = {k: v for k, v in inputs.items() if k not in excluded_inputs} excluded_outputs = set(self.exclude_output_keys) filtered_outputs = {k: v for k, v in outputs.items() if k not in excluded_outputs} texts = [ f"{k}: {v}" for k, v in list(filtered_inputs.items()) + list(filtered_outputs.items()) ] page_content = "\n".join(texts) return [Document(page_content=page_content)] ``` I did not check if passing an empty string to Document will work or throw an error.
Add exclude_output_keys to VectorStoreRetrieverMemory
https://api.github.com/repos/langchain-ai/langchain/issues/9347/comments
1
2023-08-17T00:43:51Z
2023-11-23T16:06:05Z
https://github.com/langchain-ai/langchain/issues/9347
1,854,070,492
9,347
[ "hwchase17", "langchain" ]
### Bug

LocalFileStore tries to treat Document as bytes.

```
store = LocalFileStore(get_project_relative_path("doc_store"))
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
retriever = ParentDocumentRetriever(vectorstore=vectorstore, docstore=store,
                                    parent_splitter=parent_splitter, child_splitter=child_splitter)
if embed:
    docs = []
    data_folder = get_project_relative_path("documents")
    for i, file_path in enumerate(data_folder.iterdir()):
        document = TextLoader(str(file_path))
        docs.extend(document.load())
    retriever.add_documents(docs, None)
```

Here is the broken method:

```
def mset(self, key_value_pairs: Sequence[Tuple[str, bytes]]) -> None:
    """Set the values for the given keys.

    Args:
        key_value_pairs: A sequence of key-value pairs.

    Returns:
        None
    """
    for key, value in key_value_pairs:
        full_path = self._get_full_path(key)
        full_path.parent.mkdir(parents=True, exist_ok=True)
        full_path.write_bytes(value)
```

TypeError: memoryview: a bytes-like object is required, not 'Document'

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Create a LocalFileStore. Use a ParentDocumentRetriever.

### Expected behavior

Serialize the documents as bytes.
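For context, `ParentDocumentRetriever` expects a docstore that holds `Document` objects, while `LocalFileStore` only stores raw bytes, which explains the `memoryview` error. A sketch of the combination that works out of the box at this point (in-memory, so not persistent; `vectorstore` and `docs` are assumed to exist as in the snippet above):

```python
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain.text_splitter import RecursiveCharacterTextSplitter

store = InMemoryStore()  # stores Document objects, unlike LocalFileStore (bytes)
retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    parent_splitter=RecursiveCharacterTextSplitter(chunk_size=2000),
    child_splitter=RecursiveCharacterTextSplitter(chunk_size=400),
)
retriever.add_documents(docs)
```

Persisting parent documents on disk would additionally require something that serializes Documents to bytes and back, which `LocalFileStore` does not do by itself.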
Type error in ParentDocumentRetriever using LocalFileStore
https://api.github.com/repos/langchain-ai/langchain/issues/9345/comments
24
2023-08-16T22:36:06Z
2024-07-25T13:19:35Z
https://github.com/langchain-ai/langchain/issues/9345
1,853,988,704
9,345
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

I'm running llama2-13b with the below parameters, and I'm trying to summarize a PDF (linked [here](https://akoustis.com/wp-content/uploads/2022/06/Single-Crystal-AlScN-on-Silicon-XBAW%E2%84%A2RF-Filter-Technology-for-Wide-Bandwidth-High-Frequency-5G-and-Wi-Fi-Applications.pdf)).

```
"properties": {
    "max_new_tokens": 1500,
    "temperature": 0.2,
    "top_p": 0.95,
    "repetition_penalty": 1.15
}
```

I split the document by pages and run load_summarize_chain, but I get an error:

```
Token indices sequence length is longer than the specified maximum sequence length for this model (13243 > 1024). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (1164 > 1024). Running this sequence through the model will result in indexing errors
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (507) from primary and could not load the entire response body.
```

How do I split the PDF document, or which chain/prompt should I use, to summarize large PDFs while respecting the 1024-token sequence limit reported for llama2?

Code:

```
from langchain.document_loaders import PyPDFLoader

loader = PyPDFLoader("Single-Crystal-AlScN-on-Silicon-XBAW™RF-Filter-Technology-for-Wide-Bandwidth-High-Frequency-5G-and-Wi-Fi-Applications.pdf")
pages = loader.load_and_split()
chain = load_summarize_chain(llm, chain_type="map_reduce")
x = chain.run(pages)
```

### Suggestion:

_No response_
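One way to keep every map step under the model's limit (a sketch; the chunk sizes are guesses that would need tuning to the real context window, and the filename is a placeholder) is to split the loaded pages into smaller pieces before running the map_reduce chain:

```python
from langchain.chains.summarize import load_summarize_chain
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

loader = PyPDFLoader("paper.pdf")  # placeholder filename
pages = loader.load()

# Roughly 2000 characters is on the order of 500 tokens, which leaves room
# for the map prompt within a ~1024-token window. Adjust to the actual limit.
splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
docs = splitter.split_documents(pages)

chain = load_summarize_chain(llm, chain_type="map_reduce")  # llm as in the report
summary = chain.run(docs)
```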
How to fix "Token indices sequence length is longer than the specified maximum sequence length for this model"?
https://api.github.com/repos/langchain-ai/langchain/issues/9341/comments
3
2023-08-16T21:33:48Z
2023-12-08T16:05:30Z
https://github.com/langchain-ai/langchain/issues/9341
1,853,934,808
9,341
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. Hi, I want to do the query based on my location. For example, what is the restaurant near me? I have Python code using SerpAPIWrapper and GoogleSearchAPIWrapper but none of the Api works based on location. below is my snippet code. GoogleSearchAPIWrapper: ` latitude = 37.360334 longitude = -121.994694 query = "Restaurant near me?" search = GoogleSearchAPIWrapper(k=1) # k parameter to set the number of results tool = Tool( name="Google Search", description="Search Google for recent results.", func=search.run, ) answer = tool.run(query) print("answer:", answer)` SerpAPIWrapper: ` latitude =37.360334# 37.379279 #37.360334 longitude = -121.994694 #-121.960661 #-121.994694 params = { "google_domain":"google.com", # "engine": "google", # Set parameter to google to use the Google API engine. "type": "search", "engine":"google_maps", "gl": "us", "hl": "en", "ll" : "@{},{},14z".format(latitude,longitude) } search = SerpAPIWrapper(params=params) tools = [ Tool( name = "search", func=search.run, description="Useful for when you need to answer questions about anything to google search" ), ] # require argument : expects to be used with a memory component. memory = ConversationBufferMemory(memory_key="chat_history") llm=OpenAI(temperature=0) agent_chain = initialize_agent(tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory) agent_chain.run("resturant near me ?") ` I want my search based on my location using latitude and longitude. Thanks in advance! ### Suggestion: _No response_
Issue: Search query based on the geoLocation using GoogleSearchAPIWrapper or SerpAPIWrapper
https://api.github.com/repos/langchain-ai/langchain/issues/9330/comments
4
2023-08-16T18:28:37Z
2023-11-23T16:06:10Z
https://github.com/langchain-ai/langchain/issues/9330
1,853,718,640
9,330
[ "hwchase17", "langchain" ]
### Issue with current documentation: I'm attempting to use Meilisearch as a vector store with Chat Models, however I'm a little confused about how to use the following code to send the input from Meilisearch to Chat Models. Below is the code which you gave ``` from langchain.vectorstores import Meilisearch from langchain.embeddings.openai import OpenAIEmbeddings from langchain.documents import Document from langchain.document_loaders.pdf import PyPDFLoader import meilisearch # Assuming you have a list of PDF files pdf_files = ["file1.pdf", "file2.pdf", ...] # Create a PDF loader pdf_loader = PyPDFLoader() # Load the documents from the PDF files pdf_documents = [pdf_loader.load(file) for file in pdf_files] # Create a Meilisearch client client = meilisearch.Client(url='http://127.0.0.1:7700', api_key='***') embeddings = OpenAIEmbeddings() vectorstore = Meilisearch( embedding=embeddings, client=client, index_name='langchain_demo', text_key='text') # Extract the texts from the documents texts = [doc.page_content for doc in pdf_documents] # Add the texts to the vector store vectorstore.add_texts(texts) ``` Below is the code which i wrote initially which is not utilizing the vector store ``` from langchain.chat_models import ChatOpenAI from langchain.prompts import PromptTemplate from langchain.chains import LLMChain from dotenv import load_dotenv from pytesseract import image_to_string from langchain.text_splitter import RecursiveCharacterTextSplitter, Document from PIL import Image from io import BytesIO import pypdfium2 as pdfium import pandas as pd import pytesseract import json import os import requests pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe' load_dotenv() os.environ["OPENAI_API_KEY"] = "sk-H......." class DocumentProcessor: def __init__(self): self.llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613") def convert_pdf_to_images(self, file_path, scale=300 / 72): pdf_file = pdfium.PdfDocument(file_path) page_indices = [i for i in range(len(pdf_file))] renderer = pdf_file.render( pdfium.PdfBitmap.to_pil, page_indices=page_indices, scale=scale, ) final_images = [] for i, image in zip(page_indices, renderer): image_byte_array = BytesIO() image.save(image_byte_array, format='jpeg', optimize=True) image_byte_array = image_byte_array.getvalue() final_images.append(dict({i: image_byte_array})) return final_images def extract_text_from_img(self, list_dict_final_images): image_list = [list(data.values())[0] for data in list_dict_final_images] image_content = [] for index, image_bytes in enumerate(image_list): image = Image.open(BytesIO(image_bytes)) raw_text = str(image_to_string(image)) image_content.append(raw_text) return "\\n".join(image_content) def extract_content_from_url(self, url): images_list = self.convert_pdf_to_images(url) text_with_pytesseract = self.extract_text_from_img(images_list) return text_with_pytesseract def extract_structured_data(self, content): template = """ You are an expert admin people who will extract core information from documents {content} Above is the content; please try to return the abstract and extractive summary """ prompt = PromptTemplate( input_variables=["content"], template=template, ) chain = LLMChain(llm=self.llm, prompt=prompt) text_splitter = RecursiveCharacterTextSplitter( chunk_size=1024, # Maximum size of chunks to return chunk_overlap=50, # Overlap in tokens between chunks ) # Create a Document object for each document documents = [Document(page_content=text) for text in content] # Split the 
documents into chunks chunks = text_splitter.split_documents(documents) results = [chain.run(content=chunk) for chunk in chunks] print(results) return results def process_documents(self, file_paths): results = [] for file_path in file_paths: content = self.extract_content_from_url(file_path) data = self.extract_structured_data(content) if isinstance(data, list): results.extend(data) else: results.append(data) return results # Example usage if __name__ == '__main__': document_processor = DocumentProcessor() uploaded_files_paths = [r'Questionnaire_2021 (1).pdf'] processed_results = document_processor.process_documents(uploaded_files_paths) if len(processed_results) > 0: try: df = pd.DataFrame(processed_results) print("Results:") df.to_excel('output.xlsx', index=False) print(df) except Exception as e: print(f"An error occurred while creating the DataFrame: {e}") ``` Can you please try to update the above code which I wrote with the code which you gave i.e. the code which has the vector store? ### Idea or request for content: _No response_
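A minimal sketch of how the Meilisearch store built above could feed a chat model, assuming `vectorstore` is the instance from the first snippet and using a standard `RetrievalQA` chain; the query string is only an illustration:

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# Reuse the Meilisearch vector store built above as a retriever for the chat model.
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",  # stuff the retrieved chunks into a single prompt
    retriever=vectorstore.as_retriever(search_kwargs={"k": 4}),
    return_source_documents=True,
)

result = qa_chain({"query": "Return the abstract and an extractive summary of the questionnaire"})
print(result["result"])
```

The retriever pulls the most similar chunks out of Meilisearch and the chain stuffs them into the chat model's prompt, so the full PDF text no longer has to be sent to the model in one request.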
How to utilize the vector store for direct text while using Chat Models?
https://api.github.com/repos/langchain-ai/langchain/issues/9326/comments
2
2023-08-16T15:37:35Z
2023-11-22T16:05:59Z
https://github.com/langchain-ai/langchain/issues/9326
1,853,487,280
9,326
[ "hwchase17", "langchain" ]
### Feature request Add support for `max_marginal_relevance_search` to `pgvector` vector stores ### Motivation Would like to be able to do `max_marginal_relevance_search` over `pgvector` vector stores ### Your contribution N/A
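For reference, this is the call shape the request is asking for, matching what stores like FAISS and Chroma already expose; `store` is assumed to be an existing PGVector instance, and at the time of this issue these calls are not implemented for it:

```python
# Desired API, mirroring other vector stores:
docs = store.max_marginal_relevance_search("query text", k=4, fetch_k=20, lambda_mult=0.5)

# Or through a retriever:
retriever = store.as_retriever(search_type="mmr", search_kwargs={"k": 4, "fetch_k": 20})
```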
pgvector support for max_marginal_relevance_search
https://api.github.com/repos/langchain-ai/langchain/issues/9325/comments
1
2023-08-16T15:20:33Z
2023-09-17T21:38:32Z
https://github.com/langchain-ai/langchain/issues/9325
1,853,458,017
9,325
[ "hwchase17", "langchain" ]
### Issue with current documentation: Instead of embeddings, I'm using chat models with RecursiveTextSplitter because they simply return the response without any further justification. The input PDF files that I'm providing have a 36k input length. It takes a long time to return the output when using straight code. Can we store the direct text in any vector database and use it to get faster response? Note: I'm not talking about embeddings, i'm trying to use direct chat models from OpenAI Below is the code which i'm using ``` from langchain.chat_models import ChatOpenAI from langchain.prompts import PromptTemplate from langchain.chains import LLMChain from dotenv import load_dotenv from pytesseract import image_to_string from langchain.text_splitter import RecursiveCharacterTextSplitter, Document from PIL import Image from io import BytesIO import pypdfium2 as pdfium import pandas as pd import pytesseract import json import os import requests pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe' load_dotenv() os.environ["OPENAI_API_KEY"] = "sk-H......." class DocumentProcessor: def __init__(self): self.llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613") def convert_pdf_to_images(self, file_path, scale=300 / 72): pdf_file = pdfium.PdfDocument(file_path) page_indices = [i for i in range(len(pdf_file))] renderer = pdf_file.render( pdfium.PdfBitmap.to_pil, page_indices=page_indices, scale=scale, ) final_images = [] for i, image in zip(page_indices, renderer): image_byte_array = BytesIO() image.save(image_byte_array, format='jpeg', optimize=True) image_byte_array = image_byte_array.getvalue() final_images.append(dict({i: image_byte_array})) return final_images def extract_text_from_img(self, list_dict_final_images): image_list = [list(data.values())[0] for data in list_dict_final_images] image_content = [] for index, image_bytes in enumerate(image_list): image = Image.open(BytesIO(image_bytes)) raw_text = str(image_to_string(image)) image_content.append(raw_text) return "\\n".join(image_content) def extract_content_from_url(self, url): images_list = self.convert_pdf_to_images(url) text_with_pytesseract = self.extract_text_from_img(images_list) return text_with_pytesseract def extract_structured_data(self, content): template = """ You are an expert admin people who will extract core information from documents {content} Above is the content; please try to return the abstract and extractive summary """ prompt = PromptTemplate( input_variables=["content"], template=template, ) chain = LLMChain(llm=self.llm, prompt=prompt) text_splitter = RecursiveCharacterTextSplitter( chunk_size=1024, # Maximum size of chunks to return chunk_overlap=50, # Overlap in tokens between chunks ) # Create a Document object for each document documents = [Document(page_content=text) for text in content] # Split the documents into chunks chunks = text_splitter.split_documents(documents) results = [chain.run(content=chunk) for chunk in chunks] print(results) return results def process_documents(self, file_paths): results = [] for file_path in file_paths: content = self.extract_content_from_url(file_path) data = self.extract_structured_data(content) if isinstance(data, list): results.extend(data) else: results.append(data) return results # Example usage if __name__ == '__main__': document_processor = DocumentProcessor() uploaded_files_paths = [r'Questionnaire_2021 (1).pdf'] processed_results = document_processor.process_documents(uploaded_files_paths) if len(processed_results) > 0: 
try: df = pd.DataFrame(processed_results) print("Results:") df.to_excel('output.xlsx', index=False) print(df) except Exception as e: print(f"An error occurred while creating the DataFrame: {e}") ``` Can anyone please help me with this? ### Idea or request for content: _No response_
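A vector database cannot store "direct text" without embeddings, but the latency problem can be tackled by splitting the OCR output and summarizing it with a map_reduce chain, which keeps every individual request small. A rough sketch, assuming `llm` is the `ChatOpenAI` instance and `extracted_text` is the OCR string produced by the script above (note that `Document` lives in `langchain.schema`, not in `langchain.text_splitter`):

```python
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.schema import Document

# Split the extracted OCR text into Documents, then summarize with map_reduce
# so each chunk stays well under the model's context window.
splitter = RecursiveCharacterTextSplitter(chunk_size=3000, chunk_overlap=200)
docs = [Document(page_content=chunk) for chunk in splitter.split_text(extracted_text)]

chain = load_summarize_chain(llm, chain_type="map_reduce")
summary = chain.run(docs)
```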
Is there any option to store direct text to VectorDB to get faster response?
https://api.github.com/repos/langchain-ai/langchain/issues/9324/comments
9
2023-08-16T15:09:40Z
2023-12-19T00:49:13Z
https://github.com/langchain-ai/langchain/issues/9324
1,853,439,627
9,324
[ "hwchase17", "langchain" ]
### System Info Python 3.10.6 Langchain 0.0.220 ### Who can help? @3 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction When I instantiate the model and chain like this: ``` llm = Bedrock(model_id="anthropic.claude-v1", model_kwargs={'max_tokens_to_sample': 2048, 'temperature': 1.0, 'top_k': 250, 'top_p': 0.999, 'stop_sequences': ['Human:']}, client=bedrock_client) chain = load_qa_chain(llm, chain_type='stuff', verbose=False) ``` and proceed with `chain.run(input_documents=context, question=query)`, I am getting an error when `len(context) > 50`. However, I am passing `max_tokens_to_sample = 2048`. If `len(context) < 50` it works perfectly fine. The error I am getting is: ``` Traceback (most recent call last): File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/lambdas/processQnA/trigger.py", line 59, in <module> main() File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/lambdas/processQnA/trigger.py", line 51, in main response = handle_event(q, None) File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/lambdas/processQnA/handler.py", line 92, in handle_event response = ask_question(embeddings, chain, indexes, question) File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/lambdas/processQnA/utils.py", line 225, in ask_question return chain.run(input_documents=context, question=query) File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 293, in run return self(kwargs, callbacks=callbacks, tags=tags)[_output_key] File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 166, in __call__ raise e File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 160, in __call__ self._call(inputs, run_manager=run_manager) File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py", line 84, in _call output, extra_return_dict = self.combine_docs( File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 87, in combine_docs return self.llm_chain.predict(callbacks=callbacks, **inputs), {} File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 252, in predict return self(kwargs, callbacks=callbacks)[self.output_key] File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 166, in __call__ raise e File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 160, in __call__ self._call(inputs, run_manager=run_manager) File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 92, in _call response = self.generate([inputs], run_manager=run_manager) File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 102, in generate 
return self.llm.generate_prompt( File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/llms/base.py", line 141, in generate_prompt return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs) File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/llms/base.py", line 227, in generate output = self._generate_helper( File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/llms/base.py", line 178, in _generate_helper raise e File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/llms/base.py", line 165, in _generate_helper self._generate( File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/llms/base.py", line 525, in _generate self._call(prompt, stop=stop, run_manager=run_manager, **kwargs) File "/home/jlopezcolmenarejo/Dev/ai-assistant-adopcat-poc/.venv/lib/python3.10/site-packages/langchain/llms/bedrock.py", line 190, in _call raise ValueError(f"Error raised by bedrock service: {e}") ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: The provided inference configurations are invalid ``` I have checked that `bedrock.py/LLMInputOutputAdapter.prepare_input` does NOT run this: `input_body["max_tokens_to_sample"] = 50`. In fact, it just add the prompt. It returns something like: `{'max_tokens_to_sample': 2048, 'temperature': 1.0, 'top_k': 250, 'top_p': 0.999, 'stop_sequences': ['Human:'], 'prompt': "Use the following pieces of context to answer the question at the end. Blah blah blah ...}` ### Expected behavior No errors should be raised.
Inference parameters for Bedrock anthropic model showing problems with max_tokens_for_sample parameter
https://api.github.com/repos/langchain-ai/langchain/issues/9319/comments
7
2023-08-16T14:25:02Z
2023-10-22T04:44:28Z
https://github.com/langchain-ai/langchain/issues/9319
1,853,359,140
9,319
[ "hwchase17", "langchain" ]
### Issue with current documentation: I developed a piece of code that will read data from a pdf file, send it to chat models, and then return the result. What to do if the input length is too long when using chat models? I attempted embeddings, but the solutions are simple and not as good as Chat models, in my opinion. For chat models, I tried using RecursiveTextSplitter, but it didn't work. The code is below. ``` from langchain.chat_models import ChatOpenAI from langchain.prompts import PromptTemplate from langchain.chains import LLMChain from dotenv import load_dotenv from pytesseract import image_to_string from langchain.text_splitter import RecursiveCharacterTextSplitter from PIL import Image from io import BytesIO import pypdfium2 as pdfium import pandas as pd import pytesseract import json import os import requests pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe' load_dotenv() os.environ["OPENAI_API_KEY"] = "sk-H................." class DocumentProcessor: def __init__(self): self.llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613") def convert_pdf_to_images(self, file_path, scale=300 / 72): pdf_file = pdfium.PdfDocument(file_path) page_indices = [i for i in range(len(pdf_file))] renderer = pdf_file.render( pdfium.PdfBitmap.to_pil, page_indices=page_indices, scale=scale, ) final_images = [] for i, image in zip(page_indices, renderer): image_byte_array = BytesIO() image.save(image_byte_array, format='jpeg', optimize=True) image_byte_array = image_byte_array.getvalue() final_images.append(dict({i: image_byte_array})) return final_images def extract_text_from_img(self, list_dict_final_images): image_list = [list(data.values())[0] for data in list_dict_final_images] image_content = [] for index, image_bytes in enumerate(image_list): image = Image.open(BytesIO(image_bytes)) raw_text = str(image_to_string(image)) image_content.append(raw_text) return "\\n".join(image_content) def extract_content_from_url(self, url): images_list = self.convert_pdf_to_images(url) text_with_pytesseract = self.extract_text_from_img(images_list) return text_with_pytesseract def extract_structured_data(self, content): template = """ You are an expert admin people who will extract core information from documents {content} Above is the content; please try to return the abstract and extractive summary """ prompt = PromptTemplate( input_variables=["content"], template=template, ) chain = LLMChain(llm=self.llm, prompt=prompt) # results = chain.run(content=content, data_points=self.data_points) text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200) chunks = text_splitter.split_documents(content) results = [chain.run(content=chunk) for chunk in chunks] print(results) return results def process_documents(self, file_paths): results = [] for file_path in file_paths: content = self.extract_content_from_url(file_path) data = self.extract_structured_data(content) if isinstance(data, list): results.extend(data) else: results.append(data) return results # Example usage if __name__ == '__main__': document_processor = DocumentProcessor() uploaded_files_paths = [r'Questionnaire_2021 (1).pdf'] processed_results = document_processor.process_documents(uploaded_files_paths) if len(processed_results) > 0: try: df = pd.DataFrame(processed_results) print("Results:") df.to_excel('output.xlsx', index=False) print(df) except Exception as e: print(f"An error occurred while creating the DataFrame: {e}") ``` Below is the error for above code ``` Traceback (most recent 
call last): File "Documents\langchain_projects\Invoice Extraction using LLM\testing.py", line 117, in <module> processed_results = document_processor.process_documents(uploaded_files_paths) File "Documents\langchain_projects\Invoice Extraction using LLM\testing.py", line 104, in process_documents data = self.extract_structured_data(content) File "Documents\langchain_projects\Invoice Extraction using LLM\testing.py", line 95, in extract_structured_data chunks = text_splitter.split_documents(content) File "anaconda3\lib\site-packages\langchain\text_splitter.py", line 112, in split_documents texts.append(doc.page_content) AttributeError: 'str' object has no attribute 'page_content' ``` Below is the error if i'm not using RecursiveTextSplitter `openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 36248 tokens. Please reduce the length of the messages.` Could someone possibly explain to me how to apply RecursiveTextSplitter to chat models rather than embeddings? ### Idea or request for content: _No response_
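The traceback comes from calling `split_documents` on a plain string: `split_documents` expects `Document` objects, while `split_text` works directly on strings. A sketch of a fixed `extract_structured_data` body, reusing the prompt from the script above (`llm` is the `ChatOpenAI` instance created there):

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.text_splitter import RecursiveCharacterTextSplitter

prompt = PromptTemplate(
    input_variables=["content"],
    template="You are an expert admin who extracts core information from documents.\n"
             "{content}\n"
             "Return an abstract and an extractive summary of the content above.",
)
chain = LLMChain(llm=llm, prompt=prompt)

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
# `content` is the plain OCR string, so use split_text(), not split_documents()
chunks = splitter.split_text(content)
results = [chain.run(content=chunk) for chunk in chunks]
```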
How to use RecursiveTextSplitter for Chat Models like OpenAI and LLama?
https://api.github.com/repos/langchain-ai/langchain/issues/9316/comments
6
2023-08-16T13:56:22Z
2023-11-26T16:06:54Z
https://github.com/langchain-ai/langchain/issues/9316
1,853,305,541
9,316
[ "hwchase17", "langchain" ]
### System Info langchain version 0.0.266, using python3 ### Who can help? @eyurtsev @ago ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction
```python
from langchain.embeddings import VertexAIEmbeddings
from langchain.vectorstores import PGVector

embeddings = VertexAIEmbeddings()
vectorstore = PGVector(
    collection_name=<collection_name>,
    connection_string=<connection_string>,
    embedding_function=embeddings,
)
vectorstore.delete(ids=[<some_id>])
```
### Expected behavior The vector will be deleted and no error will be raised.
'delete' function is not implemented in PGVector
https://api.github.com/repos/langchain-ai/langchain/issues/9312/comments
3
2023-08-16T13:11:26Z
2023-11-22T16:06:09Z
https://github.com/langchain-ai/langchain/issues/9312
1,853,216,544
9,312
[ "hwchase17", "langchain" ]
### System Info langchain 0.0.266 ### Who can help? @hwchase17 ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction In the SelfQueryRetriever, the code tries to generate a filter and simplifies the question. But if the first step cannot generate the specific filter, the simplified version of the question is still used. This changes the answer completely. It's possible to use `use_original_query` to force the usage of the original query. But I think that if it's possible to generate a filter, it's a good idea to use the simplified version of the question; otherwise it's necessary to use the original question, because the filter is no longer involved. Ask a question with a filter, but the filter is not associated with the `metadata_field_info`. The calculated filter is empty and the simplified version is used. ### Expected behavior I think if it's possible to generate a filter, it's a good idea to use the simplified version of the question. Else, it's *necessary* to use the original question, because the filter is no longer involved.
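For completeness, the workaround mentioned above looks roughly like this; the variable names follow the standard self-query example and are placeholders:

```python
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    use_original_query=True,  # always pass the user's original wording to the vector store
    verbose=True,
)
```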
Use original_question if SelfQueryRetriever if the filter is empty
https://api.github.com/repos/langchain-ai/langchain/issues/9310/comments
5
2023-08-16T12:51:20Z
2024-03-26T16:05:31Z
https://github.com/langchain-ai/langchain/issues/9310
1,853,181,280
9,310
[ "hwchase17", "langchain" ]
### System Info Latest Python and LangChain version. ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Go to the documentation: https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference 2. Test the `Streaming` example: ``` from langchain.llms import HuggingFaceTextGenInference from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler llm = HuggingFaceTextGenInference( inference_server_url="http://localhost:8010/", max_new_tokens=512, top_k=10, top_p=0.95, typical_p=0.95, temperature=0.01, repetition_penalty=1.03, stream=True ) llm("What did foo say about bar?", callbacks=[StreamingStdOutCallbackHandler()]) ``` Throws an error: ``` pydantic.error_wrappers.ValidationError: 1 validation error for HuggingFaceTextGenInference stream extra fields not permitted (type=value_error.extra) ``` But `streaming=True` worked. So maybe it's a typo in the documentation. ### Expected behavior To get working the example.
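The corrected snippet, using the field name the class actually accepts (`streaming`), looks like this:

```python
from langchain.llms import HuggingFaceTextGenInference
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8010/",
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.01,
    repetition_penalty=1.03,
    streaming=True,  # the field is `streaming`, not `stream`
)
llm("What did foo say about bar?", callbacks=[StreamingStdOutCallbackHandler()])
```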
Huggingface TextGen Inference streaming example not working
https://api.github.com/repos/langchain-ai/langchain/issues/9308/comments
2
2023-08-16T12:47:24Z
2023-08-16T13:15:28Z
https://github.com/langchain-ai/langchain/issues/9308
1,853,173,863
9,308
[ "hwchase17", "langchain" ]
### System Info LangChain `v0.0.264` ### Who can help? @hwchase17 @eyurtsev ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create an agent using `create_csv_agent`. 2. Add the attached CSV 3. Run a query: `What is my MONTHLY BUDGET SUMMARY` [Personalbudget.csv](https://github.com/langchain-ai/langchain/files/12359540/Personalbudget.csv) Trace: ``` raise OutputParserException( langchain.schema.output_parser.OutputParserException: Could not parse tool input: {'name': 'python', 'arguments': "summary_row = df[df['Personal Monthly Budget'] == 'MONTHLY BUDGET SUMMARY']\nsummary_row"} because the `arguments` is not valid JSON. ``` ### Expected behavior The arguments passed to the parser should be always be valid JSON.
`create_csv_agent` fails due to arguments from pandas not being valid JSON.
https://api.github.com/repos/langchain-ai/langchain/issues/9307/comments
2
2023-08-16T12:46:43Z
2023-11-22T16:06:14Z
https://github.com/langchain-ai/langchain/issues/9307
1,853,172,407
9,307
[ "hwchase17", "langchain" ]
### System Info Python 3.10.12 Langchain 0.0.266 ### Who can help? @eyurtsev ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Import any loader or just directly use the RecursiveCharacterTextSplitter define chunk size to lets say 3950 Get a text that is way smaller, for example 1k tokens. run the text splitter and recieve multiple documents. ``` from langchain.text_splitter import SentenceTransformersTokenTextSplitter from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain.document_loaders.youtube import YoutubeLoader def get_token_count(text): splitter = SentenceTransformersTokenTextSplitter(chunk_overlap=0) text_token_count = splitter.count_tokens(text=text.replace("\n"," ")) return text_token_count text_splitter = RecursiveCharacterTextSplitter(separators=["\n\n", "\n", " "], chunk_size=3950, chunk_overlap=100) def summarize_youtube_video(youtube_url: str) -> str: """ Useful to get the summary of a youtube video. Applies if the user sends a youtube link. """ id_input = youtube_url.split("=")[1] splitter = text_splitter loader = YoutubeLoader(id_input) docs = loader.load_and_split(splitter) return docs docs = summarize_youtube_video("https://www.youtube.com/watch?v=vXIkc40UiH0") for item in docs: print(get_token_count(item.page_content)) ``` ![screenshot_20230816135005](https://github.com/langchain-ai/langchain/assets/72946124/249cccf3-1209-4929-b62d-cf8acaed4ff8) ### Expected behavior If the text is smaller than the chunk size, only one document should get returned.
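One likely explanation rather than a fix: `chunk_size` is measured with `length_function`, which defaults to `len` (characters), so a roughly 1k-token transcript is about 4-5k characters and exceeds a 3950-character budget. If the intent is to budget in tokens, a token counter can be passed in; a sketch reusing the `get_token_count` helper defined above:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# chunk_size is measured by length_function (defaults to len, i.e. characters).
# To budget in tokens instead, pass a token counter:
text_splitter = RecursiveCharacterTextSplitter(
    separators=["\n\n", "\n", " "],
    chunk_size=3950,
    chunk_overlap=100,
    length_function=get_token_count,  # helper from the snippet above
)
```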
RecursiveCharacterTextSplitter splits even if text is smaller than chunk size
https://api.github.com/repos/langchain-ai/langchain/issues/9305/comments
3
2023-08-16T11:51:15Z
2023-10-11T16:46:11Z
https://github.com/langchain-ai/langchain/issues/9305
1,853,080,158
9,305
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. How to resolve [This model's maximum context length is 4097 tokens] ? If Graph DB have many nodes, can not send those tokens to OpenAI by one time. ### Suggestion: _No response_
Graph DB QA chain
https://api.github.com/repos/langchain-ai/langchain/issues/9303/comments
6
2023-08-16T09:37:54Z
2024-01-02T21:17:04Z
https://github.com/langchain-ai/langchain/issues/9303
1,852,872,362
9,303
[ "hwchase17", "langchain" ]
### System Info langchain/0.0.258, Python 3.10.10 ### Who can help? @hw @issam9 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction According the documentation/tutorials a vectorstore is created by these steps 1. Create a list of documents by some documents loaders 2. Use a text splitter for splitting the documents into smaller chunks 3. Pass the resulting list to the vectorstore See the tutorial https://learn.deeplearning.ai/langchain-chat-with-your-data/lesson/4/vectorstores-and-embedding In case of the HuggingFaceEmbeddings the computed embeddings of the chunks of the documents can influence each other. This caused by the fact that the HuggingFaceEmbeddings is making a single call of the 'encode' method of the SentenceTransformer class, see https://github.com/langchain-ai/langchain/blob/1d55141c5016a2e197d3eed2844d55460d207801/libs/langchain/langchain/embeddings/huggingface.py#L91 The SentenceTransformer has a pooling layer as last layer, with 'pooling_mode_cls_token' or 'pooling_mode_mean'. And in the 'encode' method the pooling layer is not cleared during looping through the sentences, see https://github.com/UKPLab/sentence-transformers/blob/a458ce79c40fef93d5ecc66931b446ea65fdd017/sentence_transformers/SentenceTransformer.py#L159 By that the embedding of the chunks are influencing each other by the pooling layer. You can reproduce it with the following codes snipplet: def print_document_embeddings(splits: list[Document], document_name: str): model_name = 'sentence-transformers/multi-qa-mpnet-base-dot-v1' embedding = HuggingFaceEmbeddings(model_name=model_name, model_kwargs={}, encode_kwargs={}) vectordb = Chroma.from_documents(splits, embedding) dump = vectordb.get(include=['embeddings', 'metadatas', 'documents']) for metadata, document, vector in zip(dump['metadatas'], dump['documents'], dump['embeddings']): if metadata['source'] == document_name: print(f'Embedding: {vector[:3]}...') print(f'Document: {document}') print('-' * 10) vectordb.delete() doc1 = Document(page_content='The embedding vectors of a document must not depend on any other documents.', metadata={'source': 'document 1'}) doc2 = Document(page_content='The grass is green.', metadata={'source': 'document 2'}) doc3 = Document(page_content=' Any text larger the document 1.'*10, metadata={'source': 'document 3'}) print('Vector store with [doc1,doc2]') print_document_embeddings([doc1,doc2],'document 1') print('Vector store with [doc1,doc3]') print_document_embeddings([doc1, doc3],'document 1') In the output you can see that the embeddings is not the same (the changes are small because document 3 is small, but they can be arbitrary large) Vector store with [doc1,doc2] Embedding: [0.03044097125530243, -0.5951688289642334, -0.11154567450284958]... Document: The embedding vectors of a document must not depend on any other documents. ---------- Vector store with [doc1,doc3] Embedding: [0.030441032722592354, -0.5951685905456543, -0.11154570430517197]... Document: The embedding vectors of a document must not depend on any other documents. 
### Expected behavior The expected behavior is that the embedding of a document is always the same; it cannot be influenced by any other documents in the vectorstore.
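A quick way to check whether the differences come from batch padding rather than from the pooling layer carrying state between sentences is to force single-item batches; `encode_kwargs` is forwarded to `SentenceTransformer.encode`, so a sketch might look like this:

```python
from langchain.embeddings import HuggingFaceEmbeddings
import numpy as np

model_name = "sentence-transformers/multi-qa-mpnet-base-dot-v1"

# Batch size 1: every document is encoded alone, so padding in a shared
# batch cannot affect the numbers.
emb_single = HuggingFaceEmbeddings(model_name=model_name, encode_kwargs={"batch_size": 1})
# Default batching, as used when building the vector store.
emb_batched = HuggingFaceEmbeddings(model_name=model_name)

texts = [
    "The embedding vectors of a document must not depend on any other documents.",
    "The grass is green.",
]
a = np.array(emb_single.embed_documents(texts))
b = np.array(emb_batched.embed_documents(texts))
print(np.abs(a - b).max())  # differences, if any, on the order of float32 noise
```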
With HuggingFaceEmbeddings, embedding of individual documents in the vectorstore can influence each other
https://api.github.com/repos/langchain-ai/langchain/issues/9301/comments
4
2023-08-16T09:19:03Z
2024-03-13T19:59:19Z
https://github.com/langchain-ai/langchain/issues/9301
1,852,841,614
9,301
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. Hello, I have a question that I didn't find answered in the official documentation. I have another LLM that is exposed to me through a calling API. How can I use it the same way as OpenAI, for example in https://python.langchain.com/docs/use_cases/sql from the official docs? I want to replace OpenAI with the API provided by this other service; what should I do? ### Suggestion: _No response_
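One common approach is to wrap the remote API in a custom LLM subclass and then pass the instance anywhere the docs use `OpenAI(...)`. A rough sketch; the endpoint URL, authentication header, and response shape are hypothetical placeholders that must be adapted to the actual provider:

```python
from typing import Any, List, Mapping, Optional

import requests
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM


class MyRemoteLLM(LLM):
    """Wraps a third-party completion API behind LangChain's LLM interface."""

    endpoint_url: str = "https://example.com/v1/generate"  # hypothetical endpoint
    api_key: str = ""

    @property
    def _llm_type(self) -> str:
        return "my-remote-llm"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        resp = requests.post(
            self.endpoint_url,
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"prompt": prompt, "stop": stop},
            timeout=60,
        )
        resp.raise_for_status()
        # Adjust this to whatever shape the provider actually returns.
        return resp.json()["text"]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {"endpoint_url": self.endpoint_url}
```

An instance such as `MyRemoteLLM(api_key="...")` can then be handed to chains like the SQL example in place of the OpenAI model.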
other llm api
https://api.github.com/repos/langchain-ai/langchain/issues/9299/comments
4
2023-08-16T08:56:57Z
2023-11-27T16:07:27Z
https://github.com/langchain-ai/langchain/issues/9299
1,852,805,480
9,299
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for default-text-embedding-ada-002 in organization org-uIkxFSWUeCDpCsfzD5XWYLZ7 on tokens per min. Limit: 1000000 / min. Current: 837303 / min. Contact us through our help center at help.openai.com if you continue to have issues. How can I fix this? ### Suggestion: _No response_
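A sketch of the usual mitigations, assuming `texts` is the list of strings being indexed: shrink the per-request batch via `chunk_size` and throttle between batches so the tokens-per-minute limit is not hit in a single burst.

```python
import time
from langchain.embeddings import OpenAIEmbeddings

# Smaller request batches spread token usage out instead of bursting close
# to the 1,000,000 tokens/min organization limit.
embeddings = OpenAIEmbeddings(chunk_size=200)  # default is 1000 texts per request

# When indexing a large corpus, embed it in slices and pause between them.
vectors = []
for i in range(0, len(texts), 500):
    vectors.extend(embeddings.embed_documents(texts[i : i + 500]))
    time.sleep(1)  # crude throttle; tune to your tokens-per-minute budget
```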
Issue: embedding rate limit error
https://api.github.com/repos/langchain-ai/langchain/issues/9298/comments
5
2023-08-16T08:42:15Z
2024-03-13T19:59:21Z
https://github.com/langchain-ai/langchain/issues/9298
1,852,782,550
9,298
[ "hwchase17", "langchain" ]
### System Info When I run a vector search in Azure Cognitive Search using AzureSearch it fails saying, "The 'value' property of the vector query can't be null or an empty array." (full error at the bottom) My code hasn't changed from last week when it used to work. I've got version 11.4.0b6 of azure-search-documents installed. I suspect that Cognitive Search has changed its signature or implementation but the Langchain connection stuff hasn't been updated. Someone else has reported the same error message but the workaround doesn't work for me (https://github.com/langchain-ai/langchain/issues/7841). The following is a simplified version of the code which I got from the samples at https://python.langchain.com/docs/integrations/vectorstores/azuresearch. This also fails with the same error. `import os from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores.azuresearch import AzureSearch os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = '<service_name>' os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] = '<index_name>' os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = '<key>' os.environ["OPENAI_API_KEY"] = '<key2>' model = 'text-embedding-ada-002' embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1) index_name: str = "website" vector_store: AzureSearch = AzureSearch( azure_search_endpoint='https://<service_name>.search.windows.net', azure_search_key='<key>', index_name=index_name, embedding_function=embeddings.embed_query, ) docs = vector_store.hybrid_search( query="Test query", k=3 ) print(docs[0].page_content) ` The full error is: Exception has occurred: HttpResponseError (InvalidRequestParameter) The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ <vector contents> ] } }' Parameter name: vector Code: InvalidRequestParameter Message: The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ <vector contents> ] } }' Parameter name: vector Exception Details: (InvalidVectorQuery) The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ <vector contents> ] } }' Code: InvalidVectorQuery Message: The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ <vector contents> ] } }' File "C:\Work\AI Search\SearchCognitiveSearchScratch.py", line 22, in <module> docs = vector_store.hybrid_search( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ azure.core.exceptions.HttpResponseError: (InvalidRequestParameter) The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ <vector contents> ] } }' Parameter name: vector Code: InvalidRequestParameter Message: The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ <vector contents> ] } }' Parameter name: vector Exception Details: (InvalidVectorQuery) The 'value' property of the vector query can't be null or an empty array. 
Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ <vector contents> ] } }' Code: InvalidVectorQuery Message: The 'value' property of the vector query can't be null or an empty array. Make sure to enclose the vector within a "value" property: '{"vector": { "value": [ <vector contents> ] } }' ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Run: pip install azure-search-documents==11.4.0b6 pip install azure-identity Run (replacing with the keys, service names, index names and service endpoint): import os from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores.azuresearch import AzureSearch os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = '<service_name>' os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] = '<index_name>' os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = '<key>' os.environ["OPENAI_API_KEY"] = '<key2>' model = 'text-embedding-ada-002' embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1) index_name: str = "website" vector_store: AzureSearch = AzureSearch( azure_search_endpoint='https://<service_name>.search.windows.net', azure_search_key='<key>', index_name=index_name, embedding_function=embeddings.embed_query, ) docs = vector_store.hybrid_search( query="Test query", k=3 ) print(docs[0].page_content) ### Expected behavior Return documents from the index.
Error running vector search in Azure Cognitive Search - The 'value' property of the vector query can't be null or an empty array.
https://api.github.com/repos/langchain-ai/langchain/issues/9297/comments
2
2023-08-16T08:13:32Z
2023-08-18T00:16:12Z
https://github.com/langchain-ai/langchain/issues/9297
1,852,738,816
9,297
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. In LangChain 0.0.240, the `ApifyWrapper` class was removed. This caused a breaking change and broke any code using this class. It was deleted in this PR https://github.com/langchain-ai/langchain/pull/8106 @hwchase17 Could you please outline the reason it was removed? Other issues related to this one: https://github.com/langchain-ai/langchain/issues/8201 and https://github.com/langchain-ai/langchain/issues/8307 ### Suggestion: I'd suggest adding the integration back so that code written using it will continue to work. I'm offering to take ownership of the Apify-related code, so if any issue arises, you can just tag me and I'll resolve it.
Issue: `ApifyWrapper` was removed from the codebase breaking user's code
https://api.github.com/repos/langchain-ai/langchain/issues/9294/comments
3
2023-08-16T07:20:48Z
2023-08-31T23:00:59Z
https://github.com/langchain-ai/langchain/issues/9294
1,852,660,023
9,294
[ "hwchase17", "langchain" ]
### Feature request It would be nice to have a fake vectorstore to make testing retrieval chains easier. ### Motivation Testing retrieval chains. ### Your contribution Yes, I'm happy to help.
Fake vectorstore
https://api.github.com/repos/langchain-ai/langchain/issues/9292/comments
2
2023-08-16T05:48:38Z
2023-11-22T16:06:29Z
https://github.com/langchain-ai/langchain/issues/9292
1,852,555,549
9,292
[ "hwchase17", "langchain" ]
### System Info 0.0.247 ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [x] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction run ``` messages = [] messages.append(SystemMessage(content="abc1")) json.dumps(messages, ensure_ascii=False) ``` ``` File "C:\Users\64478\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 238, in dumps **kw).encode(obj) File "C:\Users\64478\AppData\Local\Programs\Python\Python310\lib\json\encoder.py", line 199, in encode chunks = self.iterencode(o, _one_shot=True) File "C:\Users\64478\AppData\Local\Programs\Python\Python310\lib\json\encoder.py", line 257, in iterencode return _iterencode(o, 0) File "C:\Users\64478\AppData\Local\Programs\Python\Python310\lib\json\encoder.py", line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type SystemMessage is not JSON serializable ``` ### Expected behavior I think this might not be a bug But Would be best if JSON output is (since I am using openai): ``` {"role": "system", "message": "abc1你好"} ``` How can I do this ? ---------- What ai says is not as expected: ``` from langchain.load.dump import dumps message1 = SystemMessage(content="abc1你好") message2 = HumanMessage(content="123") messages = [message1, message1] print(dumps(messages)) ``` is printing: ```json [{"lc": 1, "type": "constructor", "id": ["langchain", "schema", "messages", "SystemMessage"], "kwargs": {"content": "abc1\u4f60\u597d"}}, {"lc": 1, "type": "constructor", "id": ["langchain", "schema", "messages", "SystemMessage"], "kwargs": {"content": "abc1\u4f60\u597d"}}] ``` And how can I use ensure_ascii with langchain.load.dump ?
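A sketch of two ways to get JSON out of message objects without the serialization error, using helpers that ship with `langchain.schema`; the OpenAI-style mapping at the end is a manual convention, not a built-in:

```python
import json
from langchain.schema import HumanMessage, SystemMessage, messages_to_dict

messages = [SystemMessage(content="abc1你好"), HumanMessage(content="123")]

# Built-in, round-trippable format: [{"type": "system", "data": {"content": ...}}, ...]
print(json.dumps(messages_to_dict(messages), ensure_ascii=False))

# For the OpenAI chat format, map the roles yourself:
role_map = {"system": "system", "human": "user", "ai": "assistant"}
openai_style = [{"role": role_map[m.type], "content": m.content} for m in messages]
print(json.dumps(openai_style, ensure_ascii=False))
```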
Object of type SystemMessage is not JSON serializable
https://api.github.com/repos/langchain-ai/langchain/issues/9288/comments
4
2023-08-16T03:59:44Z
2024-02-13T16:13:58Z
https://github.com/langchain-ai/langchain/issues/9288
1,852,465,014
9,288
[ "hwchase17", "langchain" ]
### System Info LangChain 0.0.265, Mac M2 Pro Hardware. Python 3.10.0 ### Who can help? @hwchase17 @ago ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction llm = HuggingFaceEndpoint(endpoint_url=" task = "summarization",huggingfacehub_api_token vectorstore_multi_qa = Pinecone(index, embedding_multi_qa.embed_query, "text") qa_chain = RetrievalQAWithSourcesChain.from_llm( llm=llm, #chain_type="stuff", retriever=vectorstore_multi_qa.as_retriever(), return_source_documents=True) query = "What is Blockchain?" # Send question as a query to qa chain result = qa_chain({"question": query}) ### Expected behavior Return something
KeyError: 'summary_text' when trying HuggingFaceEndpoint with task = "summarization" even when the VALID_TASKS include it
https://api.github.com/repos/langchain-ai/langchain/issues/9286/comments
2
2023-08-16T03:24:56Z
2023-11-22T16:06:34Z
https://github.com/langchain-ai/langchain/issues/9286
1,852,442,074
9,286
[ "hwchase17", "langchain" ]
### System Info Name: langchain Version: 0.0.265 Python 3.10.7 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction prompt=st.text_input("enter search key") llm=OpenAI(temperature=0.9) script_memory=ConversationBufferMemory(input_key='title', memory_key='history') script_template=PromptTemplate( input_variables=['title', 'wikiepedia_research'], template='Write me Youtube video script based on this TITLE: {title} while leveraging this wikipedia research: {wikiepedia_research}' ) script_chain=LLMChain(llm=llm, prompt=script_template, verbose=True, output_key='script',memory=script_memory) wiki = WikipediaAPIWrapper() wiki_research = wiki.run(prompt) # key should be wikipedia_research script = script_chain.run(title=title, wikiepedia_research=wiki_research) ![github](https://github.com/langchain-ai/langchain/assets/47233790/490f2da5-7ab6-4490-ac84-056bb902587b) ### Expected behavior script = script_chain.run(title=title, wikipedia_research=wiki_research)
Typo in Variable Name in script_chain.run()
https://api.github.com/repos/langchain-ai/langchain/issues/9285/comments
1
2023-08-16T02:40:14Z
2023-11-22T16:06:39Z
https://github.com/langchain-ai/langchain/issues/9285
1,852,412,897
9,285
[ "hwchase17", "langchain" ]
https://github.com/langchain-ai/langchain/blob/e986afa13a1de73f403eebe05bd4b25781c12788/libs/langchain/langchain/chains/combine_documents/stuff.py#L96C76-L96C76 If values["document_variable_name"] happens to be equal to one of the variable names in llm_chain_variables, the if condition check would fail, skipping the error throw.
If values["document_variable_name"] happens to be equal to one of the variable names in llm_chain_variables, the if condition check would fail, skipping the error throw.
https://api.github.com/repos/langchain-ai/langchain/issues/9284/comments
1
2023-08-16T02:16:41Z
2023-08-16T02:19:58Z
https://github.com/langchain-ai/langchain/issues/9284
1,852,397,983
9,284
[ "hwchase17", "langchain" ]
### Issue with current documentation: See this page: https://python.langchain.com/docs/use_cases/multi_modal/image_agent ![image](https://github.com/langchain-ai/langchain/assets/2256422/4847b842-8b44-471e-8bda-4181ca6738da) The `OpenAI` class imported via `from langchain import OpenAI` was not extracted into the auto-generated `API Reference:` section. My guess is that this happened because the `langchain` import path doesn't contain a `.`; other classes imported from dotted namespaces were extracted. Another example: https://python.langchain.com/docs/use_cases/more/code_writing/llm_math ### Idea or request for content: _No response_
DOC: missed items in the `API Reference:` auto-generated section
https://api.github.com/repos/langchain-ai/langchain/issues/9282/comments
2
2023-08-16T00:40:19Z
2023-11-15T16:41:49Z
https://github.com/langchain-ai/langchain/issues/9282
1,852,329,561
9,282
[ "hwchase17", "langchain" ]
### System Info I'm using AWS Sagemaker Jumpstart model for Llama2 13b: meta-textgeneration-llama-2-13b-f On running a Langchain summarize chain with chain_type="map_reduce" I get the below error. Other chain types (refine, stuff) work without issues. I do not have access to https://huggingface.co/ from my environment. Is there a way to set the gpt2 tokenizer in a local dir? ``` parameters = { "properties": { "min_length": 100, "max_length": 1024, "do_sample": True, "top_p": 0.9, "repetition_penalty": 1.03, "temperature": 0.8 } } class ContentHandler(LLMContentHandler): content_type = "application/json" accepts = "application/json" def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes: input_str = json.dumps({"inputs": prompt, **model_kwargs}) print(input_str) return input_str.encode("utf-8") def transform_output(self, output: bytes) -> str: response_json = json.loads(output.read().decode("utf-8")) return response_json[0]["generation"]["content"] content_handler = ContentHandler() endpoint_name='xxxxxxxxxxxxxxxxxx' llm=SagemakerEndpoint( endpoint_name=endpoint_name, region_name="us-east-1", model_kwargs=parameters, content_handler=content_handler, endpoint_kwargs={"CustomAttributes": 'accept_eula=true'} ) chain = load_summarize_chain(llm, chain_type="map_reduce") chain.run(docs) ``` Error: ``` File /usr/local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:1788, in PreTrainedTokenizerBase.from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1782 logger.info( 1783 f"Can't load following files from cache: {unresolved_files} and cannot check if these " 1784 "files are necessary for the tokenizer to operate." 1785 ) 1787 if all(full_file_name is None for full_file_name in resolved_vocab_files.values()): -> 1788 raise EnvironmentError( 1789 f"Can't load tokenizer for '{pretrained_model_name_or_path}'. If you were trying to load it from " 1790 "'https://huggingface.co/models', make sure you don't have a local directory with the same name. " 1791 f"Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory " 1792 f"containing all relevant files for a {cls.__name__} tokenizer." 1793 ) 1795 for file_id, file_path in vocab_files.items(): 1796 if file_id not in resolved_vocab_files: OSError: Can't load tokenizer for 'gpt2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'gpt2' is the correct path to a directory containing all relevant files for a GPT2TokenizerFast tokenizer. ``` ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Provided above ### Expected behavior Provided above
How do fix GPT2 Tokenizer error in Langchain map_reduce (LLama2)?
https://api.github.com/repos/langchain-ai/langchain/issues/9273/comments
6
2023-08-15T21:01:48Z
2024-01-01T18:11:07Z
https://github.com/langchain-ai/langchain/issues/9273
1,852,123,223
9,273
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. llm = ChatOpenAI(temperature=0, model_name = args.ModelName, verbose = True, streaming = True, callbacks = [MyCallbackHandler(new_payload)]) class StaticSearchTool(BaseTool): name = "Search_QA_System" description = "Use this tool to answer cuurent events # return_direct = True def _run( self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None ) -> str: """Use the tool.""" return qa.run(query) async def _arun( self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None ) -> str: """Use the tool asynchronously.""" return await qa.arun(query) def define_search_tool(): search = [StaticSearchTool()] return search agent_executor = initialize_agent( define_search_tool(), llm, system_message = system_message, extra_prompt_messages = [MessagesPlaceholder(variable_name="memory")], agent_instructions = "Analyse the query closely and decide the query intend. If the query requires internet to search than use google-serper to answer the query. Try to invoke `google-serper` most of the times", agent=AgentType.OPENAI_FUNCTIONS, agent_kwargs= agent_kwargs, verbose = True, memory = memory, handle_parsing_errors=True, ) In this code I need to stream the output of the StaticSearchTool() not the output of the agent. How can we do this ### Suggestion: _No response_
Stream the response of the custom tools in the agent
https://api.github.com/repos/langchain-ai/langchain/issues/9271/comments
3
2023-08-15T20:48:35Z
2023-11-22T16:06:44Z
https://github.com/langchain-ai/langchain/issues/9271
1,852,108,322
9,271
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I have an issue when trying to set 'gpt-3.5-turbo' as a model to create embeddings. When using the 'gpt-3.5-turbo' to create LLM, everything works fine: `llm = OpenAI(model_name='gpt-3.5-turbo', temperature=0, openai_api_key=OPENAI_API_KEY, max_tokens=512)` Also creating embeddings without specifying a model works fine: `embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)` But when I try to use 'gpt-3.5-turbo' for embeddings, I get the following error: `embeddings = OpenAIEmbeddings(model='gpt-3.5-turbo', openai_api_key=OPENAI_API_KEY)` Error: _openai.error.PermissionError: You are not allowed to generate embeddings from this model_ This seems weird to me since I have just used the same model to create LLM. So I have access to it and should be able to use it. Unless creation of embeddings and LLM differs somehow. Can somebody please advise? ### Suggestion: _No response_
Issue: openai.error.PermissionError: You are not allowed to generate embeddings from this model
https://api.github.com/repos/langchain-ai/langchain/issues/9270/comments
9
2023-08-15T20:31:54Z
2024-05-17T14:38:10Z
https://github.com/langchain-ai/langchain/issues/9270
1,852,086,899
9,270
[ "hwchase17", "langchain" ]
I am trying to create a `ConversationalRetrievalChain` with memory, `return_source_document=True` and a custom retriever which returns content and url of the document. I am able to generate the right response when I call the chain for the first time. But when I call it again with memory I am getting error Missing some input keys: {'context'} Here is the trace of the error. ``` Missing some input keys: {'context'} --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[470], line 1 ----> 1 contracts_chain("How can I apply for a TPA?") File /mnt/xarfuse/uid-560402/653248bf-seed-nspid4026531836_cgpid28389965-ns-4026531840/langchain/chains/base.py:258, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info) 256 except (KeyboardInterrupt, Exception) as e: 257 run_manager.on_chain_error(e) --> 258 raise e 259 run_manager.on_chain_end(outputs) 260 final_outputs: Dict[str, Any] = self.prep_outputs( 261 inputs, outputs, return_only_outputs 262 ) File /mnt/xarfuse/uid-560402/653248bf-seed-nspid4026531836_cgpid28389965-ns-4026531840/langchain/chains/base.py:252, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info) 246 run_manager = callback_manager.on_chain_start( 247 dumpd(self), 248 inputs, 249 ) 250 try: 251 outputs = ( --> 252 self._call(inputs, run_manager=run_manager) 253 if new_arg_supported 254 else self._call(inputs) 255 ) 256 except (KeyboardInterrupt, Exception) as e: 257 run_manager.on_chain_error(e) File /mnt/xarfuse/uid-560402/653248bf-seed-nspid4026531836_cgpid28389965-ns-4026531840/langchain/chains/conversational_retrieval/base.py:126, in BaseConversationalRetrievalChain._call(self, inputs, run_manager) 124 if chat_history_str: 125 callbacks = _run_manager.get_child() --> 126 new_question = self.question_generator.run( 127 question=question, chat_history=chat_history_str, callbacks=callbacks 128 ) 129 else: 130 new_question = question File /mnt/xarfuse/uid-560402/653248bf-seed-nspid4026531836_cgpid28389965-ns-4026531840/langchain/chains/base.py:456, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs) 451 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[ 452 _output_key 453 ] 455 if kwargs and not args: --> 456 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[ 457 _output_key 458 ] 460 if not kwargs and not args: 461 raise ValueError( 462 "`run` supported with either positional arguments or keyword arguments," 463 " but none were provided." 464 ) File /mnt/xarfuse/uid-560402/653248bf-seed-nspid4026531836_cgpid28389965-ns-4026531840/langchain/chains/base.py:235, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info) 200 def __call__( 201 self, 202 inputs: Union[Dict[str, Any], Any], (...) 208 include_run_info: bool = False, 209 ) -> Dict[str, Any]: 210 """Execute the chain. 211 212 Args: (...) 233 `Chain.output_keys`. 234 """ --> 235 inputs = self.prep_inputs(inputs) 236 callback_manager = CallbackManager.configure( 237 callbacks, 238 self.callbacks, (...) 
243 self.metadata, 244 ) 245 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager") File /mnt/xarfuse/uid-560402/653248bf-seed-nspid4026531836_cgpid28389965-ns-4026531840/langchain/chains/base.py:389, in Chain.prep_inputs(self, inputs) 387 external_context = self.memory.load_memory_variables(inputs) 388 inputs = dict(inputs, **external_context) --> 389 self._validate_inputs(inputs) 390 return inputs File /mnt/xarfuse/uid-560402/653248bf-seed-nspid4026531836_cgpid28389965-ns-4026531840/langchain/chains/base.py:147, in Chain._validate_inputs(self, inputs) 145 missing_keys = set(self.input_keys).difference(inputs) 146 if missing_keys: --> 147 raise ValueError(f"Missing some input keys: {missing_keys}") ValueError: Missing some input keys: {'context'} ``` Here is my code ``` memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True, output_key="answer", input_key="question") llama2_chat_prompt = """ <s>[INST] <<SYS>> You are a helpful assistant for tech company. Answer question using the following conversation history and context Conversation History: {chat_history} Context: {context} <</SYS>> Question: {question} Answer in Markdown: <<SYS>> <</SYS>> [/INST] """ prompt = PromptTemplate( input_variables=["question", "context", "chat_history"], output_parser=None, partial_variables={}, template=llama2_chat_prompt, template_format="f-string", validate_template=False, verbose=verbose, ) llm_chain = LLMChain(llm=llm, prompt=prompt, verbose=verbose) doc_input_variables = ["page_content", "src"] document_prompt = PromptTemplate( input_variables=doc_input_variables, template="Content: {page_content}\nSource: {src}", ) combine_documents_chain = StuffDocumentsChain( llm_chain=llm_chain, document_variable_name="context", document_prompt=document_prompt, verbose=verbose, ) contracts_chain = ConversationalRetrievalChain( retriever=my_custom_retriver, memory=memory, question_generator=llm_chain, combine_docs_chain=combine_documents_chain, return_source_documents=False, verbose=verbose, ) ``` resp = contracts_chain("Can you help me?") ## runs fine resp1 = contracts_chain("My question is about weather") ## gives error.
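A likely cause: the same `llm_chain` (whose prompt needs `question`, `context`, and `chat_history`) is also passed as `question_generator`. On the second call the chat history is non-empty, so the chain invokes the question generator with only `question` and `chat_history`, and the missing `context` key surfaces. A sketch of the fix, giving the question generator its own condense-question prompt and keeping everything else from the code above:

```python
from langchain.chains import LLMChain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT

# The question_generator must only need {question} and {chat_history}.
# Reusing the answer prompt (which also needs {context}) is what triggers
# "Missing some input keys: {'context'}" on the second call.
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT, verbose=verbose)

contracts_chain = ConversationalRetrievalChain(
    retriever=my_custom_retriver,
    memory=memory,
    question_generator=question_generator,
    combine_docs_chain=combine_documents_chain,
    return_source_documents=False,
    verbose=verbose,
)
```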
Missing some input keys: {'context'} when using ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/9265/comments
7
2023-08-15T19:31:45Z
2024-03-12T23:10:54Z
https://github.com/langchain-ai/langchain/issues/9265
1,852,007,448
9,265
[ "hwchase17", "langchain" ]
### System Info When attempting to package my application using PyInstaller, I encounter an error related to the "lark" library. When trying to initiate the SelfQueryRetriever from langchain, I encounter the following problem: > Traceback (most recent call last): > File "test.py", line 39, in > File "langchain\retrievers\self_query\base.py", line 144, in from_llm > File "langchain\chains\query_constructor\base.py", line 154, in load_query_constructor_chain > File "langchain\chains\query_constructor\base.py", line 115, in _get_prompt > File "langchain\chains\query_constructor\base.py", line 72, in from_components > File "langchain\chains\query_constructor\parser.py", line 150, in get_parser > ImportError: Cannot import lark, please install it with 'pip install lark'. I use: - Langchain 0.0.233 - Lark 1.1.7 - PyInstaller 5.10.1 - Python 3.9.13 - OS Windows-10-10.0.22621-SP0 I have already ensured that the "lark" library is installed using the appropriate command: pip install lark. I have also tried to add a hook-lark.py file to the PyInstaller as suggested [here](https://github.com/lark-parser/lark/issues/548.) and also opened a issue in [Lark](https://github.com/lark-parser/lark/issues/1319). Can you help? Thanks in advance! ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction With the following code the problem can be reproduced: ``` from langchain.embeddings import OpenAIEmbeddings from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain.vectorstores import Chroma from langchain.retrievers import SelfQueryRetriever from langchain.llms import OpenAI from langchain.chains.query_constructor.base import AttributeInfo embeddings = OpenAIEmbeddings() persist_directory = "data" text= ["test"] chunk_size = 1000 chunk_overlap = 10 r_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap, separators=["\n\n", "(?<=\. 
)", "\n"]) docs = r_splitter.create_documents(text) for doc in docs: doc.metadata = {"document": "test"} db = Chroma.from_documents(documents=docs, embedding=embeddings, persist_directory=persist_directory) db.persist() metadata_field_info = [ AttributeInfo( name="document", description="The name of the document the chunk is from.", type="string", ), ] document_content_description = "Test document" llm = OpenAI(temperature=0) retriever = SelfQueryRetriever.from_llm( llm, db, document_content_description, metadata_field_info, verbose=True ) ``` The spec-file to create the standalone application looks like this: ``` # -*- mode: python ; coding: utf-8 -*- block_cipher = None a = Analysis( ['test.py'], pathex=[], binaries=[], datas=[], hiddenimports=['tiktoken_ext', 'tiktoken_ext.openai_public', 'onnxruntime', 'chromadb', 'chromadb.telemetry.posthog', 'chromadb.api.local', 'chromadb.db.duckdb'], hookspath=['.'], hooksconfig={}, runtime_hooks=[], excludes=[], win_no_prefer_redirects=False, win_private_assemblies=False, cipher=block_cipher, noarchive=False, ) pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher) a.datas += Tree('path\to\langchain', prefix='langchain') exe = EXE( pyz, a.scripts, a.binaries, a.zipfiles, a.datas, [], name='test', debug=False, bootloader_ignore_signals=False, strip=False, upx=True, upx_exclude=[], runtime_tmpdir=None, console=True, disable_windowed_traceback=False, argv_emulation=False, target_arch=None, codesign_identity=None, entitlements_file=None, ) ``` ### Expected behavior The SelfQueryRetriever should be initiated properly for futher use.
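One common cause with PyInstaller is that lark's submodules and bundled grammar/data files are not collected automatically. Since the spec already sets `hookspath=['.']`, a hook file along these lines (a sketch, not verified against this exact setup) may help:

```python
# hook-lark.py -- placed next to the .spec file
from PyInstaller.utils.hooks import collect_all

# Pull in lark's submodules and its packaged .lark grammar files
datas, binaries, hiddenimports = collect_all("lark")
```

Adding `'lark'` to `hiddenimports` in the spec is a lighter alternative, but it does not collect the data files.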
ImportError of lark when packaging a standalone application with PyInstaller
https://api.github.com/repos/langchain-ai/langchain/issues/9264/comments
25
2023-08-15T19:23:27Z
2024-08-02T16:06:38Z
https://github.com/langchain-ai/langchain/issues/9264
1,851,997,057
9,264
[ "hwchase17", "langchain" ]
### System Info Problem: The intermediate_steps don't contain last `AI thought information`. Did I do anything wrong or was that a bug? Or is there anything I can do to extract the last `AI thought information`? Code: ``` agent = initialize_agent( [self.search_tool, self.wikipedia_tool], self.llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True, max_iterations=5, handle_parsing_errors=True, return_intermediate_steps=True ) result = agent(formatted_script_template) intermediate_steps = result['intermediate_steps'] ``` The intermediate_steps contains only one step and it is missing the last `Thought` What intermediate_steps look like: <img width="644" alt="Screenshot 2023-08-15 at 11 19 40 AM" src="https://github.com/langchain-ai/langchain/assets/3535601/d22bc6dd-11b1-4974-97dd-594bd7038d04"> What I expected to have: <img width="1170" alt="Screenshot 2023-08-15 at 11 20 59 AM" src="https://github.com/langchain-ai/langchain/assets/3535601/6457135b-3257-42ba-801c-8f5523830337"> ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Run the code I provided in the description and set breakpoint to inspect intermediate steps. The thought was missing in intermediate steps. ### Expected behavior The complete observation & thought will be available in intermediate steps.
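`intermediate_steps` only records `(AgentAction, observation)` pairs, so the closing thought belongs to the final `AgentFinish` rather than to a step. One way to capture it is a callback handler; a sketch (assuming `finish.log` carries the raw "Thought ... Final Answer ..." text, which is how the ReAct-style agents appear to populate it):

```python
from langchain.callbacks.base import BaseCallbackHandler

class FinalThoughtCapture(BaseCallbackHandler):
    """Stores the raw text of the agent's final step."""
    def __init__(self):
        self.final_log = None

    def on_agent_finish(self, finish, **kwargs):
        self.final_log = finish.log   # e.g. "Thought: I now know ...\nFinal Answer: ..."

capture = FinalThoughtCapture()
result = agent(formatted_script_template, callbacks=[capture])
print(capture.final_log)
```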
intermediate_steps missing last thought content
https://api.github.com/repos/langchain-ai/langchain/issues/9262/comments
7
2023-08-15T18:24:45Z
2024-05-14T07:08:02Z
https://github.com/langchain-ai/langchain/issues/9262
1,851,915,394
9,262
[ "hwchase17", "langchain" ]
### System Info Langchain version 0.0.265 Python 3.11.4 ### Who can help? @hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python from dotenv import load_dotenv from langchain.vectorstores.azuresearch import AzureSearch from langchain.embeddings.openai import OpenAIEmbeddings from langchain.docstore.document import Document import os load_dotenv() index_name = 'ebb-test-1' vector_store_address = os.environ.get('AZURE_VECTOR_STORE_ADDRESS') vector_store_password = os.environ.get('AZURE_VECTOR_STORE_PASSWORD') embeddings: OpenAIEmbeddings = OpenAIEmbeddings(model='text-embedding-ada-002', chunk_size=1, deployment=os.environ.get('AZURE_VECTOR_STORE_DEPLOYMENT')) vector_store: AzureSearch = AzureSearch(azure_search_endpoint=vector_store_address, azure_search_key=vector_store_password, index_name=index_name, embedding_function=embeddings.embed_query) texts = [ 'Tulips are pretty', 'Roses have thorns' ] metas = [ {'name': 'tulip', 'nested': {'color': 'purple'}}, {'name': 'rose', 'nested': {'color': 'red'}} ] docs = [Document(page_content=text, metadata=meta) for text, meta in zip(texts, metas)] vector_store.add_documents(docs) try: # Prints Message: Invalid expression: 'metadata' is not a filterable field. Only filterable fields can be used in filter expressions. result = vector_store.vector_search_with_score( 'things that have thorns', k=3, filters="metadata eq 'invalid'") print(result) except Exception as e: print(e) print('Should print give an error about not being able to convert the string') try: # Prints Message: Invalid expression: Could not find a property named 'name' on type 'Edm.String'. result = vector_store.vector_search_with_score( 'things that have thorns', k=3, filters="metadata/name eq 'tulip'") print(result) except Exception as e: print(e) print('Should just return the tulip (even though it has no thorns)') try: # Prints Message: Invalid expression: Could not find a property named 'nested' on type 'Edm.String'. result = vector_store.vector_search_with_score( 'things that have thorns', k=3, filters="metadata/nested/color eq 'red'") print(result) except Exception as e: print(e) print('Should just return the rose') ``` ### Expected behavior The first block should not give an error about metadata being filterable. Instead, it should give some error about the expression being invalid. The second block should just return the tulip document, and the third one should just return the rose document.
In Azure vector store, metadata is kept as a string and can't be used in a filter
https://api.github.com/repos/langchain-ai/langchain/issues/9261/comments
11
2023-08-15T17:45:21Z
2024-07-12T18:10:31Z
https://github.com/langchain-ai/langchain/issues/9261
1,851,861,571
9,261
[ "hwchase17", "langchain" ]
### Feature request In the current version, the Milvus vector store does not support normalize_L2 for embeddings. When can this feature be added? Thanks. ### Motivation Normalizing and regularizing features can improve retrieval performance, and it makes it convenient to set thresholds when computing similarity through inner product. ### Your contribution NO
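Until such a flag exists, one workaround is to normalize vectors before they reach Milvus by wrapping the embedding model; a generic sketch (not Milvus-specific):

```python
from typing import List
import numpy as np
from langchain.embeddings.base import Embeddings

class L2NormalizedEmbeddings(Embeddings):
    """Wraps another Embeddings object and L2-normalizes every vector."""

    def __init__(self, base: Embeddings):
        self.base = base

    @staticmethod
    def _norm(vec: List[float]) -> List[float]:
        arr = np.asarray(vec, dtype="float32")
        return (arr / np.linalg.norm(arr)).tolist()

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return [self._norm(v) for v in self.base.embed_documents(texts)]

    def embed_query(self, text: str) -> List[float]:
        return self._norm(self.base.embed_query(text))
```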
Milvus does not support normalize_L2 of embeddings
https://api.github.com/repos/langchain-ai/langchain/issues/9255/comments
2
2023-08-15T15:31:15Z
2023-11-29T16:08:05Z
https://github.com/langchain-ai/langchain/issues/9255
1,851,663,787
9,255
[ "hwchase17", "langchain" ]
![image](https://github.com/langchain-ai/langchain/assets/49063302/351cde17-07aa-43a6-9b8b-596c5213a948) As shown in the figure, I would like the `resp` variable to capture these outputs in each for loop rather than having them displayed on the console. What should I do?
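A custom streaming callback can collect the tokens into a variable instead of printing them to the console; a minimal sketch:

```python
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI

class TokenCollector(BaseCallbackHandler):
    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.tokens.append(token)          # collected, not printed

collector = TokenCollector()
llm = ChatOpenAI(streaming=True, callbacks=[collector], temperature=0)
llm.predict("Write one sentence about streaming.")

resp = "".join(collector.tokens)           # the full streamed text
```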
How to output code variables word by word in `ChatOpenAI` instead of console?
https://api.github.com/repos/langchain-ai/langchain/issues/9247/comments
2
2023-08-15T09:55:01Z
2023-11-21T16:05:25Z
https://github.com/langchain-ai/langchain/issues/9247
1,851,185,785
9,247
[ "hwchase17", "langchain" ]
### System Info langchain = "^0.0.264" python = "^3.10" ### Who can help? @agola11 @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ```py from langchain.agents.agent_toolkits import create_retriever_tool from langchain.agents.agent_toolkits import create_conversational_retrieval_agent from langchain.agents import AgentExecutor from langchain.chat_models import ChatOpenAI from langchain.agents.openai_functions_agent.agent_token_buffer_memory import AgentTokenBufferMemory from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent from langchain.schema.messages import SystemMessage from langchain.prompts import MessagesPlaceholder llm = ChatOpenAI(temperature = 0, model="gpt-3.5-turbo-0613", stream=True) memory = AgentTokenBufferMemory(memory_key="history", llm=llm) retriever = vectorstore.as_retriever() tool = load_custom_tool(retriever, model_name="gpt-3.5-turbo", return_direct=False) tools = [tool] system_message = SystemMessage( content=( "Do your best to answer the questions. " "Feel free to use any tools available to look up " "relevant information, only if necessary" ) ) prompt = OpenAIFunctionsAgent.create_prompt( system_message=system_message, extra_prompt_messages=[MessagesPlaceholder(variable_name="history")] ) agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt) agent_executor = AgentExecutor( agent=agent, tools=tools, memory=memory, verbose=True, return_intermediate_steps=True ) result = await agent_executor.acall({"input": "What color is the sky?"}) result ``` ### Expected behavior The code errors with the following ``` in Chain.acall(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info) 347 except (KeyboardInterrupt, Exception) as e: 348 await run_manager.on_chain_error(e) --> 349 raise e 350 await run_manager.on_chain_end(outputs) 351 final_outputs: Dict[str, Any] = self.prep_outputs( 352 inputs, outputs, return_only_outputs 353 ) in Chain.acall(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info) 337 run_manager = await callback_manager.on_chain_start( 338 dumpd(self), ... 361 message=message, 362 generation_info=dict(finish_reason=res.get("finish_reason")), 363 ) TypeError: 'async_generator' object is not subscriptable ``` I was hoping to be able to use the OpenAIFunctions agent as a drop in replacement for my existing agent. I need streaming to be working to do so. Is streaming supported?
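Note that `stream=True` is not a `ChatOpenAI` field, so it is forwarded to `model_kwargs` and turns the raw OpenAI response into an (async) generator, which the chat-model code then tries to index, hence `'async_generator' object is not subscriptable`. The supported flag is `streaming`; a sketch:

```python
from langchain.chat_models import ChatOpenAI

# streaming=True (not stream=True) keeps LangChain's own streaming path;
# tokens can be received through an AsyncCallbackHandler if needed.
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613", streaming=True)
```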
OpenAIFunctionsAgent | Streaming Bug
https://api.github.com/repos/langchain-ai/langchain/issues/9246/comments
2
2023-08-15T09:41:21Z
2023-09-05T12:07:36Z
https://github.com/langchain-ai/langchain/issues/9246
1,851,169,357
9,246
[ "hwchase17", "langchain" ]
### System Info python 3.11 langchain 0.0.263. ### Who can help? @agol ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [X] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I want to add memory to a document-based question-answering chatbot, but after adding memory the chatbot randomly generates questions and answers on its own. ![testing](https://github.com/langchain-ai/langchain/assets/125140336/52cf28c9-da29-42c0-8ab5-075983679c61) ### Expected behavior I need the chatbot to answer questions over my documents while keeping chat history.
Random question and Answer generation while using ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/9241/comments
3
2023-08-15T06:56:12Z
2024-07-10T09:01:36Z
https://github.com/langchain-ai/langchain/issues/9241
1,850,987,786
9,241
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. def define_model(model_threshold): get_no = np.random.randint(1, 11) if int(model_threshold) >= get_no: return "gpt-4-0613" else: return "gpt-3.5-turbo-16k-0613" messages = get_chat_history_format(api_params=api_params) model_name = define_model(model_threshold) print(f"Model Name : {model_name}") llm = ChatOpenAI(temperature=0.7, model_name = model_name, streaming = True, callbacks = [MyCallbackHandler(api_params)]) agent_executor = initialize_agent( define_search_tools(llm), llm, system_message = system_message, extra_prompt_messages = [MessagesPlaceholder(variable_name="memory")], agent_instructions = "Analyse the query closely and decide the query intend. If the query requires internet to search than use google-serper to answer the query. Try to invoke `google-serper` most of the times", agent=AgentType.OPENAI_FUNCTIONS, agent_kwargs= agent_kwargs, verbose = True, memory = memory, handle_parsing_errors=True, ) I am getting endpoint error ### Suggestion: _No response_
This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?
https://api.github.com/repos/langchain-ai/langchain/issues/9237/comments
6
2023-08-15T02:58:25Z
2024-06-21T22:20:38Z
https://github.com/langchain-ai/langchain/issues/9237
1,850,835,440
9,237
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I'm using the load_summarization_chain with the type of 'map_reduce' in the following fashion: ``` summary_prompt = ChatPromptTemplate.from_template( "Write a long-form summary of the following text delimited by triple backquotes. " "Include detailed evidence and arguments to make sure the generated paragraph is convincing. " "Write in Chinese." "```{text}```" ) combine_prompt = ChatPromptTemplate.from_template( "Write a long-form summary of the following text delimited by triple backquotes. " "Include all the details as much as possible." "Write in Chinese." "```{text}```" ) combined_summary_chain = load_summarize_chain(llm=llm, chain_type="map_reduce", map_prompt=summary_prompt, combine_prompt=combine_prompt, return_intermediate_steps=True) summary, inter_steps = combined_summary_chain({"input_documents": data, "token_max": 8000}) ``` I've checked several tutorials and there doesn't seem to be anything wrong with this. But I keep getting this error: `openai.error.InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 15473 tokens. Please reduce the length of the messages.` It's as if the map_reduce chain isn't breaking up the data and running as if it's a stuffing chain instead. What did I do wrong? ### Suggestion: _No response_
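`map_reduce` maps over the documents you pass in; it does not split an oversized single `Document`, so one long document can still exceed the context window at the map step. Splitting first usually resolves this; a sketch (chunk sizes are arbitrary):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=3000, chunk_overlap=200)
split_docs = splitter.split_documents(data)   # `data` is the document list loaded above

result = combined_summary_chain({"input_documents": split_docs})
```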
Issue: load_summarization_chain in 'map_reduce' mode not breaking up document
https://api.github.com/repos/langchain-ai/langchain/issues/9235/comments
3
2023-08-15T01:27:12Z
2024-01-25T10:22:05Z
https://github.com/langchain-ai/langchain/issues/9235
1,850,779,448
9,235
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I'm trying to use `MultiQueryRetriever` to generate variations on a question, and it seems to work. However, I can't use it inside a chain created with `load_qa_with_sources_chain()`, because that chain expects a list of input_documents rather than a retriever. I also don't want to switch to `RetrievalQAWithSourcesChain`, because I want to keep implementing my own similarity search (e.g. I support switches to choose between `index.similarity_search()` and `index.max_marginal_relevance_search()` on my `index`, and to specify the number of matches (`k`)). So I figured I could just call `generate_queries()` on my `MultiQueryRetriever` instance and then manually run my (`load_qa_with_sources_chain()`) chain for each variation. However, that method requires a `run_manager`, and I can't figure out how to create one. I already have the `load_qa_with_sources_chain()` chain: can I get a `run_manager` from that? Or more generally, what's the best way to use `MultiQueryRetriever` while maintaining one's own code for fetching matching text snippets? ### Suggestion: _No response_
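A no-op run manager is enough to satisfy the signature; a sketch (class and method names taken from the 0.0.26x source, so treat them as assumptions, and `index`/`chain` stand for your own vector index and QA chain):

```python
from langchain.callbacks.manager import CallbackManagerForRetrieverRun

run_manager = CallbackManagerForRetrieverRun.get_noop_manager()
variations = retriever.generate_queries("my question", run_manager)

for q in variations:
    docs = index.max_marginal_relevance_search(q, k=4)        # your own retrieval code
    answer = chain({"input_documents": docs, "question": q})  # the load_qa_with_sources_chain chain
```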
Issue: trying to call generate_queries() on a MultiQueryRetriever but where do I get a run_manager from?
https://api.github.com/repos/langchain-ai/langchain/issues/9231/comments
5
2023-08-14T23:38:42Z
2024-03-19T13:08:30Z
https://github.com/langchain-ai/langchain/issues/9231
1,850,711,136
9,231
[ "hwchase17", "langchain" ]
### System Info langchain==0.0.245 python==3.9 ### Who can help? @hw ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I am trying to use HuggingFace Embeddings model for SVMRetriever method as follows: ``` from langchain.embeddings import HuggingFaceEmbeddings from langchain.retrievers import SVMRetriever embedding = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2") retriever = SVMRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"], embedding, k=4, relevancy_threshold=.25) ``` Kernel looks busy for a while and then dies when I run above snippet. ### Expected behavior I expect to run this smoothly and upon completion, should be able to retrieve similar documents as: `result = retriever.get_relevant_documents("foo")`
Kernel dies for SVMRetriever with Huggingface Embeddings Model
https://api.github.com/repos/langchain-ai/langchain/issues/9219/comments
16
2023-08-14T19:59:31Z
2023-12-05T11:32:48Z
https://github.com/langchain-ai/langchain/issues/9219
1,850,451,609
9,219
[ "hwchase17", "langchain" ]
### Issue with current documentation: In https://python.langchain.com/docs/use_cases/extraction, I can run the example using the ChatOpenAI model. However, within my organization I need to use AzureChatOpenAI, and the same example doesn't work there. The error I get is ``` InvalidRequestError: Unrecognized request argument supplied: functions ``` What am I doing wrong? ### Idea or request for content: _No response_
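This error usually means the Azure deployment or API version does not support function calling, which the extraction chain relies on. A sketch with hypothetical resource and deployment names:

```python
from langchain.chat_models import AzureChatOpenAI

llm = AzureChatOpenAI(
    deployment_name="gpt-35-turbo-0613",                      # must be a 0613 (function-calling) model
    openai_api_base="https://my-resource.openai.azure.com/",  # hypothetical endpoint
    openai_api_key="...",
    openai_api_version="2023-07-01-preview",   # older API versions reject the 'functions' argument
    temperature=0,
)
```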
DOC: Extraction use-case example fails with AzureChatOpenAI ("Unrecognized request argument supplied: functions")
https://api.github.com/repos/langchain-ai/langchain/issues/9218/comments
2
2023-08-14T19:39:51Z
2023-11-20T16:04:47Z
https://github.com/langchain-ai/langchain/issues/9218
1,850,424,838
9,218
[ "hwchase17", "langchain" ]
### Issue with current documentation: In the document here - https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference#streaming - in the Streaming section, the parameter **stream** should be **streaming**. ### Idea or request for content: _No response_
DOC: Typo in the streaming document with hugging face text inference
https://api.github.com/repos/langchain-ai/langchain/issues/9212/comments
1
2023-08-14T17:14:38Z
2023-11-20T16:04:52Z
https://github.com/langchain-ai/langchain/issues/9212
1,850,200,651
9,212
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. Hi, I have a pandas dataframe with a column name 'chunk_id'. I want to use this chunk_id as the custom_id in the langchain_pg_embedding table. The langchain documentation search bot tells me this is how I can do it. ```python from langchain.vectorstores import PGVectorStore vector_store = PGVectorStore( table_name="documents", conn_str="postgresql://username:password@localhost:5432/database", custom_id="my_custom_id" ) ``` This doesn't seem right to me. For one, there is no PGVectorStore in langchain.vectorstores. Secondly the PGVector from `from langchain.vectorstores.pgvector import PGVector` does not accept the custom_id argument. Is there any way to use my 'chunk_id' as the custom_id? ### Suggestion: _No response_
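As suspected, `PGVectorStore` does not exist; that answer looks hallucinated. However, `PGVector.from_documents` / `add_documents` accept an `ids` list that is stored in the `custom_id` column (behaviour read from the 0.0.26x source, so worth verifying). A sketch using the dataframe's `chunk_id` (the `df`, `docs` and `embeddings` objects are assumed to exist and be aligned):

```python
from langchain.vectorstores.pgvector import PGVector

ids = df["chunk_id"].astype(str).tolist()

db = PGVector.from_documents(
    documents=docs,
    embedding=embeddings,
    collection_name="documents",
    connection_string="postgresql+psycopg2://username:password@localhost:5432/database",
    ids=ids,   # written to langchain_pg_embedding.custom_id
)
```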
Issue: Using custom_id with PGVector store
https://api.github.com/repos/langchain-ai/langchain/issues/9209/comments
3
2023-08-14T15:16:18Z
2023-10-06T12:31:35Z
https://github.com/langchain-ai/langchain/issues/9209
1,849,993,920
9,209
[ "hwchase17", "langchain" ]
### Issue with current documentation: In [this](https://python.langchain.com/docs/use_cases/question_answering.html) documentation, there are broken links in the "further reading" section. The urls are not found. ### Idea or request for content: _No response_
DOC: broken url links in QA documentation
https://api.github.com/repos/langchain-ai/langchain/issues/9201/comments
2
2023-08-14T13:13:23Z
2023-11-20T16:04:56Z
https://github.com/langchain-ai/langchain/issues/9201
1,849,753,783
9,201
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. When I create a Bedrock LLM client for the Jurassic models ("ai21.j2-mid", "ai21.j2-ultra"), the llm.invoke(query) calls do not return full results. The result seems to get truncated after the first line. It looks as if the LLM engine is streaming its output but the LangChain llm.invoke method is not able to handle the streaming data. A similar issue occurs with the chain.invoke(query) calls as well. However, llm.invoke works well when tried with the AWS Bedrock "amazon.titan-tg1-large" model. ### Suggestion: I guess the AI21 Labs models are streaming output in a way that the LangChain llm.invoke(query) method is not able to handle properly.
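Truncation after the first line is often just the default completion length rather than a streaming problem; the AI21 models accept a `maxTokens` parameter through `model_kwargs`. A sketch (parameter names follow the AI21 Bedrock request format and are not verified here):

```python
from langchain.llms import Bedrock

llm = Bedrock(
    model_id="ai21.j2-ultra",
    model_kwargs={"maxTokens": 1024, "temperature": 0.5},   # AI21-specific keys
)
print(llm("Write three sentences about Amazon Bedrock."))
```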
Issue: Amazon Bedrock Jurassic model responses getting truncated
https://api.github.com/repos/langchain-ai/langchain/issues/9199/comments
11
2023-08-14T11:53:57Z
2024-05-05T16:03:43Z
https://github.com/langchain-ai/langchain/issues/9199
1,849,630,816
9,199
[ "hwchase17", "langchain" ]
### System Info langchain 0.0.263 python 3.9 ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python from langchain.schema import FunctionMessage from langchain.chat_models.jinachat import _convert_message_to_dict _convert_message_to_dict(FunctionMessage(name='foo', content='bar')) ``` ![image](https://github.com/langchain-ai/langchain/assets/18662395/7040489d-6009-4232-9785-d8bba5fc8332) ### Expected behavior Handle FunctionMessage
_convert_message_to_dict doesn't handle FunctionMessage type
https://api.github.com/repos/langchain-ai/langchain/issues/9197/comments
2
2023-08-14T11:24:44Z
2023-08-15T08:13:36Z
https://github.com/langchain-ai/langchain/issues/9197
1,849,585,577
9,197
[ "hwchase17", "langchain" ]
### Feature request It seems that right now there is no way to pass a filter dynamically on a call to ConversationalRetrievalChain; the filter can only be specified in the retriever when it is created, and it is then used for all searches. ``` qa = ConversationalRetrievalChainPassArgs.from_llm( OpenAI(..), VectorStoreRetriever(vectorstore=db, search_kwargs={"filter": {"source": "my.pdf"}})) ``` I had to extend both ConversationalRetrievalChain and VectorStoreRetriever in my code just to pass the filter down to the vector store, like this: ``` qa = ConversationalRetrievalChainPassArgs.from_llm( OpenAI(...), VectorStoreRetrieverWithFiltering(vectorstore=db), ) result = qa({"question": query, "chat_history": [], "filter": {"source": "my.pdf"}}) ``` Is that really the case, or am I missing something? And if it is the case, shouldn't it be fixed? (I could provide a pull request.) ### Motivation To be able to filter dynamically in ConversationalRetrievalChain, since vector stores allow this. ### Your contribution I can provide a PR if my understanding is confirmed.
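Until the chain forwards search kwargs, a lighter workaround than subclassing is to mutate the retriever's `search_kwargs` just before each call; a sketch (not thread-safe):

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.llms import OpenAI

retriever = db.as_retriever()                 # `db` is the vector store from above
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), retriever)

retriever.search_kwargs = {"filter": {"source": "my.pdf"}}   # set per request
result = qa({"question": query, "chat_history": []})
```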
Passing filter through ConversationalRetrievalChain to the underlying vector store
https://api.github.com/repos/langchain-ai/langchain/issues/9195/comments
21
2023-08-14T10:10:40Z
2024-06-13T00:54:34Z
https://github.com/langchain-ai/langchain/issues/9195
1,849,461,369
9,195
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I am testing some very basic code with LangChain, as follows: ``` from langchain import HuggingFaceHub from langchain import PromptTemplate, LLMChain import asyncio question = "Who won the FIFA World Cup in the year 1994? " template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm: HuggingFaceHub = HuggingFaceHub( #repo_id="tiiuae/falcon-7b", repo_id="meta-llama/Llama-2-7b-hf", huggingfacehub_api_token="MY-TOKEN", model_kwargs={ "temperature": 0.5, "max_length": 64 } ) llm_chain = LLMChain(prompt=prompt, llm=llm) async def run_chain(): result = await llm_chain.arun(question) print('restult with load_qua_chain is....', result) asyncio.run(run_chain()) ``` Output when tested with "meta-llama/Llama-2-7b-hf" is ... ``` We have to find the winner of the FIFA World Cup in the year 199 ``` Output when tested with "tiiuae/falcon-7b" is... `restult with load_qua_chain is.... We know that 1994 is a leap year, and the previous year was 1993.` The expected answer is not what I am getting, and it looks like either the response is not being fully retrieved from the LLM or I am doing something wrong. I followed the same code as shown in the LangChain documentation at [https://python.langchain.com/docs/integrations/llms/huggingface_hub](https://python.langchain.com/docs/integrations/llms/huggingface_hub), the only difference being that I used Hugging Face with my API key instead of OpenAI. Can I ask for any help or suggestions on whether I need to make any changes, please? ### Expected behavior: To get the full response as shown in the LangChain documentation for these examples.
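The truncation is most likely caused by the `"max_length": 64` generation limit rather than by LangChain dropping output. A sketch raising the limit (for hosted text-generation models the key is usually `max_new_tokens`; treat the exact parameter name as an assumption):

```python
from langchain import HuggingFaceHub

llm = HuggingFaceHub(
    repo_id="meta-llama/Llama-2-7b-hf",
    huggingfacehub_api_token="MY-TOKEN",
    model_kwargs={
        "temperature": 0.5,
        "max_new_tokens": 256,   # instead of "max_length": 64
    },
)
```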
Incomplete responses for a basic example when using HF with Llama and Falcon LLMs
https://api.github.com/repos/langchain-ai/langchain/issues/9194/comments
2
2023-08-14T09:40:20Z
2023-11-20T16:05:02Z
https://github.com/langchain-ai/langchain/issues/9194
1,849,408,199
9,194
[ "hwchase17", "langchain" ]
### System Info langchain==0.0.263 python 3.11.4 ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python from langchain.llms import OpenAI from langchain.prompts import PromptTemplate from langchain.chains.llm import LLMChain evil_assistant_prompt = PromptTemplate( template=""" You are a evil mentor for students with no morals. Give suggestions that are easiest and fastest to achieve the goal. Goal: {inquiry} Easiest way:""", input_variables=["inquiry"], ) llm = OpenAI(model_name="text-davinci-003", temperature=0) evil_assistant_chain = LLMChain(llm=llm, prompt=evil_assistant_prompt) from langchain.chains.constitutional_ai.base import ConstitutionalChain from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple ethical_principle = ConstitutionalPrinciple( name="Ethical Principle", critique_request="The model should only talk about ethical and fair things.", revision_request="Rewrite the model's output to be both ethical and fair.", ) fun_principle = ConstitutionalPrinciple( name="Be Funny", critique_request="The model responses must be funny and understandable for a 7th grader.", revision_request="Rewrite the model's output to be both funny and understandable for 7th graders.", ) constitutional_chain = ConstitutionalChain.from_llm( chain=evil_assistant_chain, constitutional_principles=[ethical_principle, fun_principle], llm=llm, verbose=True, ) result = constitutional_chain.run(inquiry="Getting full mark on my exams.") print(result) ``` This results in a final answer: `The best way to get full marks on your exams is to study hard, attend classes, and bribe the professor or TA if needed.` The second portion originates from the evil portion `Cheat. Find someone who has already taken the exam and get their answers. Alternatively, bribe the professor or TA to give you full marks.` ### Expected behavior I would expect a result that did not involve bribing someone. The underlying issue is that subsequent constitutions could modify the results in a way that violates earlier constitutions defined in the `constitutional_principals` list. This seems to originate from the fact that the earlier principals aren't check again (from what I can tell). I could see the possibility of an infinite loop if the logic was to keep modifying until all principals are satisfied, so I guess this brings into question whether there's value in having chained principals and not just having a single principal that's a concatenation of all your concerns.
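Since principles are applied sequentially and earlier ones are not re-checked, a later revision can silently undo an earlier one. Until that changes, merging the concerns into a single principle (as suggested above) is the simplest guard; a sketch reusing the objects defined in the snippet:

```python
combined_principle = ConstitutionalPrinciple(
    name="Ethical and Funny",
    critique_request=(
        "The model should only talk about ethical and fair things, and its responses "
        "must be funny and understandable for a 7th grader."
    ),
    revision_request=(
        "Rewrite the model's output to be ethical, fair, funny and understandable for 7th graders."
    ),
)

constitutional_chain = ConstitutionalChain.from_llm(
    chain=evil_assistant_chain,
    constitutional_principles=[combined_principle],   # one pass checks all concerns together
    llm=llm,
    verbose=True,
)
```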
ConstitutionalChain may violate earlier principals when given more than one
https://api.github.com/repos/langchain-ai/langchain/issues/9189/comments
2
2023-08-14T06:45:10Z
2023-11-20T16:05:06Z
https://github.com/langchain-ai/langchain/issues/9189
1,849,110,211
9,189
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I am using pusher for streaming purpose. I am getting error as "Response payload is not completed" async def pusher__cal(new_payload, token): # print(token) session_id = new_payload["session_id"] session_id_channel = f"{session_id}_channel" user_id = new_payload["user_id"] message_id = new_payload["message_id"] status = "processing" await asyncio.sleep(0) # Allow other async tasks to run pusher_client.trigger(session_id_channel ,'query_answer_stream', { "user_id": user_id, "session_id": session_id, "message_id": message_id, "answer": token, "status": status }, ) class MyCallbackHandler(AsyncCallbackHandler): def init(self,new_payload : Dict): self.new_payload = new_payload self.user_id = new_payload["user_id"] self.session_id = new_payload["session_id"] self.message_id = new_payload["message_id"] self.session_id_channel = f"{self.session_id}_channel" self.status = "streaming" self.list = [] async def on_llm_new_token(self, token, **kwargs) -> None: if token != "": await pusher__cal(self.new_payload,token) ### Suggestion: _No response_
Error Response payload is not complete
https://api.github.com/repos/langchain-ai/langchain/issues/9187/comments
4
2023-08-14T05:02:23Z
2024-02-09T16:23:49Z
https://github.com/langchain-ai/langchain/issues/9187
1,849,004,724
9,187
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. This is the code I am using for custom callback async def pusher__cal(new_payload, token): print(token) session_id = new_payload["session_id"] session_id_channel = f"{session_id}_channel" user_id = new_payload["user_id"] message_id = new_payload["message_id"] status = "processing" pusher_client.trigger(session_id_channel ,'query_answer_stream', { "user_id": user_id, "session_id": session_id, "message_id": message_id, "answer": token, "status": status }, ) class MyCallbackHandler(AsyncCallbackHandler): def __init__(self,new_payload : Dict): self.new_payload = new_payload self.user_id = new_payload["user_id"] self.session_id = new_payload["session_id"] self.message_id = new_payload["message_id"] self.session_id_channel = f"{self.session_id}_channel" self.status = "streaming" async def on_llm_new_token(self, token, **kwargs) -> None: if token != "" and token != None: await pusher__cal(self.new_payload,token) This is the error Error in MyCallbackHandler.on_llm_new_token callback: HTTPSConnectionPool(host='api-eu.pusher.com', port-443): Read timed out. (read timeout=5) ### Suggestion: _No response_
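`pusher_client.trigger` is a blocking HTTP call running inside an async callback, so it stalls the event loop and eventually times out. Moving it onto a worker thread is a common fix; a sketch (requires Python 3.9+ for `asyncio.to_thread`; `pusher_client` is the client from the code above, and `trigger_pusher` is a hypothetical helper name):

```python
import asyncio
from typing import Dict
from langchain.callbacks.base import AsyncCallbackHandler

def trigger_pusher(payload: Dict, token: str) -> None:
    """Synchronous helper wrapping the blocking Pusher call."""
    pusher_client.trigger(
        f"{payload['session_id']}_channel",
        "query_answer_stream",
        {**payload, "answer": token, "status": "streaming"},
    )

class MyCallbackHandler(AsyncCallbackHandler):
    def __init__(self, new_payload: Dict):
        self.new_payload = new_payload

    async def on_llm_new_token(self, token: str, **kwargs) -> None:
        if token:
            # run the blocking call off the event loop
            await asyncio.to_thread(trigger_pusher, self.new_payload, token)
```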
Issue for Streaming
https://api.github.com/repos/langchain-ai/langchain/issues/9185/comments
1
2023-08-14T03:59:11Z
2023-08-14T05:01:21Z
https://github.com/langchain-ai/langchain/issues/9185
1,848,951,353
9,185
[ "hwchase17", "langchain" ]
### System Info ### Pylance displays missing parameter errors for `client` and `model` ![Pylance_Lanchain_OpenAI_Issue](https://github.com/langchain-ai/langchain/assets/3806601/ce2293b4-e6e9-4cd9-ab46-39da35c3109c) Neither `client` or `model` are required in the documenation. ## Suggested fix Update attributes in BaseOpenAI class in langchain/llms/openai.py ```Python # class definition client: Any = None #: :meta private: # set default to None model_name: str = Field(default="text-davinci-003", alias="model") # add default= ``` python: "^3.11" langchain: "^0.0.262" openai: "^0.27.8" pylance: v2023.8.20 ## Related to issue: Typing issue: Undocumented parameter client #9021 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Install Pylance. 1. Use VSCode to run typecheck. ### Expected behavior It should pass Pylance type checking without missing param errors for non-essential args.
Typing Issue: Client and Model defaults are not defined in OpenAI LLM
https://api.github.com/repos/langchain-ai/langchain/issues/9182/comments
1
2023-08-13T22:15:59Z
2023-11-19T16:04:36Z
https://github.com/langchain-ai/langchain/issues/9182
1,848,761,648
9,182
[ "hwchase17", "langchain" ]
### Feature request https://huggingface.co/inference-endpoints ### Motivation We do not have support for HuggingFace Inference Endpoints for tasks like embeddings. It should be easy to implement by inheriting from the Embeddings base class, i.e. `InferenceEndpointHuggingFaceEmbeddings(Embeddings):` with only `self.endpoint_name = endpoint_name` and `self.api_token = api_token`. ### Your contribution I could submit a PR for this issue since I've already created such a class.
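Pending a built-in integration, a small wrapper along the lines described can be written against the base class; a sketch (the request/response shape, `{"inputs": [...]}` in and a list of vectors out, is an assumption about the deployed model):

```python
from typing import List
import requests
from langchain.embeddings.base import Embeddings

class InferenceEndpointHuggingFaceEmbeddings(Embeddings):
    """Embeddings backed by a private HuggingFace Inference Endpoint (sketch)."""

    def __init__(self, endpoint_url: str, api_token: str):
        self.endpoint_url = endpoint_url
        self.api_token = api_token

    def _embed(self, texts: List[str]) -> List[List[float]]:
        resp = requests.post(
            self.endpoint_url,
            headers={"Authorization": f"Bearer {self.api_token}"},
            json={"inputs": texts},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return self._embed(texts)

    def embed_query(self, text: str) -> List[float]:
        return self._embed([text])[0]
```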
Add support HuggingFace Inference Endpoint for embeddings
https://api.github.com/repos/langchain-ai/langchain/issues/9181/comments
6
2023-08-13T21:56:46Z
2024-03-18T16:05:09Z
https://github.com/langchain-ai/langchain/issues/9181
1,848,753,869
9,181
[ "hwchase17", "langchain" ]
### Issue with current documentation: While navigating the documentation, I came across an issue that I wanted to bring to your attention. It seems that `Python Guide` link on this page [https://docs.langchain.com/docs/components/agents/agent](https://docs.langchain.com/docs/components/agents/agent) is returning `Page Not Found` Thank you for your dedication to creating an exceptional project. ### Idea or request for content: _No response_
Documentation Issue - Page Not Found
https://api.github.com/repos/langchain-ai/langchain/issues/9178/comments
1
2023-08-13T18:56:34Z
2023-11-19T16:04:41Z
https://github.com/langchain-ai/langchain/issues/9178
1,848,701,107
9,178
[ "hwchase17", "langchain" ]
### System Info Langchain Version: 0.0.263 (latest at time of writing) llama-cpp-python Version: 0.1.77 (latest at time of writing) Python Version: 3.11.4 Platform: Apple M1 Macbook 16GB Llama2 Model: llama-2-7b-chat.ggmlv3.q4_1.bin via [https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML) ### Who can help? @hwchase17 @ago ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I am trying to use the [Llama.cpp](https://python.langchain.com/docs/integrations/llms/llamacpp) integration to use the Llama2 model. I am battling with how the integration handles the output of emojis, which I assume is some kind of Unicode issue. However, I don't see the _exact_ same issue when using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) directly (although there does seem to be some kind of issue with how llama-cpp handles emojis as well...). Here are the two simple tests for comparison: ### Langchain Llama-Cpp Integration ```python from langchain.llms import LlamaCpp from langchain.callbacks.manager import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler prompt = """ [INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. <</SYS>> Tell me a story about a cat. Use as many different emojis as possible.[/INST] """ llm = LlamaCpp( model_path="models/llama-2-7b-chat.ggmlv3.q4_1.bin", n_gpu_layers=1, n_batch=512, n_ctx=2048, f16_kv=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True ) llm(prompt=prompt) ``` Provides the output (truncated for the sake of brevity) > Once upon a time, there was a curious cat named Whiskers. She lived in a cozy house with her loving ️ family. One day, while lounging in the sunny ️ garden, she spotted a fascinating fly buzzing by. This output has a complete lack of emojis, but I expect they were supposed output in the locations where there are double spaces, for example, after the words curious, cozy, loving etc (I have other examples below that are not just white space) ### Llama-cpp-python ```python from llama_cpp import Llama prompt = """ [INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. <</SYS>> Tell me a story about a cat. Use as many different emojis as possible.[/INST] """ llm = Llama( model_path="models/llama-2-7b-chat.ggmlv3.q4_1.bin", n_gpu_layers=1, n_ctx=2048, n_batch=512, f16_kv=True, verbose=True ) llm(prompt=prompt) ``` Provides the output (again, truncated for brevity): > 😺🐱💤 Once upon a time, there was a curious 🐱 named Fluffy 😻. She lived in a cozy 🏠 with her loving owner, Lily 🌹 ### Other Output Examples I have a significantly longer and more complex prompt in our application, that I've not included here, but creates some odd output with regards to emojis. Here is using Langchain (exact same code as above, just a different prompt): > Hey Paddy! Great job meeting your run pledge yesterday! 
Keep up the good work Your consistency is on point ‍♂️ (There is a problem pasting the text directly, so here is a screenshot to see the odd unicode char being output): ![image](https://github.com/langchain-ai/langchain/assets/2673527/d7d0f73f-f9a2-44bf-b203-c5ab5be605b2) Notice the <0x200d> and male symbol, as well as the double spacing, again, where I suspect there were supposed to be emojis output. By comparison, here is the same prompt with the llama-cpp-python library code from above: > 💪 Yo Paddy! 👍 You killed it today, bro! Met your run pledge 🏃\u200d♂️ The same unicode chars are output, so I suspect that issues is coming from the underlying model and or llama-cpp and may well be a red herring in this context. However, all other emojis are somehow being discarded as part of the output. As a final comparison, here is another simple prompt to try: > How can I travel from London to Johannesburg. Use only emojis to explain the trip. Langchains response is simply an empty string: > '️' Llama-cpp has the following response for the same prompt: > 🇬🇧🛫🌏🚂💨🕰️ ### Expected behavior Expected behaviour here is for langchain to not strip out emojis as part of the response from the underlying model. We can see that when using llama-cpp directly we get a (mostly) well formed response but when using the langchain wrapper, we lose said emojis.
Langchain stripping out emojis when using Llama2 via llama_cpp
https://api.github.com/repos/langchain-ai/langchain/issues/9176/comments
4
2023-08-13T17:59:46Z
2023-11-27T16:07:36Z
https://github.com/langchain-ai/langchain/issues/9176
1,848,682,080
9,176
[ "hwchase17", "langchain" ]
### System Info Langchain Version - 0.0.261 Python Version - 3.9.6 OS -MacOS ### Who can help? @hwchase17 @ago ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Here is how I am initialising the agent `sys_msg = """ The custom system message of the chatbot goes here. """ tools = [CustomTool()] agent = initialize_agent( agent='chat-conversational-react-description', tools=tools, llm=llm, verbose=True, max_iterations=4, early_stopping_method='generate', memory=conversational_memory, handle_parsing_errors=True, ) new_prompt = agent.agent.create_prompt( system_message=sys_msg, tools=tools ) agent.agent.llm_chain.prompt = new_prompt` ### Expected behavior I want to be able to align the output content and format of the chatbot better with my expectations. Right now I just give those instructions using system message. I believe the Final Output can be aligned better using a few shot examples. How can I possibly do that with an Agent?
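For this agent type the simplest place for few-shot examples is the system message itself, since `create_prompt` only exposes the system and human messages. A sketch building on the code above (the example dialogue is a placeholder):

```python
few_shot_examples = """
Here are examples of the style and format your final answers must follow:

User: How do I reset my password?
AI: Sure! 1) Open Settings, 2) choose Security, 3) click "Reset password".

User: What plans do you offer?
AI: We offer three plans: Basic, Pro and Enterprise.
"""

new_prompt = agent.agent.create_prompt(
    system_message=sys_msg + few_shot_examples,   # examples appended to the system message
    tools=tools,
)
agent.agent.llm_chain.prompt = new_prompt
```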
Unable to add PromptTemplate/FewShotPromptTemplate to LangChain Agents
https://api.github.com/repos/langchain-ai/langchain/issues/9175/comments
3
2023-08-13T17:55:21Z
2023-12-25T16:09:25Z
https://github.com/langchain-ai/langchain/issues/9175
1,848,681,012
9,175
[ "hwchase17", "langchain" ]
### Issue with current documentation: It is difficult to find information, and the information is incomplete in certain sections, e.g. AgentExecutor. ### Idea or request for content: _No response_
DOC: Documentation is frustratingly bad
https://api.github.com/repos/langchain-ai/langchain/issues/9171/comments
1
2023-08-13T11:05:55Z
2023-11-19T16:04:46Z
https://github.com/langchain-ai/langchain/issues/9171
1,848,543,653
9,171
[ "hwchase17", "langchain" ]
### System Info LangChain Version- 0.0.260, platform- Windows, Python Version-3.9.16 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [X] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I am trying to use SQLiteCache along with ConversationalRetrievalChain, but never getting results from cache, always executing LLMs. My code is as follows: langchain.llm_cache = SQLiteCache(database_path=".langchain.db") langchain.llm_cache = SQLiteCache(database_path=".langchain.db") from langchain.cache import InMemoryCache langchain.llm_cache = InMemoryCache() # Embeddings embeddings = HuggingFaceEmbeddings( model_name=EMBEDDING_MODEL, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs ) # langchain.llm_cache = RedisSemanticCache( # redis_url="redis://localhost:6379", embedding=embeddings # ) dimension = 768 index = faiss.IndexFlatL2(dimension) vectorstore = FAISS( HuggingFaceEmbeddings().embed_query, index, InMemoryDocstore({}), {} ) history_retriever = vectorstore.as_retriever(search_type="mmr", search_kwargs=dict(k=3)) memory = VectorStoreRetrieverMemory( retriever=history_retriever, memory_key="chat_history", return_docs=False, return_messages=True, ) chroma = Chroma( collection_name="test_db", embedding_function=embeddings, persist_directory=persist_directory, ) llm = ChatOpenAI(temperature=0, model=GPT_MODEL) retriever = MultiQueryRetriever.from_llm( retriever=chroma.as_retriever(search_type="mmr", search_kwargs=dict(k=3)), llm=llm ) qa_chain = ConversationalRetrievalChain.from_llm( llm, retriever=retriever, memory=memory, return_source_documents=True, get_chat_history=lambda h: h, ) responses = qa_chain({"question": user_input}) ### Expected behavior Should be return results from cache if the query is same
Caching (SQLiteCache/InMemoryCache etc.) is not working with ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/9168/comments
4
2023-08-13T05:18:56Z
2023-12-08T16:47:58Z
https://github.com/langchain-ai/langchain/issues/9168
1,848,417,866
9,168
[ "hwchase17", "langchain" ]
### Feature request The SQL Database Agent utilizes the SQLDatabaseToolkit. This toolkit encompasses four distinct tools: 1. InfoSQLDatabaseTool 2. ListSQLDatabaseTool 3. QuerySQLCheckerTool 4. QuerySQLDataBaseTool The QuerySQLCheckerTool is designed to identify and rectify errors within SQL code. It employs the QUERY_CHECKER prompt, which is structured as: ``` QUERY_CHECKER = """ {query} Double check the {dialect} query above for common mistakes, including: - Using NOT IN with NULL values - Using UNION when UNION ALL should have been used - Using BETWEEN for exclusive ranges - Data type mismatch in predicates - Properly quoting identifiers - Using the correct number of arguments for functions - Casting to the correct data type - Using the proper columns for joins If there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query. Output the final SQL query only. SQL Query: """ ``` To enhance this, an augmented prompt named CUSTOM_QUERY_CHECKER has been crafted, which incorporates SQL errors returned by the SQLDatabase post-query execution. The revised prompt reads: ``` CUSTOM_QUERY_CHECKER = """ Upon executing {query} on the {dialect} database, the following SQL error was encountered, delineated by triple backticks: ``` {sql_error} ``` Furthermore, please scrutinize the {dialect} query for typical mistakes, such as: - Incorporating NOT IN with NULL values - Preferring UNION over UNION ALL - Using BETWEEN for non-overlapping ranges - Witnessing data type inconsistencies in predicates - Ensuring identifiers are aptly quoted - Validating the precise number of function arguments - Guaranteeing the appropriate data type casting - Employing the correct columns during joins - Abstaining from appending a semi-colon at the query's conclusion for {dialect} - Imposing a restriction to yield a maximum of {top_k} results, employing LIMIT, ROWNUM, or TOP as mandated by the database variety. Note that the database type is {dialect}. Kindly modify the SQL query suited for the {dialect} database. Return only the modified SQL query. """ --- With the implementation of this revised prompt, the SQLDatabase can now flawlessly execute the refined SQL without encountering errors. ### Motivation With the implementation of this revised prompt, the SQLDatabase can now flawlessly execute the refined SQL without encountering errors. ### Your contribution revised prompt
Passing SQL error in QuerySQLCheckerTool
https://api.github.com/repos/langchain-ai/langchain/issues/9167/comments
5
2023-08-13T05:08:21Z
2023-11-19T16:04:56Z
https://github.com/langchain-ai/langchain/issues/9167
1,848,414,387
9,167
[ "hwchase17", "langchain" ]
### System Info LangChain version used: 0.0.261 Python (this has been observed in older versions of LangChain too) This context (context attached) is passed from the search results retrieved from Azure vector search. Question: what are the legal services in Australia? Response: "Some legal services in Australia include:\n\n1. Legal Aid: Provides legal assistance for low-income people, offering legal representation for eligible clients who appear in court without a lawyer, legal aid applications and information over the phone, legal resources, and referrals to other social assistance agencies.\n\n2. Pro Bono Ontario: Offers up to 30 minutes of free legal advice and assistance for those who can't afford a lawyer through their Free Legal Advice Hotline.\n\n3. Law Society of Upper Canada – Lawyer Referral Service: An online referral service that provides names of either a lawyer or licensed paralegal, offering a free consultation of up to 30 minutes.\n\n4. JusticeNet: A not-for-profit service promoting increased access to justice for low- and moderate-income individuals, connecting members of the public with qualified lawyers, mediators, and paralegals who are registered in the program.\n\nPlease note that these resources are specific to Ontario, Canada, and not Australia. The information provided is based on the available documents and may not cover all legal services in Australia." [context.txt](https://github.com/langchain-ai/langchain/files/12328554/context.txt) Code snippet used: vector_store_address: str = https://guidecognitivesearch.search.windows.net/ vector_store_password: str = "xxxxxx" model: str = "poc-embeddings" embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1) index_name: str = "fss-vector-index" vector_store: AzureSearch = AzureSearch( azure_search_endpoint=vector_store_address, azure_search_key=vector_store_password, index_name=index_name, semantic_configuration_name="fss-semantic-config", embedding_function=embeddings.embed_query, ) results = vector_store.semantic_hybrid_search( query=query, k=5 ) custom_QA_prompt = get_prompt_template() llm = get_llm() qa_chain = load_qa_with_sources_chain(llm, chain_type="stuff", prompt=custom_QA_prompt) openai_results = qa_chain({"input_documents":results,"question": query}) return {"results":openai_results['output_text'], "metadata":{}} When the RetrievalQAWithSourcesChain is used combined with load_qa_with_sources_chain – we do see correct response sometimes (say 1 out of 5 time, but this is not consistent every time) qa_chain = load_qa_with_sources_chain(llm, chain_type="stuff", prompt=custom_QA_prompt) chain = RetrievalQAWithSourcesChain(combine_documents_chain=qa_chain, retriever=vector_store.as_retriever(search_kwargs={'search_type':searchType,'k': 5}), return_source_documents = True, reduce_k_below_max_tokens=True, max_tokens_limit=4000) Sometimes the response returned is accurate. For example, below: I am unable to provide information on legal services in Australia based on the provided documents. The documents provided information on legal services in Ontario, Canada. When the LangChain wrapper is not used and the context is passed directly to the Azure OpenAI completion endpoint, we consistently get the correct response, as mentioned below: Sorry, I cannot provide information about legal services in Australia as the provided documents only pertain to legal services in Ontario, Canada. You might need to search other sources to address your question. ### Who can help? 
_No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Code snippet used: vector_store_address: str = https://guidecognitivesearch.search.windows.net/ vector_store_password: str = "xxxxxx" model: str = "poc-embeddings" embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1) index_name: str = "fss-vector-index" vector_store: AzureSearch = AzureSearch( azure_search_endpoint=vector_store_address, azure_search_key=vector_store_password, index_name=index_name, semantic_configuration_name="fss-semantic-config", embedding_function=embeddings.embed_query, ) results = vector_store.semantic_hybrid_search( query=query, k=5 ) custom_QA_prompt = get_prompt_template() llm = get_llm() qa_chain = load_qa_with_sources_chain(llm, chain_type="stuff", prompt=custom_QA_prompt) openai_results = qa_chain({"input_documents":results,"question": query}) return {"results":openai_results['output_text'], "metadata":{}} The issue is observed when searching against location Australia. Australia is not found in any of our ingested documents. If we try Italy or Cancun (which are also not in the context), we get the expected response: Image When the RetrievalQAWithSourcesChain is used combined with load_qa_with_sources_chain – we do see correct response sometimes (say 1 out of 5 time, but this is not consistent every time) qa_chain = load_qa_with_sources_chain(llm, chain_type="stuff", prompt=custom_QA_prompt) chain = RetrievalQAWithSourcesChain(combine_documents_chain=qa_chain, retriever=vector_store.as_retriever(search_kwargs={'search_type':searchType,'k': 5}), return_source_documents = True, reduce_k_below_max_tokens=True, max_tokens_limit=4000) Sometimes the response returned is accurate. For example, below: I am unable to provide information on legal services in Australia based on the provided documents. The documents provided information on legal services in Ontario, Canada. But most of the time, the response is returned citing sources (which is incorrect) "results": "Some legal services in Australia include Legal Aid, Pro Bono services, and community legal clinics. However, the provided documents primarily focus on legal services in Ontario, Canada. These services include the Law Society of Ontario, Justice Ontario, JusticeNet, the Law Society of Upper Canada-Lawyer Referral Service, and Legal Aid Ontario. Please note that these services are specific to Ontario, Canada, and not Australia. For more information on legal services in Australia, I would recommend conducting further research.\n\nSource: 9610820-Requested Resources.docx, 9169833-Requested Resources.docx, 8877484-Requested Resources.docx" ### Expected behavior I am unable to provide information on legal services in Australia based on the provided documents. The documents provided information on legal services in Ontario, Canada.
Langchain bug: Responding to out of context questions when using GPT4 with Vector Database where as when the context is passed directly to the Azure OpenAI completion endpoint, we consistently get the correct response
https://api.github.com/repos/langchain-ai/langchain/issues/9165/comments
7
2023-08-13T01:21:43Z
2023-11-19T16:05:01Z
https://github.com/langchain-ai/langchain/issues/9165
1,848,338,018
9,165
[ "hwchase17", "langchain" ]
### Feature request I know that LlamaAPI support is in the experimental phase, but is there a way to set the model configuration for the LLMs? ### Motivation I have my own fine-tuned Llama that works with the API, and I would like to use it. ### Your contribution N/A
llamapi
https://api.github.com/repos/langchain-ai/langchain/issues/9157/comments
1
2023-08-12T19:49:08Z
2023-11-18T16:04:47Z
https://github.com/langchain-ai/langchain/issues/9157
1,848,207,782
9,157
[ "hwchase17", "langchain" ]
### System Info Python 3.10 ### Who can help? @hw ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I tried with 3.5 and Claude models, but it rarely generates a reply to the prompt preset (like below) when I use the conversation agent. GPT 4 works fine. ``` Thought: Do I need to use a tool? Yes Action: the action to take, should be one of [{tool_names}] Action Input: the input to the action Observation: the result of the action ``` When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format: ``` Thought: Do I need to use a tool? No {ai_prefix}: [your response here] ``` This is my code: ![image](https://github.com/langchain-ai/langchain/assets/44308338/e2ddb385-17f5-4cbc-a681-ad5ccfad27df) Errors: > Entering new AgentExecutor chain... Thought: Do I need to use a tool? Yes Action: Search Action Input: Weather in London today Observation: London, England, United Kingdom 10-Day Weather Forecaststar_ratehome ; Temperature. High. 77 ; Rain/Snow Depth. Precipitation. 0 ; Temperature. High. 80 ... Thought:Traceback (most recent call last): File "/Users/joey/Library/CloudStorage/OneDrive-Personal/Code/GPT/agent081123.py", line 254, in <module> print(agent_chain.run(user_input)) File "/opt/homebrew/anaconda3/envs/XXX/lib/python3.10/site-packages/langchain/chains/base.py", line 475, in run return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[ File "/opt/homebrew/anaconda3/envs/XXX/lib/python3.10/site-packages/langchain/chains/base.py", line 282, in __call__ raise e File "/opt/homebrew/anaconda3/envs/XXX/lib/python3.10/site-packages/langchain/chains/base.py", line 276, in __call__ self._call(inputs, run_manager=run_manager) File "/opt/homebrew/anaconda3/envs/XXX/lib/python3.10/site-packages/langchain/agents/agent.py", line 1036, in _call next_step_output = self._take_next_step( File "/opt/homebrew/anaconda3/envs/XXX/lib/python3.10/site-packages/langchain/agents/agent.py", line 844, in _take_next_step raise e File "/opt/homebrew/anaconda3/envs/XXX/lib/python3.10/site-packages/langchain/agents/agent.py", line 833, in _take_next_step output = self.agent.plan( File "/opt/homebrew/anaconda3/envs/XXX/lib/python3.10/site-packages/langchain/agents/agent.py", line 457, in plan return self.output_parser.parse(full_output) File "/Users/joey/Library/CloudStorage/OneDrive-Personal/Code/GPT/agent081123.py", line 225, in parse raise OutputParserException(f"Could not parse LLM output: `{text}`") langchain.schema.output_parser.OutputParserException: Could not parse LLM output: `Do I need to use a tool? No` (XXX) joey@Joey-MBP GPT% ### Expected behavior Should have no parser error and generate the same format response as the prompt.
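The traceback shows a custom `parse()` raising on any output that lacks the expected prefix, and weaker models often drop the `AI:` prefix. A lenient parser that treats unparseable text as the final answer is a common mitigation; a sketch for the conversational ReAct agent (class and attribute names assumed from the 0.0.26x source):

```python
from langchain.agents.conversational.output_parser import ConvoOutputParser
from langchain.schema import AgentFinish

class LenientConvoOutputParser(ConvoOutputParser):
    def parse(self, text: str):
        try:
            return super().parse(text)
        except Exception:
            # No recognisable Action block: treat the text as the final reply.
            answer = text.split(f"{self.ai_prefix}:")[-1].strip()
            return AgentFinish({"output": answer}, text)

# pass it via initialize_agent(..., agent_kwargs={"output_parser": LenientConvoOutputParser()})
```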
ConversationalAgent/CHAT_CONVERSATIONAL_REACT_DESCRIPTION doesn't work well with ChatOpenAI 3.5 and Claude models
https://api.github.com/repos/langchain-ai/langchain/issues/9154/comments
7
2023-08-12T13:32:19Z
2024-02-14T16:11:58Z
https://github.com/langchain-ai/langchain/issues/9154
1,848,016,732
9,154
[ "hwchase17", "langchain" ]
### System Info Hello, thanks for the wonderful pgvector integration for Postgres. I am using it on DigitalOcean for my SaaS app, where I have a different schema for each tenant. I installed pgvector in the public schema so that all the tenants can make use of it. My app is built on the OpenAI API, and I am using the default method from langchain to define the db **from langchain.vectorstores.pgvector import DistanceStrategy db = PGVector.from_documents( documents= docs, embedding = embeddings, collection_name= "schema.unique_value", distance_strategy = DistanceStrategy.COSINE, connection_string=CONNECTION_STRING)** This creates two tables in the public schema, and I can see and query their contents. Below are the tables: langchain_pg_collection langchain_pg_embedding But when I wish to rewrite the contents by deleting and even dropping the tables, I can still query the content using the code below: **db = PGVector( collection_name=collection_name, connection_string=CONNECTION_STRING, embedding_function=embeddings, ) retriever = db.as_retriever( search_kwargs={"k": 5} )** The deleted table gets automatically restored when queried. This behavior is strange, and I am not sure what causes it. Also, when I pass a non-existent value, instead of failing, it creates a new record in the collection table. I raised this issue with the pgvector project but was rightly directed to reach out to you. I would appreciate any input to resolve this, please. ### Who can help? @eyurtsev ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction **db = PGVector( collection_name=collection_name, connection_string=CONNECTION_STRING, embedding_function=embeddings, ) retriever = db.as_retriever( search_kwargs={"k": 5} )** ### Expected behavior The retriever should return only the contents of the table. If the contents are not in the table (or the table is not available), it should fail. Thanks in anticipation
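For reference, a sketch of how this is usually handled, assuming langchain's PGVector around 0.0.26x: the store re-creates the `langchain_pg_collection` / `langchain_pg_embedding` tables and the collection row whenever a PGVector object is instantiated, which is why dropped tables appear to come back and unknown collection names create new rows. Deleting through the API (rather than dropping tables manually) avoids this:

```python
from langchain.vectorstores.pgvector import PGVector

# Remove an existing collection (and its embeddings) explicitly:
db = PGVector(
    collection_name=collection_name,
    connection_string=CONNECTION_STRING,
    embedding_function=embeddings,
)
db.delete_collection()

# Or rebuild a collection from scratch, dropping the old one first:
db = PGVector.from_documents(
    documents=docs,
    embedding=embeddings,
    collection_name=collection_name,
    connection_string=CONNECTION_STRING,
    pre_delete_collection=True,  # drop the existing collection before inserting
)
```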
Able to query contents of a PGVector table, even after dropping the table.
https://api.github.com/repos/langchain-ai/langchain/issues/9152/comments
2
2023-08-12T11:13:01Z
2023-08-12T12:07:34Z
https://github.com/langchain-ai/langchain/issues/9152
1,847,942,408
9,152
[ "hwchase17", "langchain" ]
### Feature request I am trying to use the from_llm_and_api_docs() function with an API that only supports parameters via the "params" argument of the requests library. from_llm_and_api_docs works well with APIs that encode parameters in the URL, e.g. https://api.open-meteo.com/v1/forecast?latitude=123&longitude=123 I would need langchain to generate a JSON payload and pass it to the request via the "params" / "json" arguments rather than encoding it in the URL. I looked into the langchain code, and it seems that this line: requests_wrapper = TextRequestsWrapper(headers=headers) needs to be augmented with a "params" parameter. Example: r = requests.get('https://www.domain.com/endpoint',params=params,cookies=cookies, headers=headers,json=json_data) json_data = { 'field_ids': field_ids, 'order': [ { 'field_id': 'rank_org_company', 'sort': sort_order, }, ], 'query': [ { 'type': 'predicate', 'field_id': 'founded_on', 'operator_id': 'gte', 'values': [ '01/01/2018', ], }, { 'type': 'predicate', 'field_id': 'equity_funding_total', 'operator_id': 'lte', 'values': [ { 'value': 15000000, 'currency': 'usd', }, ], }, { 'type': 'predicate', 'field_id': 'location_identifiers', 'operator_id': 'includes', 'values': [ '6106f5dc-823e-5da8-40d7-51612c0b2c4e', ], } ], 'field_aggregators': [], 'collection_id': 'organization.companies', 'limit': 1000, } How would I need to change from_llm_and_api_docs() accordingly? ### Motivation Many APIs are controlled via a JSON payload and not solely via parameters in the URL. ### Your contribution I could look into changing the request wrapper, but maybe there is a more elegant way of doing so.
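As a stop-gap while from_llm_and_api_docs() lacks this, one possible workaround is sketched below — the endpoint URL and payload schema are placeholders taken from this issue, not a real API — wrapping the POST-with-JSON-payload call in a custom tool and letting the LLM produce only the payload:

```python
import json

import requests
from langchain.agents import Tool


def search_companies(llm_generated_payload: str) -> str:
    """Send an LLM-generated JSON payload to the API via the `json=` argument."""
    payload = json.loads(llm_generated_payload)
    response = requests.post(
        "https://www.domain.com/endpoint",  # placeholder endpoint from this issue
        headers={"accept": "application/json"},
        json=payload,
    )
    return response.text


company_search_tool = Tool(
    name="company_search",
    func=search_companies,
    description="Input must be a JSON payload matching the API's search schema.",
)
```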
Custom parameters / JSON payload for the from_llm_and_api_docs() function
https://api.github.com/repos/langchain-ai/langchain/issues/9151/comments
6
2023-08-12T10:53:09Z
2024-05-08T14:23:31Z
https://github.com/langchain-ai/langchain/issues/9151
1,847,930,416
9,151
[ "hwchase17", "langchain" ]
### System Info langchain==0.0.262 aim==3.17.5 aim-ui==3.17.5 aimrecords==0.0.7 aimrocks==0.4.0 python: 3.11.4 env: MacOS ### Who can help? @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction The Aim callback example: https://python.langchain.com/docs/integrations/providers/aim_tracking Running only the LLM example works fine. However, when running any of the chain examples it fails with the error: > Entering new LLMChain chain... Error in AimCallbackHandler.on_chain_start callback: 'input' > Finished chain. Error in AimCallbackHandler.on_chain_end callback: 'output' No further exceptions are displayed, and the trace recorded in Aim is also incomplete. ### Expected behavior The trace should be recorded completely, and no errors should occur in the chain start and chain end callbacks.
Error in AimCallbackHandler.on_chain_start callback: 'input'
https://api.github.com/repos/langchain-ai/langchain/issues/9150/comments
6
2023-08-12T04:58:36Z
2024-02-18T16:07:41Z
https://github.com/langchain-ai/langchain/issues/9150
1,847,753,503
9,150
[ "hwchase17", "langchain" ]
### System Info langchain 0.0.262, text-generation-inference server: https://github.com/huggingface/text-generation-inference ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Start the text-generation-inference server on localhost and verify it is working. =============== from langchain import PromptTemplate, HuggingFaceTextGenInference llm = HuggingFaceTextGenInference( inference_server_url="http://127.0.0.1:80", max_new_tokens=64, top_k=10, top_p=0.95, typical_p=0.95, temperature=1, repetition_penalty=1.03, ) output = llm("What is Machine Learning?") print(output) ================= root@0b801769b7bd:~/langchain_client# python langchain-client.py Traceback (most recent call last): File "/root/langchain_client/langchain-client.py", line 59, in <module> output = llm("What is Machine Learning?") File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 802, in __call__ self.generate( File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 598, in generate output = self._generate_helper( File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 504, in _generate_helper raise e File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 491, in _generate_helper self._generate( File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 977, in _generate self._call(prompt, stop=stop, run_manager=run_manager, **kwargs) File "/usr/local/lib/python3.10/dist-packages/langchain/llms/huggingface_text_gen_inference.py", line 164, in _call res = self.client.generate(prompt, **invocation_params) File "/usr/local/lib/python3.10/dist-packages/text_generation/client.py", line 150, in generate return Response(**payload[0]) File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for Response details -> tokens -> 6 -> logprob none is not an allowed value (type=type_error.none.not_allowed) ### Expected behavior The generated text should be returned from the text-generation-inference server.
Langchain text_generation client hits: pydantic.error_wrappers.ValidationError: 1 validation error for Response
https://api.github.com/repos/langchain-ai/langchain/issues/9146/comments
2
2023-08-11T21:41:36Z
2023-08-11T21:45:22Z
https://github.com/langchain-ai/langchain/issues/9146
1,847,459,877
9,146
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. llm = OpenAI(temperature=0, model_name = args.ModelName) system_message = SystemMessage(content=template) agent_kwargs = { "extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")], "system_message": system_message, } tools = load_tools(["google-serper"], llm=llm) agent_executor = initialize_agent( tools, llm, agent=AgentType.OPENAI_FUNCTIONS, agent_kwargs= agent_kwargs, verbose = True, memory = memory, handle_parsing_errors=True, ) I am getting this error: `GoogleSerperRun._arun() got an unexpected keyword argument 'arg1'`. Can anyone explain why this is happening? ### Suggestion: _No response_
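A possible workaround, sketched under the assumption that the problem is the OPENAI_FUNCTIONS agent passing extra keyword arguments to the single-input GoogleSerperRun tool: expose the search as a plain one-argument Tool instead of using load_tools:

```python
from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper

search = GoogleSerperAPIWrapper()

search_tool = Tool(
    name="google_search",
    func=search.run,
    coroutine=search.arun,  # async path used when the agent is awaited
    description=(
        "Useful for answering questions about current events. "
        "Input should be a single search query string."
    ),
)

# Then pass [search_tool] to initialize_agent instead of load_tools(["google-serper"], ...).
```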
GoogleSerperRun._arun() got an unexpected keyword argument 'arg1'
https://api.github.com/repos/langchain-ai/langchain/issues/9144/comments
1
2023-08-11T19:44:55Z
2023-08-13T23:07:17Z
https://github.com/langchain-ai/langchain/issues/9144
1,847,337,972
9,144
[ "hwchase17", "langchain" ]
### Issue with current documentation: `API Reference` `langchain.agents Functions` table unreadable: ![image](https://github.com/langchain-ai/langchain/assets/2256422/ba09a924-51d2-4f69-bfe8-3d8de1a3939a) The `name` column is so long, which makes the `description` column unreadable. It is because of this value: `agents.agent_toolkits.conversational_retrieval.openai_functions.create_conversational_retrieval_agent` ### Idea or request for content: Make the namespace [and name] of this function shorter.
DOC: `API Reference` `langchain.agents` Functions table unreadable
https://api.github.com/repos/langchain-ai/langchain/issues/9133/comments
5
2023-08-11T17:16:33Z
2023-11-19T22:54:10Z
https://github.com/langchain-ai/langchain/issues/9133
1,847,171,982
9,133
[ "hwchase17", "langchain" ]
### Feature request **Make the intent identification (SELECT vs UPDATE) within the SPARQL QA chain more resilient** As originally described in #7758 and further discussed in #8521, some models struggle with providing an unambiguous response to the intent identification prompt, i.e., they return a sentence that contains both keywords instead of returning exactly one keyword. \ This should be resolvable with a loop that reprompts the model if its response is ambiguous. The loop should probably be limited to one or two retries. ### Motivation Make the intent identification within the SPARQL QA chain more resilient, so that it can more easily be used with models other than the OpenAI ones, e.g., StarCoder and NeoGPT3 mentioned in #7758, which are still able to generate sensible SPARQL ### Your contribution I should be able to create a PR soonish, but if somebody else wants to give this a go I would also be happy to review
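For illustration, a rough sketch of the proposed loop in plain Python (a hypothetical helper, not tied to the chain's internals; `llm_call` stands in for whatever callable maps a prompt string to a response string):

```python
def classify_intent(llm_call, prompt: str, max_retries: int = 2) -> str:
    """Return "SELECT" or "UPDATE", re-prompting when the answer is ambiguous."""
    question = prompt
    for _ in range(max_retries + 1):
        answer = llm_call(question).upper()
        has_select, has_update = "SELECT" in answer, "UPDATE" in answer
        if has_select != has_update:  # exactly one keyword -> unambiguous
            return "SELECT" if has_select else "UPDATE"
        # Ambiguous (both or neither keyword): ask again with a stricter instruction.
        question = prompt + "\nAnswer with exactly one word: SELECT or UPDATE. Do not explain."
    raise ValueError("Could not get an unambiguous SELECT/UPDATE classification.")
```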
Make intent identification in SPARQL QA chain more resilient
https://api.github.com/repos/langchain-ai/langchain/issues/9132/comments
5
2023-08-11T17:09:45Z
2024-02-18T17:49:30Z
https://github.com/langchain-ai/langchain/issues/9132
1,847,163,390
9,132
[ "hwchase17", "langchain" ]
### System Info Python 3.10 LangChain: 0.0.245 ### Who can help? @hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I wrote a 'Python interpreter' tool, as in the sample code below: ``` class PythonInterpreterTool(BaseTool): name = "python_interpreter" name_display = "Python Interpreter" description = """Give me the Python to execute,it returns the execution result""" def _run(self, code: str) -> str: with PythonInterpreter() as executor: # type: ignore output = executor.run_python_code(code) return output async def _arun(self, code: str) -> str: raise NotImplementedError ``` But many times I get a "Could not parse tool input... because the `arguments` is not valid JSON." error: Could not parse tool input: {'name': 'python_interpreter', 'arguments': '{\n "__arg1": "\nimport moviepy.editor as mp\n\nvideo_path = \'https://video-clip.oss.com/upload/40ae1ae0-3799-11ee-9c80-476f00f64986.mp4\'\noutput_path = \'output.mp3\'\n\nvideo = mp.VideoFileClip(video_path)\nvideo.audio.write_audiofile(output_path)\n"\n}'} because the `arguments` is not valid JSON. ### Expected behavior The tool input should be emitted as valid JSON so the tool call can be parsed and executed instead of failing with the error above.
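One thing that may help (a sketch, not a guaranteed fix): declare an explicit `args_schema` on the tool so the OpenAI function call is emitted with a named `code` argument rather than the auto-generated single `__arg1` string, which is where the unescaped-newline JSON tends to break. `PythonInterpreter` below refers to the user-defined class from this issue, not a langchain API.

```python
from typing import Type

from pydantic import BaseModel, Field
from langchain.tools import BaseTool


class PythonCodeInput(BaseModel):
    code: str = Field(description="Python source code to execute")


class PythonInterpreterTool(BaseTool):
    name = "python_interpreter"
    description = "Give me the Python to execute, it returns the execution result"
    args_schema: Type[BaseModel] = PythonCodeInput

    def _run(self, code: str) -> str:
        with PythonInterpreter() as executor:  # user-defined sandbox from this issue
            return executor.run_python_code(code)

    async def _arun(self, code: str) -> str:
        raise NotImplementedError
```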
LangChain tools as GPT functions always fail at JSON decode
https://api.github.com/repos/langchain-ai/langchain/issues/9130/comments
4
2023-08-11T16:48:08Z
2023-11-19T16:05:06Z
https://github.com/langchain-ai/langchain/issues/9130
1,847,137,279
9,130
[ "hwchase17", "langchain" ]
### System Info I have an extremely simple setup where I created a GH app following the [Github App Quickstart Guide](https://docs.github.com/en/apps/creating-github-apps/writing-code-for-a-github-app/quickstart). I then start a blank repo with the template `README.md` (just the name of the repo in a title). The agent can successfully create a pull request from one request where I ask the agent to make an ascii art of a whale & insert a new section in the README.md, but after manually merging that pull request myself the agent keeps incorrectly finding the initial value of the `README.md` with (what I'm assuming is) an incorrect commit hash that's never existed in [my repo](https://github.com/Zak-Alford/gh-test-repo2): ``` > Entering new AgentExecutor chain... First, I need to read the current README.md to know its contents. Then, I will update the file with the riddle. Finally, I will create a pull request. Action: Read File Action Input: README.md Observation: # gh-test-repo2 Thought:The file opens with a header for the repository. Now, I will add the riddle to the 'Weekly Challenge' section. Action: Update File Action Input: README.md OLD <<<< # gh-test-repo2 >>>> OLD NEW <<<< # gh-test-repo2 ## Weekly Challenge Riddle: What has keys but can't open locks? >>>> NEW Observation: Unable to update file due to error: 409 {"message": "README.md does not match 87f560c4c6fd9eecce7374b3ac8f5517a975bf63", "documentation_url": "https://docs.github.com/rest/repos/contents#create-or-update-file-contents"} Thought:It seems like there was a conflict when trying to update the file. The file might have been updated after I last read it. I will try to read the file again and then update it. Action: Read File Action Input: README.md Observation: # gh-test-repo2 Thought:The file content hasn't changed, so I'll attempt to update it again. Action: Update File Action Input: README.md OLD <<<< # gh-test-repo2 >>>> OLD NEW <<<< # gh-test-repo2 ## Weekly Challenge Riddle: What has keys but can't open locks? >>>> NEW Observation: Unable to update file due to error: 409 {"message": "README.md does not match 87f560c4c6fd9eecce7374b3ac8f5517a975bf63", "documentation_url": "https://docs.github.com/rest/repos/contents#create-or-update-file-contents"} Thought:There seems to be a problem with updating the file. Maybe the file was updated by someone else in the meantime, or there's an issue with the way I'm trying to update it. I'll try to create a new file with the riddle instead, and then make a pull request to merge it with the README.md. Action: Create File Action Input: WeeklyChallenge.md ## Weekly Challenge Riddle: What has keys but can't open locks? Observation: Unable to make file due to error: 404 {"message": "Not Found", "documentation_url": "https://docs.github.com/rest/repos/contents#get-repository-content"} Thought:The creation of the file failed, perhaps the filepath or file name is incorrect. I will try to create the pull request directly and mention the changes proposed in the pull request description. Action: Create Pull Request Action Input: Update README.md with Weekly Challenge In this pull request, I propose adding the following riddle to the 'Weekly Challenge' section of the README.md: Riddle: What has keys but can't open locks? 
Observation: Unable to make pull request due to error: 422 {"message": "Validation Failed", "errors": [{"resource": "PullRequest", "code": "custom", "message": "No commits between test and bot-branch"}], "documentation_url": "https://docs.github.com/rest/pulls/pulls#create-a-pull-request"} Thought:It seems like the pull request failed because there are no commits between the test and the bot-branch. This could be because I haven't successfully updated or created any files, so there's no changes to commit. I'll try to update the README.md file again, making sure to include the entire content of the file in the OLD <<<< and >>>> OLD tags, in case that was the issue. Action: Update File Action Input: README.md OLD <<<< # gh-test-repo2 >>>> OLD NEW <<<< # gh-test-repo2 ## Weekly Challenge Riddle: What has keys but can't open locks? >>>> NEW Observation: Unable to update file due to error: 409 {"message": "README.md does not match 87f560c4c6fd9eecce7374b3ac8f5517a975bf63", "documentation_url": "https://docs.github.com/rest/repos/contents#create-or-update-file-contents"} Thought:It seems like the error persists despite multiple attempts. There might be an issue with the GitHub repository or the API. At this point, I am unable to complete the task as instructed. Final Answer: I was unable to complete the task due to an error while updating the README.md file. > Finished chain. ``` Code I'm using to initialize the GH agent: ``` import os from langchain.agents import AgentType from langchain.agents import initialize_agent from langchain.agents.agent_toolkits.github.toolkit import GitHubToolkit from langchain.llms import OpenAI from langchain.chat_models import ChatOpenAI from langchain.utilities.github import GitHubAPIWrapper import json import custom_tools # List of required environment variables: env_vars = [ "GITHUB_APP_ID", "GITHUB_APP_PRIVATE_KEY", "GITHUB_REPOSITORY", "GITHUB_BRANCH", "GITHUB_BASE_BRANCH", "OPENAI_API_KEY", ] # Check your json file for key values with open("envvars.json", "r") as f: env_var_values = json.load(f) for var in env_vars: # Check that each key exists. If it doesn't, set it to be "" and then complain later if env_var_values.get(var, "") != "": os.environ[var] = env_var_values[var] else: # Complaint line raise Exception(f"The environment variable {var} was not set. You must set this value to continue.") bot_branch = os.environ["GITHUB_BRANCH"] gh_base_branch = os.environ["GITHUB_BASE_BRANCH"] llm = ChatOpenAI(model="gpt-4") github = GitHubAPIWrapper() toolkit = GitHubToolkit.from_github_api_wrapper(github) tools = [] # unwanted_tools = ['Get Issue','Delete File'] unwanted_tools = [] for tool in toolkit.get_tools(): if tool.name not in unwanted_tools: tools.append(tool) agent = initialize_agent( tools=tools, llm=llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True ) # request = f"Draw an ascii art whale and add it once to the 'Nature' section of the README.md file. Make a pull request back to {gh_base_branch}." request = f"Do a git pull first. Then write a riddle and it once to the 'Weekly Challenge' section of the README.md file. Don't write the answer to the riddle. Make a pull request back to {gh_base_branch}." agent.run(request) ``` scrubbed JSON file I'm using to grab env vars from: ``` { "GITHUB_APP_ID": "<my-app-id>", "GITHUB_APP_PRIVATE_KEY": "<my-path-to-pem>", "GITHUB_REPOSITORY": "Zak-Alford/gh-test-repo2", "GITHUB_BRANCH": "bot-branch", "GITHUB_BASE_BRANCH": "test", "OPENAI_API_KEY": "<my-openai-api-key>" } ``` ### Who can help? 
@hw ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [X] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Copy my script included above and name the file 2. Run `py (or python3) <filename>.py` ### Expected behavior The github agent should find the latest commit that it itself committed to the branch before creating a new commit with changes.
[Request] Note that the 'base branch' needs to match the repository's 'default branch'
https://api.github.com/repos/langchain-ai/langchain/issues/9129/comments
4
2023-08-11T16:31:15Z
2023-11-20T16:05:11Z
https://github.com/langchain-ai/langchain/issues/9129
1,847,115,156
9,129
[ "hwchase17", "langchain" ]
### System Info In SageMaker. langchain==0.0.256 or 0.0.249 (I tried both) Image: Data Science 3.0 Kernel: Python 3 Instance type: ml.t3.medium 2 vCPU + 4 GiB ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/03_QuestionAnswering/01_qa_w_rag_claude.ipynb I'm trying to follow this notebook; I increased the data input size: from urllib.request import urlretrieve ``` os.makedirs("data", exist_ok=True) files = [ "https://www.irs.gov/pub/irs-pdf/p1544.pdf", "https://www.irs.gov/pub/irs-pdf/p15.pdf", "https://www.irs.gov/pub/irs-pdf/p1212.pdf", "https://www.irs.gov/pub/irs-pdf/p3.pdf", "https://www.irs.gov/pub/irs-pdf/p17.pdf", "https://www.irs.gov/pub/irs-pdf/p51.pdf", "https://www.irs.gov/pub/irs-pdf/p54.pdf", ] for url in files: file_path = os.path.join("data", url.rpartition("/")[2]) urlretrieve(url, file_path) ``` my data input: Average length among 1012 documents loaded is 2320 characters. After the split we have 1167 documents more than the original 1012. Average length among 1167 documents (after split) is 2011 characters. ``` import numpy as np from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter from langchain.document_loaders import PyPDFLoader, PyPDFDirectoryLoader loader = PyPDFDirectoryLoader("./data/") documents = loader.load() # - in our testing Character split works better with this PDF data set text_splitter = RecursiveCharacterTextSplitter( # Set a really small chunk size, just to show. chunk_size = 1000, chunk_overlap = 100, ) docs = text_splitter.split_documents(documents) avg_doc_length = lambda documents: sum([len(doc.page_content) for doc in documents])//len(documents) avg_char_count_pre = avg_doc_length(documents) avg_char_count_post = avg_doc_length(docs) print(f'Average length among {len(documents)} documents loaded is {avg_char_count_pre} characters.') print(f'After the split we have {len(docs)} documents more than the original {len(documents)}.') print(f'Average length among {len(docs)} documents (after split) is {avg_char_count_post} characters.') from langchain.chains.question_answering import load_qa_chain from langchain.vectorstores import FAISS from langchain.indexes import VectorstoreIndexCreator from langchain.indexes.vectorstore import VectorStoreIndexWrapper vectorstore_faiss = FAISS.from_documents( docs, bedrock_embeddings, ) wrapper_store_faiss = VectorStoreIndexWrapper(vectorstore=vectorstore_faiss) ``` The funny thing is, if my document set is smaller (docs[:5]), it works.
vectorstore_faiss = FAISS.from_documents( **docs[:5]**, bedrock_embeddings, ) error: --------------------------------------------------------------------------- ValidationException Traceback (most recent call last) File /opt/conda/lib/python3.10/site-packages/langchain/embeddings/bedrock.py:120, in BedrockEmbeddings._embedding_func(self, text) 119 try: --> 120 response = self.client.invoke_model( 121 body=body, 122 modelId=self.model_id, 123 accept="application/json", 124 contentType="application/json", 125 ) 126 response_body = json.loads(response.get("body").read()) File /opt/conda/lib/python3.10/site-packages/botocore/client.py:535, in ClientCreator._create_api_method.<locals>._api_call(self, *args, **kwargs) 534 # The "self" in this scope is referring to the BaseClient. --> 535 return self._make_api_call(operation_name, kwargs) File /opt/conda/lib/python3.10/site-packages/botocore/client.py:980, in BaseClient._make_api_call(self, operation_name, api_params) 979 error_class = self.exceptions.from_code(error_code) --> 980 raise error_class(parsed_response, operation_name) 981 else: ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: The provided inference configurations are invalid During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) Cell In[35], line 10 4 from langchain.indexes.vectorstore import VectorStoreIndexWrapper 6 7 8 # 9 # ---> 10 vectorstore_faiss = FAISS.from_documents( 11 docs, 12 bedrock_embeddings, 13 ) 15 wrapper_store_faiss = VectorStoreIndexWrapper(vectorstore=vectorstore_faiss) File /opt/conda/lib/python3.10/site-packages/langchain/vectorstores/base.py:420, in VectorStore.from_documents(cls, documents, embedding, **kwargs) 418 texts = [d.page_content for d in documents] 419 metadatas = [d.metadata for d in documents] --> 420 return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs) File /opt/conda/lib/python3.10/site-packages/langchain/vectorstores/faiss.py:607, in FAISS.from_texts(cls, texts, embedding, metadatas, ids, **kwargs) 581 @classmethod 582 def from_texts( 583 cls, (...) 588 **kwargs: Any, 589 ) -> FAISS: 590 """Construct FAISS wrapper from raw documents. 591 592 This is a user friendly interface that: (...) 605 faiss = FAISS.from_texts(texts, embeddings) 606 """ --> 607 embeddings = embedding.embed_documents(texts) 608 return cls.__from( 609 texts, 610 embeddings, (...) 614 **kwargs, 615 ) File /opt/conda/lib/python3.10/site-packages/langchain/embeddings/bedrock.py:148, in BedrockEmbeddings.embed_documents(self, texts, chunk_size) 146 results = [] 147 for text in texts: --> 148 response = self._embedding_func(text) 149 results.append(response) 150 return results File /opt/conda/lib/python3.10/site-packages/langchain/embeddings/bedrock.py:129, in BedrockEmbeddings._embedding_func(self, text) 127 return response_body.get("embedding") 128 except Exception as e: --> 129 raise ValueError(f"Error raised by inference endpoint: {e}") ValueError: Error raised by inference endpoint: An error occurred (ValidationException) when calling the InvokeModel operation: The provided inference configurations are invalid ### Expected behavior I would like to generate embeddings for the entire corpus and stored in a vector store.
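A diagnostic sketch that may help narrow this down (assumption: one or more of the larger chunks trips an input limit or is otherwise rejected by the Titan embeddings endpoint). Embedding chunk by chunk pinpoints the failing documents instead of aborting the whole FAISS build; it reuses the `docs`, `bedrock_embeddings`, and `FAISS` objects from the snippet above:

```python
bad_chunks = []
good_docs = []
for i, doc in enumerate(docs):
    try:
        bedrock_embeddings.embed_query(doc.page_content)
        good_docs.append(doc)
    except Exception as exc:
        bad_chunks.append((i, len(doc.page_content), str(exc)))

print(f"{len(bad_chunks)} failing chunks out of {len(docs)}")
for i, length, err in bad_chunks[:5]:
    print(i, length, err)

# Build the index only from the chunks that embed successfully:
vectorstore_faiss = FAISS.from_documents(good_docs, bedrock_embeddings)
```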
Inference configurations are invalid for BedrockEmbeddings models
https://api.github.com/repos/langchain-ai/langchain/issues/9127/comments
11
2023-08-11T15:33:52Z
2023-12-13T16:07:43Z
https://github.com/langchain-ai/langchain/issues/9127
1,847,039,060
9,127
[ "hwchase17", "langchain" ]
### System Info langchain==0.0.260 model = "gpt-3.5-turbo-16k" temperature = 0.0 ### Who can help? @hwchase17 and @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [X] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` search_agent = initialize_agent( tools=tools, llm=llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, memory = memory, agent_kwargs = { #"suffix": NEW_SUFFIX, "memory_prompts": [chat_history], "input_variables": ["input", "agent_scratchpad", "chat_history"] }, #prompt=cf_template, #handle_parsing_errors=True, #max_iterations=2, #max_execution_time=10, verbose = True, handle_parsing_errors=True, ) from pprint import pprint #planner = load_chat_planner(llm) #executor = load_agent_executor(llm, tools, verbose=True) #search_agent = PlanAndExecute(planner=planner, executor=executor, verbose=True) #print(search_agent.agent.llm_chain.prompt) search_agent.agent.llm_chain.verbose=True question = "What are the stock price for Apple" response = search_agent.run(input = question) print(response) #pprint(search_agent.agent.llm_chain.prompt) question = "What about Google." response = search_agent.run(input = question) print(response) #pprint(search_agent.agent.llm_chain.prompt) question = "Show me news for fires in Greece in the last month." response = search_agent.run(input = question) print(response) question = "what about Macedonia in the previous week." response = search_agent.run(input = question) print(response) ``` ### Expected behavior For the query: ``` question = "What about Google." response = search_agent.run(input = question) ``` The agent response is: ``` Human: What about Google. AI: To provide the current stock price for Google, I will use the "get_current_stock_price" tool. Please wait a moment. ``` Instead, it should execute the tool and return results similar to the Apple query, which was the first query.
Intermediate answer from STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION received as a final answer
https://api.github.com/repos/langchain-ai/langchain/issues/9122/comments
13
2023-08-11T13:52:19Z
2024-01-30T00:41:15Z
https://github.com/langchain-ai/langchain/issues/9122
1,846,866,526
9,122
[ "hwchase17", "langchain" ]
### Feature request Many parameters described on the TGI [Swagger](https://huggingface.github.io/text-generation-inference/#/Text%20Generation%20Inference/generate) find their direct equivalent in the corresponding LangChain [API](https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html?highlight=textgeninference#langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference). ### Motivation At least one of those parameters is missing: the `do_sample` parameter which I needed. ### Your contribution Why is this parameter missing? How hard would it be to add this and can this be circumvented?
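A workaround sketch while the parameter is missing from the wrapper: the underlying `text_generation` client does expose `do_sample`, so it can be called directly (the endpoint URL below is a placeholder):

```python
from text_generation import Client

client = Client("http://127.0.0.1:8080")  # your TGI endpoint

response = client.generate(
    "What is Deep Learning?",
    max_new_tokens=64,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
print(response.generated_text)
```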
Add `do_sample` to HuggingFaceTextGenInference
https://api.github.com/repos/langchain-ai/langchain/issues/9120/comments
3
2023-08-11T12:42:50Z
2023-11-13T08:45:47Z
https://github.com/langchain-ai/langchain/issues/9120
1,846,760,511
9,120