Dataset schema (column, type, min, max):

| Column | Type | Min | Max |
|---|---|---|---|
| issue_owner_repo | list length | 2 | 2 |
| issue_body | string length | 0 | 261k |
| issue_title | string length | 1 | 925 |
| issue_comments_url | string length | 56 | 81 |
| issue_comments_count | int64 | 0 | 2.5k |
| issue_created_at | string length | 20 | 20 |
| issue_updated_at | string length | 20 | 20 |
| issue_html_url | string length | 37 | 62 |
| issue_github_id | int64 | 387k | 2.46B |
| issue_number | int64 | 1 | 127k |
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. When using the new GPT models' JSON mode by setting `response_format={"type": "json_object"}`, the LangChain agent fails to parse the OpenAI output. Is there any plan to support that? ### Suggestion: _No response_
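A workaround that appears to cover the model side (a minimal sketch, assuming `ChatOpenAI` simply forwards `model_kwargs` to the chat-completions request; the agent's output parser on top of this would still need to expect JSON rather than ReAct-style text):

```python
from langchain.chat_models import ChatOpenAI

# Forward OpenAI's JSON mode through model_kwargs; note the prompt must still
# mention "JSON" somewhere, per OpenAI's json_object requirements.
llm = ChatOpenAI(
    model="gpt-4-1106-preview",
    model_kwargs={"response_format": {"type": "json_object"}},
)
```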
Issue: langchain agent doesn't work with the new JSON mode of gpt-4-1106-preview
https://api.github.com/repos/langchain-ai/langchain/issues/13003/comments
11
2023-11-07T15:52:22Z
2024-04-12T16:14:06Z
https://github.com/langchain-ai/langchain/issues/13003
1,981,682,406
13,003
[ "hwchase17", "langchain" ]
### Discussed in https://github.com/langchain-ai/langchain/discussions/12598

Originally posted by **nelsoni-talentu** October 30, 2023

Hi community, I am developing an app to interact (Q&A) with several documents previously embedded and stored in a MongoDB Atlas cluster. To reach this goal, I wrote this code:

```python
db_client = MongoClient("mongodb+srv://****")
db_name = "db_name"
collection_name = "documents"
collection = db_client[db_name][collection_name]
index_name = "idx_document_embedding"
vectorstore = MongoDBAtlasVectorSearch(collection=collection, index_name=index_name, embedding=OpenAIEmbeddings())
index = VectorStoreIndexWrapper(vectorstore=vectorstore)
chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
    retriever=index.vectorstore.as_retriever(
        search_kwargs={"k": 1, "pre_filter": {"text": {"path": "project_id", "query": "some project id"}}}
    ),
)

chat_history = []
result = {}
query = "some question here"
result = chain({"question": query, "chat_history": chat_history})
```

When I run this snippet using LangChain 0.0.303 it works smoothly, but when I upgrade LangChain to its latest version available on pip, I get this error:

```
OperationFailure: Operand type is not supported for $vectorSearch: object, full error: {'ok': 0.0, 'errmsg': 'Operand type is not supported for $vectorSearch: object', 'code': 7828301, 'codeName': 'Location7828301', '$clusterTime': {'clusterTime': Timestamp(1698702099, 4), 'signature': {'hash': b'=\x10\x90\xa8\x17fa4z\x95\xcb\x1c\xb1\xd1\x96XOUf\xe7', 'keyId': 7256866503843119119}}, 'operationTime': Timestamp(1698702099, 4)}
```

Can anybody help me? In the meantime I will deploy my app with LangChain 0.0.303, but I would prefer to deploy the latest version for future compatibility. Thanks for your help. Nelson!
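One possible adjustment (only a sketch, assuming newer releases pass `pre_filter` straight through as the `$vectorSearch` `filter` document, which expects MQL operators rather than the older text-search shape):

```python
# Hypothetical filter shape for newer versions; "project_id" would also need to
# be declared as a filter field in the Atlas vector-search index definition.
retriever = index.vectorstore.as_retriever(
    search_kwargs={
        "k": 1,
        "pre_filter": {"project_id": {"$eq": "some project id"}},
    }
)
```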
ConversationalRetrievalChain.from_llm with MongoDB is failing.
https://api.github.com/repos/langchain-ai/langchain/issues/12996/comments
3
2023-11-07T14:00:15Z
2023-11-16T20:37:46Z
https://github.com/langchain-ai/langchain/issues/12996
1,981,433,036
12,996
[ "hwchase17", "langchain" ]
When we employ the new models introduced during DevDay, gpt-3.5-turbo-1106 and gpt-4-1106-preview, get_openai_callback() does not accurately display their total cost, which includes both the prompt and completion token cost. The subsequent minimal working example illustrates this issue. The costs should not be $0.0, but rather ($0.01 × 30 + $0.03 × 598) / 1000 ≈ $0.0182 for gpt-4-1106-preview and ($0.0010 × 30 + $0.0020 × 164) / 1000 ≈ $0.00036 for gpt-3.5-turbo-1106, since the listed prices are per 1,000 tokens.

## MWE

```python
from langchain.schema.messages import HumanMessage, SystemMessage
from langchain.chat_models import ChatOpenAI
from langchain.callbacks import get_openai_callback

# export OPENAI_API_KEY="..."

replies = []
for model in [
    "gpt-4-1106-preview",
    "gpt-3.5-turbo-1106",
]:
    chat = ChatOpenAI(model=model)
    messages = [
        SystemMessage(content="You're a helpful assistant"),
        HumanMessage(content="What is the purpose of model regularization? Explain comprehensibly in English."),
    ]
    with get_openai_callback() as cb:
        reply = chat.invoke(messages).content
        print(cb, reply, sep="\n------\n")
        print("="*16)
    replies.append(reply)
replies
```

## Output of MWE

```
Tokens Used: 628
    Prompt Tokens: 30
    Completion Tokens: 598
Successful Requests: 1
Total Cost (USD): $0.0
------
Model regularization is a technique used in machine learning to prevent a model from overfitting the training data and to improve its generalization to unseen data. Overfitting occurs when a model learns the training data too well, including its noise and outliers, which often results in poor performance on new, unseen data because the model is too tailored to the specifics of the training set. Here is a comprehensible explanation of the purpose of regularization:

1. **Prevent Overfitting**: Regularization helps to keep the model simple by introducing a penalty for more complex models. By doing so, it discourages the learning of a model that is too complex for the underlying pattern in the data. A simpler model may not perform as well on the training data, but it can perform better on new data because it captures the general trend rather than the specific noise.

2. **Improves Generalization**: The main goal of a machine learning model is to make accurate predictions on new, unseen data. Regularization helps by ensuring that the model's performance during training is more reflective of how it will perform on new data. This is achieved by penalizing the complexity of the model and thereby encouraging the model to be more robust to variations in data.

3. **Controls Model Complexity**: Regularization techniques typically introduce a hyperparameter that controls the trade-off between fitting the training data well and keeping the model complexity low. By adjusting this hyperparameter, one can find a good balance where the model is complex enough to capture the underlying patterns but not so complex that it starts fitting the noise.

4. **Introduces Bias to Reduce Variance**: In statistical terms, regularization introduces a small amount of bias to the model to significantly reduce its variance. This trade-off is beneficial because high variance models are those that overfit the data, while a little bias can make the model more stable and accurate in its predictions on new data.

5. **Handles Multicollinearity**: In cases where the features (input variables) are highly correlated, it can be difficult for the model to estimate the relationship of each feature with the output variable. Regularization techniques can help reduce the impact of multicollinearity by penalizing the coefficients of the correlated features, leading to more stable estimates.

Common regularization techniques include:

- **L1 Regularization (Lasso)**: This adds a penalty equal to the absolute value of the magnitude of coefficients. It can lead to some coefficients being shrunk to zero, effectively performing feature selection.
- **L2 Regularization (Ridge)**: This adds a penalty equal to the square of the magnitude of coefficients. This generally shrinks coefficients evenly but does not set them to zero.
- **Elastic Net**: This is a combination of L1 and L2 regularization and balances the properties of both methods.

In summary, regularization is a crucial step in building machine learning models that are effective at making predictions on new, unseen data by keeping the models simpler and more generalizable.
================
Tokens Used: 194
    Prompt Tokens: 30
    Completion Tokens: 164
Successful Requests: 1
Total Cost (USD): $0.0
------
Model regularization is a technique used in machine learning to prevent overfitting. Overfitting occurs when a model learns the training data too well and performs poorly on new, unseen data. Regularization helps to address this issue by adding a penalty to the model's learning process, discouraging it from becoming too complex and fitting noise in the data. There are different types of regularization, such as L1 and L2 regularization, which add a penalty based on the magnitude of the model's coefficients. By using regularization, the model is encouraged to focus on the most important features in the data and avoid being overly sensitive to small fluctuations. In simple terms, model regularization helps to keep the model in check and prevent it from becoming too specialized to the training data, improving its ability to make accurate predictions on new, unseen data.
================
```

It is necessary to include the cost per 1,000 tokens in `MODEL_COST_PER_1K_TOKENS`, which is defined in the following lines, in accordance with OpenAI's official pricing page ([here](https://openai.com/pricing#gpt-4-turbo) for gpt-4-turbo; [here](https://openai.com/pricing#gpt-4-turbo:~:text=%C2%A0/%201K%20tokens-,GPT%2D3.5%20Turbo,-GPT%2D3.5%20Turbo) for gpt-3.5-turbo).

https://github.com/langchain-ai/langchain/blob/ff87f4b4f90c1d13ddb79120c6ded6c0af2959b7/libs/langchain/langchain/callbacks/openai_info.py#L7C1-L35
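Until the table is updated upstream, one stopgap is to patch the dict at runtime (a sketch; the `-completion` suffix follows the naming convention the callback uses to look up completion prices):

```python
from langchain.callbacks.openai_info import MODEL_COST_PER_1K_TOKENS

# Patch in the missing per-1K-token prices before using get_openai_callback().
MODEL_COST_PER_1K_TOKENS.update({
    "gpt-4-1106-preview": 0.01,
    "gpt-4-1106-preview-completion": 0.03,
    "gpt-3.5-turbo-1106": 0.0010,
    "gpt-3.5-turbo-1106-completion": 0.0020,
})
```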
get_openai_callback() does not show the cost taken from new OpenAI's model ("gpt-4-1106-preview" and "gpt-3.5-turbo-1106")
https://api.github.com/repos/langchain-ai/langchain/issues/12994/comments
7
2023-11-07T13:12:17Z
2024-02-02T12:14:09Z
https://github.com/langchain-ai/langchain/issues/12994
1,981,323,192
12,994
[ "hwchase17", "langchain" ]
### Feature request

We need to be able to customize `PYDANTIC_FORMAT_INSTRUCTIONS` in `PydanticOutputParser`. If our prompt is written in a different language, e.g. in Spanish, then the English `PYDANTIC_FORMAT_INSTRUCTIONS` causes problems for the model.

### Motivation

Due to the inflexibility in specifying format instructions, the model often doesn't respect my format specification. Because of this, it is often more useful to specify the instructions manually.

### Your contribution

Solution proposal: `PydanticOutputParser` (and possibly other parsers) can accept a new parameter `pydantic_format_instructions`, where the user will be able to specify these format instructions manually. A similar solution could be used for other output parsers.
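In the meantime, a subclass can override the instruction text (a sketch; the Spanish template is a hypothetical stand-in, and the real `get_format_instructions` also trims the schema before embedding it, which this skips):

```python
from langchain.output_parsers import PydanticOutputParser

SPANISH_FORMAT_INSTRUCTIONS = (  # hypothetical translated template
    "La salida debe ser un objeto JSON que cumpla el siguiente esquema:\n"
    "```\n{schema}\n```"
)

class SpanishPydanticOutputParser(PydanticOutputParser):
    def get_format_instructions(self) -> str:
        # Embed the pydantic model's JSON schema into the translated template.
        schema = self.pydantic_object.schema_json()
        return SPANISH_FORMAT_INSTRUCTIONS.format(schema=schema)
```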
Parametrize hardcoded PYDANTIC_FORMAT_INSTRUCTIONS in PydanticOutputParser
https://api.github.com/repos/langchain-ai/langchain/issues/12992/comments
2
2023-11-07T11:45:08Z
2024-02-13T16:07:18Z
https://github.com/langchain-ai/langchain/issues/12992
1,981,164,657
12,992
[ "hwchase17", "langchain" ]
### System Info latest langchain ### Who can help? @rlancemarti ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Hi guys, When using GPT4AllEmbeddings there is no way to pass n_threads which will go to Embed4All Could you please add it: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/embeddings/gpt4all.py#L29C32-L29C41 Thanks. ### Expected behavior There is a way to provide n_threads to GPT4AllEmbeddings GPT4AllEmbeddings(n_threads=8)
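Until that parameter exists upstream, a stopgap subclass can build the client itself (a minimal sketch, assuming gpt4all's `Embed4All` accepts an `n_threads` argument and forwards it to the backend):

```python
from gpt4all import Embed4All
from langchain.pydantic_v1 import root_validator
from langchain.embeddings import GPT4AllEmbeddings

class ThreadedGPT4AllEmbeddings(GPT4AllEmbeddings):
    n_threads: int = 8  # hypothetical field forwarded to the gpt4all backend

    @root_validator()
    def validate_environment(cls, values):
        # Mirror the parent validator, but pass the thread count through.
        values["client"] = Embed4All(n_threads=values.get("n_threads", 8))
        return values
```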
GPT4AllEmbeddings should get n_threads like Embed4All
https://api.github.com/repos/langchain-ai/langchain/issues/12991/comments
2
2023-11-07T10:44:38Z
2024-02-13T16:07:22Z
https://github.com/langchain-ai/langchain/issues/12991
1,981,060,545
12,991
[ "hwchase17", "langchain" ]
### System Info

langchain 0.0.331
python 3.10.13
openai 1.1.1

### Who can help?

_No response_

### Information

- [x] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Run cookbook/openai_v1_cookbook.ipynb

```
---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
/Users/ianz/Work/langchain/cookbook/openai_v1_cookbook.ipynb Cell 5 line 1
----> 1 chat = ChatOpenAI(model="gpt-4-vision-preview", max_tokens=256)
      2 chat.invoke(
      3     [
      4         HumanMessage(
(...)
     16     ]
     17 )

File ~/Work/miniconda3/envs/autoxx/lib/python3.10/site-packages/langchain/load/serializable.py:97, in Serializable.__init__(self, **kwargs)
     96 def __init__(self, **kwargs: Any) -> None:
---> 97     super().__init__(**kwargs)
     98     self._lc_kwargs = kwargs

File ~/Work/miniconda3/envs/autoxx/lib/python3.10/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for ChatOpenAI
__root__
  `openai` has no `ChatCompletion` attribute, this is likely due to an old version of the openai package. Try upgrading it with `pip install --upgrade openai`. (type=value_error)
```

### Expected behavior

langchain seems to be incompatible with openai v1, I would like to know when it will be fixed
Bugs of cookbook/openai_v1_cookbook.ipynb
https://api.github.com/repos/langchain-ai/langchain/issues/12990/comments
6
2023-11-07T10:26:27Z
2024-02-07T19:26:01Z
https://github.com/langchain-ai/langchain/issues/12990
1,981,028,074
12,990
[ "hwchase17", "langchain" ]
### System Info ``` Python 3.11.5 langchain 0.0.315 macOS 14.2 ``` ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Execute the following Wikipedia Agent against llama2: ``` from langchain.llms import Ollama from langchain.agents.agent_toolkits import create_python_agent from langchain.agents import load_tools, initialize_agent from langchain.agents import AgentType llm = Ollama(model="llama2:13b") tools = load_tools(["wikipedia"], llm=llm) agent = initialize_agent(tools, llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True) agent("Who is Ken Mandl?") ``` Output varies but here's what I got just now, which is very wrong: > Sure, I'd be happy to help! Here's my response: > Question: Who is Ken Mandl? > Thought: Hmmm...I'm not sure who that is. Let me check Wikipedia. > Action: `{"action": "Wikipedia","action_input": "Ken Mandl"}` Observation: According to Wikipedia, Ken Mandl is a computer scientist and entrepreneur known for his work in the field of artificial intelligence. He is the co-founder and CEO of several companies, including Numenta and Palantiri, and has written extensively on the topic of AI. Thought: Ah, I see! Now I know who Ken Mandl is. Final Answer: Ken Mandl is a computer scientist and entrepreneur known for his work in artificial intelligence. ### Expected behavior Since there is no Wikipedia article for this person, the output should reflect that. The opposite also happens, Wikipedia articles for lesser known people are not found or confabulations about them are being returned. Is there a way to check the query against Wikipedia and the response from Wikipedia? I do not see it with either `set_debug` nor `set_verbose`.
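To inspect what the model actually receives from the tool (rather than what it claims to have observed), you can call the wrapper directly; this is the same utility that `load_tools(["wikipedia"])` wires in:

```python
from langchain.utilities import WikipediaAPIWrapper

wiki = WikipediaAPIWrapper(top_k_results=1)
# If no page matches, the wrapper returns a "No good Wikipedia Search Result
# was found" message, which a weak model may then paper over with invention.
print(wiki.run("Ken Mandl"))
```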
Wikipedia Agent confabulates answers
https://api.github.com/repos/langchain-ai/langchain/issues/12989/comments
5
2023-11-07T09:51:37Z
2024-02-13T16:07:27Z
https://github.com/langchain-ai/langchain/issues/12989
1,980,958,342
12,989
[ "hwchase17", "langchain" ]
### System Info Python 3.11.4 LangChain 0.0.321 Platform info (WSL2): DISTRIB_ID=Ubuntu DISTRIB_RELEASE=20.04 DISTRIB_CODENAME=focal DISTRIB_DESCRIPTION="Ubuntu 20.04.6 LTS" ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I'm trying to embed a few documents as shown in the code below: ```python # Pass the state object as a parameter @retry( retry=retry_if_exception_type(RateLimitError), wait=wait_exponential(multiplier=1, min=60, max=60), stop=stop_after_attempt(10), before_sleep=before_sleep_log(logger, logging.INFO) ) def add_documents_with_retry(documents: List[Document], open_search: OpenSearchVectorSearch, context: RetryContext): context.increment_attempts() logger.info(f"API call attempt #{context.attempts}") try: open_search.add_documents(documents=documents) context.increment_successful_calls() logger.info(f"API call #{context.attempts} successful. Total successful calls: {context.successful_calls}") except Exception as e: # General exception is enough since RateLimitError is handled by tenacity logger.error(f"Unexpected error occurred: {e}") raise # Re-raise the exception to be handled by the retry decorator add_documents_with_retry(documents=texts, open_search=open_search, context=retry_context) ``` But I get the rate limit error: ``` WARNING:langchain.embeddings.openai:Warning: model not found. Using cl100k_base encoding. 67%|██████▋ | 2/3 [00:00<00:00, 5.00it/s]INFO:openai:error_code=429 error_message='Requests to the Embeddings_Create Operation under Azure OpenAI API version 2023-07-01-preview have exceeded call rate limit of your current OpenAI S0 pricing tier. Please retry after 60 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.' error_param=None error_type=None message='OpenAI API error received' stream_error=False WARNING:langchain.embeddings.openai:Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised RateLimitError: Requests to the Embeddings_Create Operation under Azure OpenAI API version 2023-07-01-preview have exceeded call rate limit of your current OpenAI S0 pricing tier. Please retry after 60 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.. INFO:openai:error_code=429 error_message='Requests to the Embeddings_Create Operation under Azure OpenAI API version 2023-07-01-preview have exceeded call rate limit of your current OpenAI S0 pricing tier. Please retry after 56 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit.' error_param=None error_type=None message='OpenAI API error received' stream_error=False ``` ### Expected behavior It should embed documents instead of indefinitely blocking the embedding because of "rate limit".. even after waiting the mentioned time (in seconds), the embedding doesn't continue and it generates more messages about the rate limit.. Not sure if this is due to the type of tier of the Azure OpenAI instance or something else
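Beyond the outer tenacity wrapper, the embeddings client itself can be throttled (a sketch; `chunk_size` and `max_retries` are existing `OpenAIEmbeddings` parameters, and the deployment name is a placeholder):

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(
    deployment="your-ada-002-deployment",  # placeholder Azure deployment name
    openai_api_type="azure",
    chunk_size=16,    # smaller batches -> fewer tokens per request on the S0 tier
    max_retries=10,   # built-in exponential backoff on RateLimitError
)
```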
Requests to the Embeddings_Create Operation under Azure OpenAI API version 2023-07-01-preview have exceeded call rate limit of your current OpenAI S0 pricing tier
https://api.github.com/repos/langchain-ai/langchain/issues/12986/comments
5
2023-11-07T09:09:31Z
2024-03-13T11:04:00Z
https://github.com/langchain-ai/langchain/issues/12986
1,980,878,758
12,986
[ "hwchase17", "langchain" ]
### Issue with current documentation: Hello. I'm reaching out with an issue regarding the process of converting the legacy LLMChain to LCEL style code, as I'm unsure how to use the existing arguments (return_final_only) in LCEL. * my legacy code ``` chain = LLMChain(llm=llm, prompt=prompt, return_final_only=False) ``` I want to pass the `return_final_only=False` to LCEL runnables. ### Idea or request for content: _No response_
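There is no direct `return_final_only` switch on a runnable; a rough analogue (only a sketch) is to keep the raw model output by omitting the output parser:

```python
# prompt | llm returns the full message object instead of only the final
# string, which is roughly what return_final_only=False exposed on LLMChain.
chain = prompt | llm
message = chain.invoke({"question": "..."})
# message.content holds the text; other fields on the message carry metadata.
```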
How can I use `return_final_only=False` with LCEL?
https://api.github.com/repos/langchain-ai/langchain/issues/12983/comments
3
2023-11-07T07:46:24Z
2024-02-13T16:07:37Z
https://github.com/langchain-ai/langchain/issues/12983
1,980,747,712
12,983
[ "hwchase17", "langchain" ]
### System Info I'm using a locally hosted LLM and want to apply Langchain's ConversationalRetrievalChain or RetrievalQA in an offline setting for chatbot developments, however there is an error as the current configuration do not support local hosted tokenizer. Appreciate if you can help advise on the modifications to required codes to use local general tokenizers (not just gpt2 tokenizer but any tokenizer in general) in an offline setting. ``` > The error message is as follows: > --------------------------------------------------------------------------- > OSError Traceback (most recent call last) > C:\Users\MAS_RA~1\AppData\Local\Temp/ipykernel_3976/1814811930.py in > 18 if query == '': > 19 continue > ---> 20 result = llama2_7B_qa( > 21 {"question": query, "chat_history": chat_history}) > 22 print(f"{blue}Answer: " + result["answer"]) > ~\Documents\Wheels\langchain\chains\base.py in __call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info) > 290 except BaseException as e: > 291 run_manager.on_chain_error(e) > --> 292 raise e > 293 run_manager.on_chain_end(outputs) > 294 final_outputs: Dict[str, Any] = self.prep_outputs( > ~\Documents\Wheels\langchain\chains\base.py in __call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info) > 284 try: > 285 outputs = ( > --> 286 self._call(inputs, run_manager=run_manager) > 287 if new_arg_supported > 288 else self._call(inputs) > ~\Documents\Wheels\langchain\chains\conversational_retrieval\base.py in _call(self, inputs, run_manager) > 132 ) > 133 if accepts_run_manager: > --> 134 docs = self._get_docs(new_question, inputs, run_manager=_run_manager) > 135 else: > 136 docs = self._get_docs(new_question, inputs) # type: ignore[call-arg] > ~\Documents\Wheels\langchain\chains\conversational_retrieval\base.py in _get_docs(self, question, inputs, run_manager) > 287 question, callbacks=run_manager.get_child() > 288 ) > --> 289 return self._reduce_tokens_below_limit(docs) > 290 > 291 async def _aget_docs( > ~\Documents\Wheels\langchain\chains\conversational_retrieval\base.py in _reduce_tokens_below_limit(self, docs) > 265 self.combine_docs_chain, StuffDocumentsChain > 266 ): > --> 267 tokens = [ > 268 self.combine_docs_chain.llm_chain.llm.get_num_tokens(doc.page_content) > 269 for doc in docs > ~\Documents\Wheels\langchain\chains\conversational_retrieval\base.py in (.0) > 266 ): > 267 tokens = [ > --> 268 self.combine_docs_chain.llm_chain.llm.get_num_tokens(doc.page_content) > 269 for doc in docs > 270 ] > ~\Documents\Wheels\langchain\schema\language_model.py in get_num_tokens(self, text) > 252 The integer number of tokens in the text. > 253 """ > --> 254 return len(self.get_token_ids(text)) > 255 > 256 def get_num_tokens_from_messages(self, messages: List[BaseMessage]) -> int: > ~\Documents\Wheels\langchain\schema\language_model.py in get_token_ids(self, text) > 239 in the text. 
> 240 """
> --> 241 return _get_token_ids_default_method(text)
> 242
> 243 def get_num_tokens(self, text: str) -> int:
> ~\Documents\Wheels\langchain\schema\language_model.py in _get_token_ids_default_method(text)
> 42 """Encode the text into token IDs."""
> 43 # get the cached tokenizer
> ---> 44 tokenizer = get_tokenizer()
> 45
> 46 # tokenize the text using the GPT-2 tokenizer
> ~\Documents\Wheels\langchain\schema\language_model.py in get_tokenizer()
> 36 )
> 37 # create a GPT-2 tokenizer instance
> ---> 38 return GPT2TokenizerFast.from_pretrained("gpt2")
> 39
> 40
> ~\Documents\Wheels\transformers\tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, cache_dir, force_download, local_files_only, token, revision, *init_inputs, **kwargs)
> 1836
> 1837 if all(full_file_name is None for full_file_name in resolved_vocab_files.values()):
> -> 1838 raise EnvironmentError(
> 1839 f"Can't load tokenizer for '{pretrained_model_name_or_path}'. If you were trying to load it from "
> 1840 "'https://huggingface.co/models', make sure you don't have a local directory with the same name. "
> OSError: Can't load tokenizer for 'gpt2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'gpt2' is the correct path to a directory containing all relevant files for a GPT2TokenizerFast tokenizer.
```

### Who can help?

@hwchase17 @agola11

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
# ConversationalRetrievalChain
from langchain.chains import ConversationalRetrievalChain
from langchain.llms import CTransformers

LocalHostedInteractiveBot = ConversationalRetrievalChain.from_llm(
    CTransformers(model="./models/llama-2-7b-chat.Q5_K_M.gguf", model_type="llama"),
    vectordb.as_retriever(search_kwargs={'k': 6}),
    return_source_documents=True,
    verbose=False,
    max_tokens_limit=1000
)

# Terminal interaction with locally hosted LLM
chat_history = []
while True:
    query = input(f" Prompt: ")
    if query == "exit":
        print('Bye bye')
        sys.exit()
    if query == '':
        continue
    result = LocalHostedInteractiveBot(
        {"question": query, "chat_history": chat_history})
    print(f" Question: " + query)
    print(f"Answer: " + result["answer"])
    chat_history.append((query, result["answer"]))
```

### Expected behavior

The prompt should obtain a chatbot response from the LLM via the retrieval augmented generation methods (ConversationalRetrievalChain or RetrievalQA) in langchain, but it fails to do so, as the current configuration cannot load a local tokenizer.
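One way to keep everything offline (a sketch: override token counting on the LLM so `_reduce_tokens_below_limit` never triggers the gpt2 download; the tokenizer directory is a placeholder for any tokenizer you have saved locally):

```python
from typing import List

from transformers import AutoTokenizer
from langchain.llms import CTransformers

class OfflineTokenizerLLM(CTransformers):
    """Count tokens with a tokenizer saved on disk, so get_num_tokens()
    never tries to fetch "gpt2" from the Hugging Face Hub."""

    def get_token_ids(self, text: str) -> List[int]:
        tokenizer = AutoTokenizer.from_pretrained(
            "./models/llama2-tokenizer"  # hypothetical local directory
        )
        return tokenizer.encode(text)
```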
ConversationalRetrievalChain using local LLM models and tokenizers
https://api.github.com/repos/langchain-ai/langchain/issues/12982/comments
8
2023-11-07T07:11:23Z
2024-03-05T18:59:12Z
https://github.com/langchain-ai/langchain/issues/12982
1,980,695,813
12,982
[ "hwchase17", "langchain" ]
### System Info

Langchain version 0.0.331rc1, python3.9, ubuntu 21.04

### Who can help?

@hwchase17 @agola

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Langchain LLMs are broken. If I import the LLM via `from llama_index.llm import OpenAI`, then I get the error below. For example, with ConversationSummaryBufferMemory:

```
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ConversationSummaryBufferMemory
llm
  Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)
```

For the code:

```
llm = OpenAI(model=model, temperature=0)
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model=model))

memory = ConversationSummaryBufferMemory(
    memory_key="memory",
    return_messages=True,
    llm=llm_predictor,
    max_token_limit=29000 if "gpt-4" in model else 7500,
)
```

If I import the LLM via `from langchain.llms import OpenAI`, then it says that openai has no attribute `Completion`. I assume these are because of all the new API changes.

### Expected behavior

It should work
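The validation error itself is because `ConversationSummaryBufferMemory` expects a LangChain `BaseLanguageModel`, and a llama_index `LLMPredictor` is not one. A sketch of the working shape, assuming a LangChain chat model is acceptable here:

```python
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryBufferMemory

# A LangChain model, not a llama_index LLMPredictor, satisfies the llm field.
lc_llm = ChatOpenAI(model="gpt-4", temperature=0)

memory = ConversationSummaryBufferMemory(
    llm=lc_llm,
    memory_key="memory",
    return_messages=True,
    max_token_limit=29000,
)
```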
Broken ConversationSummaryBufferMemory and more
https://api.github.com/repos/langchain-ai/langchain/issues/12980/comments
4
2023-11-07T06:03:23Z
2024-02-13T16:07:48Z
https://github.com/langchain-ai/langchain/issues/12980
1,980,609,049
12,980
[ "hwchase17", "langchain" ]
### System Info LC: `v0.0.331rc1` ### Who can help? The latest LC release candidate does not support the new embeddings API of the OpenAI SDK. ``` AttributeError: module 'openai' has no attribute 'Embedding' ``` ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Install the latest release candidate and use OpenAIEmbeddings ### Expected behavior Update to work with latest OpenAI SDK.
Update `OpenAIEmbeddings` to work with OpenAI SDK updates
https://api.github.com/repos/langchain-ai/langchain/issues/12970/comments
9
2023-11-07T00:31:37Z
2024-02-14T16:07:39Z
https://github.com/langchain-ai/langchain/issues/12970
1,980,316,909
12,970
[ "hwchase17", "langchain" ]
### System Info

Downloading langchain-0.0.331-py3-none-any.whl (2.0 MB)
Downloading openai-1.1.1-py3-none-any.whl (217 kB)

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Run the following code in colab:

```
!pip install langchain
!pip install openai

from langchain.llms import OpenAI

OpenAI().predict("hoge")
```

You'll get:

```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-5-0eec0f4f0523> in <cell line: 4>()
      2 from langchain.llms import OpenAI
      3
----> 4 OpenAI().predict("hoge")

3 frames
/usr/local/lib/python3.10/dist-packages/langchain/llms/openai.py in validate_environment(cls, values)
    264     import openai
    265
--> 266     values["client"] = openai.Completion
    267 except ImportError:
    268     raise ImportError(

AttributeError: module 'openai' has no attribute 'Completion'
```

### Expected behavior

This was working until yesterday. It's likely due to the openai dependency update.
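Until LangChain ships openai-v1 support, the usual interim fix (assuming nothing else in the environment needs the new SDK) is to pin the previous major version, in the same Colab style:

```
!pip install langchain "openai<1.0.0"
```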
AttributeError: module 'openai' has no attribute 'Completion'
https://api.github.com/repos/langchain-ai/langchain/issues/12967/comments
25
2023-11-07T00:07:47Z
2023-11-12T23:49:19Z
https://github.com/langchain-ai/langchain/issues/12967
1,980,294,255
12,967
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

Hi, I was working on Q&A over a large CSV dataset. I am using a local LLM model (llama2) along with create_csv_agent. Following is my code snippet:

```python
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.agents.agent_types import AgentType
from langchain_experimental.agents.agent_toolkits import create_csv_agent

agent = create_csv_agent(
    local_llm,
    "MLdata.csv",
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True
)
print(agent.run("Please provide me the 10 records with VAX_TYPE COVID19."))
```

First of all, the agent is only displaying 5 rows instead of 10. Secondly, when I asked it to "count the total number of rows in the dataset", it also generated a wrong output (it answered 5). How can I fix this issue?

### Suggestion:

_No response_
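When the agent's numbers look suspect, it helps to verify against the dataframe directly, outside the agent (a sketch; assumes the same `MLdata.csv` with a `VAX_TYPE` column):

```python
import pandas as pd

df = pd.read_csv("MLdata.csv")
print(len(df))                                   # true row count, to compare with the agent's answer
print(df[df["VAX_TYPE"] == "COVID19"].head(10))  # first 10 matching records
```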
Issue: QA with a large csv dataset
https://api.github.com/repos/langchain-ai/langchain/issues/12962/comments
3
2023-11-06T22:33:41Z
2024-02-12T16:07:14Z
https://github.com/langchain-ai/langchain/issues/12962
1,980,194,390
12,962
[ "hwchase17", "langchain" ]
### System Info

Langchain Version: 0.0.331 (also tested 0.0.326)
Python Version: 3.11.6

### Who can help?

@hwchase17

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [x] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

## The issue

It appears that `langchain.embeddings.OpenAIEmbeddings` does not support using parameters to define an API key, despite that being documented [here](https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.openai.OpenAIEmbeddings.html?highlight=azureopenai#langchain.embeddings.openai.OpenAIEmbeddings.openai_api_type).

We are using Azure OpenAI. We've successfully configured the API with the same `openai_api_key` parameter for the `AzureOpenAI` method and the `AzureChatOpenAI` methods, but this doesn't work for `OpenAIEmbeddings`. We can get embeddings to generate by specifying the `OPENAI_API_KEY` env var, as shown in your docs, but we would like to avoid this because our use case requires having both an Azure OpenAI instance key and an OpenAI key configured.

Calling the OpenAIEmbeddings method as shown below does not work:

```python
from langchain.embeddings import OpenAIEmbeddings

....

openAiEmbeddings = OpenAIEmbeddings(
    # model=LLmType.TEXT_EMBEDDING_ADA_002,
    deployment_name=deployment,
    openai_api_type="azure",
    openai_api_key=azure_openai_api_key,
    openai_api_base=azure_openai_api_base,
    chunk_size=1,
    openai_api_version=azure_openai_api_version,
)
return LangchainEmbedding(openAiEmbeddings)
```

The error I receive is here:

```
No API key found for OpenAI. Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization. API keys can be found or created at https://platform.openai.com/account/api-keys To disable the LLM entirely, set llm=None.
******
2023-11-06 21:14:53,499 - ERROR - Error processing documents:
******
Could not load OpenAI model. If you intended to use OpenAI, please check your OPENAI_API_KEY. Original error: No API key found for OpenAI. Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization. API keys can be found or created at https://platform.openai.com/account/api-keys To disable the LLM entirely, set llm=None.
******
```

## Troubleshooting

Another valid workaround is setting `openai.api_key` directly, as below:

```
import openai
...
openai.api_key = self.azure_openai_api_key

openAiEmbeddings = OpenAIEmbeddings(
    # model=LLmType.TEXT_EMBEDDING_ADA_002,
    deployment_name=deployment,
    openai_api_type="azure",
    openai_api_base=azure_openai_api_base,
    chunk_size=1,
    openai_api_version=azure_openai_api_version,
)
return LangchainEmbedding(openAiEmbeddings)
```

I added the following code to ensure an empty OPENAI_API_KEY environment variable wasn't causing the issue; it had no effect.

```python
if 'OPENAI_API_KEY' in os.environ:
    print("did find OPENAI_API_KEY in os.environ")
    print("os.environ[OPENAI_API_KEY]: ", os.environ["OPENAI_API_KEY"])
    del os.environ['OPENAI_API_KEY']
else:
    print("did not find OPENAI_API_KEY in os.environ")
```

### Expected behavior

I expected to be able to simply use the `openai_api_key` parameter of `langchain.embeddings.OpenAIEmbeddings` without needing to define the `OPENAI_API_KEY` env var or import openai and use `openai.api_key =`.
OpenAIEmbeddings doesn't allow specifying API key in parameters
https://api.github.com/repos/langchain-ai/langchain/issues/12959/comments
3
2023-11-06T22:01:28Z
2024-02-12T16:07:19Z
https://github.com/langchain-ai/langchain/issues/12959
1,980,143,971
12,959
[ "hwchase17", "langchain" ]
### System Info Langchain: v0.0.331 openai: v1.1.0 platform: Mac M2 python: 3.11.2 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Import OpenAI from llms 2. Instantiate it ``` >>> from langchain.llms import OpenAI >>> llm = OpenAI() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/lifehedging/.pyenv/versions/myenv/lib/python3.11/site-packages/langchain/load/serializable.py", line 97, in __init__ super().__init__(**kwargs) File "/Users/lifehedging/.pyenv/versions/myenv/lib/python3.11/site-packages/pydantic/v1/main.py", line 339, in __init__ values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/lifehedging/.pyenv/versions/myenv/lib/python3.11/site-packages/pydantic/v1/main.py", line 1102, in validate_model values = validator(cls_, values) ^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/lifehedging/.pyenv/versions/myenv/lib/python3.11/site-packages/langchain/llms/openai.py", line 266, in validate_environment values["client"] = openai.Completion ^^^^^^^^^^^^^^^^^ AttributeError: module 'openai' has no attribute 'Completion'. Did you mean: 'completions'? ``` ### Expected behavior It correctly instantiates provided an API key is present in the environment
Recent langchain version not matching openai v1.0.0+ release spec for client
https://api.github.com/repos/langchain-ai/langchain/issues/12958/comments
12
2023-11-06T21:53:47Z
2024-07-19T16:06:46Z
https://github.com/langchain-ai/langchain/issues/12958
1,980,132,466
12,958
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. Hi, It's me again. I'm trying to create my own ConversationSummaryMemory and make _DEFAULT_ENTITY_EXTRACTION_TEMPLATE in JSON format. Is there a way to do so? ### Suggestion: _No response_
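If the goal is LangChain's entity memory, one starting point is swapping in a custom prompt (a sketch; note the built-in memory parses the extractor's output as a comma-separated list, so a template that returns pure JSON would also need custom parsing downstream):

```python
from langchain.memory import ConversationEntityMemory
from langchain.prompts import PromptTemplate

# Hypothetical custom template; {history} and {input} are the same variables
# the default _DEFAULT_ENTITY_EXTRACTION_TEMPLATE uses.
my_entity_extraction_prompt = PromptTemplate(
    input_variables=["history", "input"],
    template=(
        "Return the proper nouns from the last line of the conversation "
        "as a comma-separated list, or NONE if there are none.\n\n"
        "History:\n{history}\n\nLast line:\n{input}\nOutput:"
    ),
)

memory = ConversationEntityMemory(
    llm=llm,  # your chat model
    entity_extraction_prompt=my_entity_extraction_prompt,
)
```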
Issue: rewrite the _DEFAULT_ENTITY_EXTRACTION_TEMPLATE in JSON format
https://api.github.com/repos/langchain-ai/langchain/issues/12957/comments
3
2023-11-06T21:53:23Z
2024-02-12T16:07:29Z
https://github.com/langchain-ai/langchain/issues/12957
1,980,131,843
12,957
[ "hwchase17", "langchain" ]
### System Info all latest versions as of 6/11/2023 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Run anything with OpenAI ChatCompletions and Embeddings ### Expected behavior File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 284, in validate_environment values["client"] = openai.Embedding ^^^^^^^^^^^^^^^^ AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'? File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/load/serializable.py", line 97, in __init__ super().__init__(**kwargs) File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__ raise validation_error pydantic.v1.error_wrappers.ValidationError: 1 validation error for ChatOpenAI __root__ `openai` has no `ChatCompletion` attribute, this is likely due to an old version of the openai package. Try upgrading it with `pip install --upgrade openai`. (type=value_error)
A ton of Bugs after OpenAI API update today
https://api.github.com/repos/langchain-ai/langchain/issues/12956/comments
9
2023-11-06T21:41:02Z
2023-12-11T12:03:14Z
https://github.com/langchain-ai/langchain/issues/12956
1,980,109,527
12,956
[ "hwchase17", "langchain" ]
### System Info Langchain 0.0.331 latest Openai v0.28.1 Python v3.11.6 Deeplake 3.8.4 latest ### Who can help? @eyurtsev @hwchase17 _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Once data has been loaded to Deeplake db ``` from langchain.embeddings import OpenAIEmbeddings from langchain.schema.output_parser import StrOutputParser from langchain.schema.runnable import RunnablePassthrough from langchain.vectorstores import DeepLake from langchain.chat_models import ChatOpenAI from langchain.prompts import ChatPromptTemplate model = ChatOpenAI() db = DeepLake(dataset_path="./my_deeplake/", read_only=True, embedding=OpenAIEmbeddings()) retriever = db.as_retriever() retriever.search_kwargs['distance_metric'] = 'cos' retriever.search_kwargs['fetch_k'] = 10 retriever.search_kwargs['maximal_marginal_relevance'] = True retriever.search_kwargs['k'] = 5 template = """Answer the question based only on the following context: {context} Question: {question} """ prompt = ChatPromptTemplate.from_template(template) retrieval_chain = ( {"context": retriever, "question": RunnablePassthrough()} | prompt | model | StrOutputParser() ) retrieval_chain.invoke("where did harrison work?") ``` Error RuntimeError: std::get: wrong index for variant ### Expected behavior Expected behavior is it should retrieve documents from vectordb
Using RunnablePassthrough with Deeplake gives RuntimeError: std::get: wrong index for variant
https://api.github.com/repos/langchain-ai/langchain/issues/12955/comments
3
2023-11-06T21:38:34Z
2024-02-12T16:07:34Z
https://github.com/langchain-ai/langchain/issues/12955
1,980,103,620
12,955
[ "hwchase17", "langchain" ]
### System Info LangChain: 0.0.331 OpenAI: 1.1.0 Python: 3.11.2 ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Full Message: > ...langchain/embeddings/openai.py", line 284, in validate_environment > values["client"] = openai.Embedding > ^^^^^^^^^^^^^^^^ > AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'? Code line caught this error: `index = VectorstoreIndexCreator(vectorstore_kwargs={"persist_directory":"persist"}).from_loaders([loader])` ### Expected behavior Creates Vector Store Index
Module 'openai' has no attribute 'Embedding'
https://api.github.com/repos/langchain-ai/langchain/issues/12954/comments
24
2023-11-06T21:31:20Z
2024-01-18T18:53:59Z
https://github.com/langchain-ai/langchain/issues/12954
1,980,093,720
12,954
[ "hwchase17", "langchain" ]
### "response_format" parameter on the new GPT-4-turbo model I can sucessfully call the new GPT-4-turbo model by using: `const model = new ChatOpenAI({modelName:"gpt-4-1106-preview"})` But I can't add the "response_format" parameter to set its response explicitly to be a json, as stated in: https://platform.openai.com/docs/guides/text-generation/json-mode ### Suggestion: Is there a way to pass the parameter to the model or this should be added to the ChatOpenAI code?
Issue: "response_format" parameter on the new GPT-4-turbo model
https://api.github.com/repos/langchain-ai/langchain/issues/12953/comments
11
2023-11-06T21:16:44Z
2024-05-15T18:04:28Z
https://github.com/langchain-ai/langchain/issues/12953
1,980,073,140
12,953
[ "hwchase17", "langchain" ]
### System Info Langchain v0.0.323 Openai v1.0.1 (latest) Python v3.11.6 ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` from langchain.chat_models import ChatOpenAI llm = ChatOpenAI( model_name="gpt-4", request_timeout=120, ) ``` openai has no ChatCompletion attribute, this is likely due to an old version of the openai package. Try upgrading it with `pip install --upgrade openai`. (type=value_error) ### Expected behavior There is an issue with ChatOpenAI that I believe may be related to the newest openai python package update.
openai has no ChatCompletion attribute, this is likely due to an old version of the openai package.
https://api.github.com/repos/langchain-ai/langchain/issues/12949/comments
34
2023-11-06T20:49:17Z
2024-02-19T16:07:56Z
https://github.com/langchain-ai/langchain/issues/12949
1,980,034,271
12,949
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

Whenever my agent does not require a tool (the tool has already been used, or the query does not require a tool), it keeps generating output.

```python
datetimetool = Tool(
    name="datetimetool",
    func=lambda x: datetime.now().strftime('%A %d %B %Y, %I:%M%p'),
    description="Retrieve and return the current date and/or time. \
    Input should be an empty string.",
)

tools = [datetimetool]

PREFIX = '''
You are a truthful, helpful agent.
'''

FORMAT_INSTRUCTIONS = """Please use the following format only when you need to use a tool:

'''
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
'''

When you have gathered all the information regarding the user's query,\
use the following format to answer the query and do not repeat yourself.

'''
Thought: Do I need to use a tool? No
AI: [print answer and stop output]
'''
"""

SUFFIX = '''
Begin!

Instructions: {input}
{agent_scratchpad}
'''

agent = initialize_agent(
    tools=tools,
    llm=llm,
    agent="zero-shot-react-description",
    verbose=True,
    return_intermediate_steps=True,
    handle_parsing_errors=True,
    agent_kwargs={
        'prefix': PREFIX,
        'format_instructions': FORMAT_INSTRUCTIONS,
        'suffix': SUFFIX,
        'input_variables': ["chat_history", "input", "agent_scratchpad", "tool_names"],
    }
)

query = "How are you?"  # Or: "What's the time?"
res = agent({'input': query})
print(res['output'])
```

### Suggestion:

_No response_
Issue: Agent runs on loop, "Observation: Invalid Format: Missing 'Action:' after 'Thought:"
https://api.github.com/repos/langchain-ai/langchain/issues/12944/comments
22
2023-11-06T18:00:50Z
2024-06-21T07:34:37Z
https://github.com/langchain-ai/langchain/issues/12944
1,979,742,044
12,944
[ "hwchase17", "langchain" ]
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error is caused by the recent update of the OpenAI package to version 1.0.0. **Use openai==0.28.1 to fix this error.** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`). ### Expected behavior There should be no error when calling this function.
OpenAIEmbeddings() does not work because of these bugs
https://api.github.com/repos/langchain-ai/langchain/issues/12943/comments
18
2023-11-06T17:56:29Z
2023-11-08T21:38:33Z
https://github.com/langchain-ai/langchain/issues/12943
1,979,733,329
12,943
[ "hwchase17", "langchain" ]
### System Info

Langchain v0.0.287
Windows 10
Python 3.9

### Who can help?

@eyurtsev

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
template = "Here is some context:\n" \
           "{my_documents}\n" \
           "With the help of the context above, please answer the following query:\n" \
           f"{query}"

chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template(template))
combined_chain = StuffDocumentsChain(llm_chain=chain, document_variable_name="my_documents")
result = combined_chain.run(my_documents=documents)
```

### Expected behavior

The code above triggers the following error:

> ValueError: Missing some input keys: {'input_documents'}

However, the document variable has been explicitly named "my_documents", not "input_documents". The reason is that the base class BaseCombineDocumentsChain defines the "input_documents" variable by default, and it is not overridden by the "document_variable_name" parameter in child classes. For consistency, "document_variable_name" should refer both to the variable name specified in the prompt and to the named parameter in the run method of the chain.
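For reference, the call that works today: the run-time input key is fixed as `input_documents` on the combine-docs chain, while `document_variable_name` only names the prompt variable the documents are stuffed into.

```python
result = combined_chain.run(input_documents=documents)
```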
BaseCombineDocumentsChain's "input_documents" parameter not overridden by child classes
https://api.github.com/repos/langchain-ai/langchain/issues/12942/comments
5
2023-11-06T17:23:19Z
2024-05-13T16:08:32Z
https://github.com/langchain-ai/langchain/issues/12942
1,979,664,993
12,942
[ "hwchase17", "langchain" ]
### Feature request

When trying to concurrently fetch many collections from a single postgres server using PGVector, this will currently produce this error:

```
(psycopg2.errors.InternalError_) tuple concurrently updated
```

This is the expected behaviour for the code as it is and is produced by this line:

```
statement = sqlalchemy.text("CREATE EXTENSION IF NOT EXISTS vector")
```

There is no support for concurrent `CREATE EXTENSION` operations in Postgres, as outlined in [this post](https://www.postgresql.org/message-id/3473.1393693757%40sss.pgh.pa.us):

> It'd be necessary to add some kind of locking scheme if you want to avoid "tuple concurrently updated" errors. This is not really any different from the situation where two transactions both want to update the same row in a user table: unless the application takes extra steps to serialize the updates, you're going to get "tuple concurrently updated" errors.
>
> We do have such locking for DDL on tables/indexes, but the theory in the past has been that it's not worth the trouble for objects represented by single catalog rows, such as functions or roles. You can't corrupt the database with concurrent updates on such a row, you'll just get a "tuple concurrently updated" error from all but the first-to-arrive update.

As the post suggests, if we want to concurrently run these statements, a locking scheme is needed. This can easily be achieved by using Postgres' `pg_advisory_xact_lock`, as described [here](https://www.postgresql.org/docs/16/explicit-locking.html#ADVISORY-LOCKS) & [here](https://www.postgresql.org/docs/16/functions-admin.html):

```
BEGIN;
SELECT pg_advisory_xact_lock(1573678846307946496);
CREATE EXTENSION IF NOT EXISTS vector;
COMMIT;
```

This will acquire an exclusive transaction-level advisory lock, waiting if necessary, which is automatically released at the end of the transaction. My proposal is therefore to replace the current bare `CREATE EXTENSION IF NOT EXISTS vector` with the above to allow for concurrent execution without errors or having to retry.

### Motivation

We have a scenario where it would be beneficial to concurrently retrieve a large number of collections from a single server, and this would appear to be the easiest way to achieve this.

### Your contribution

I will submit a corresponding PR.
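A sketch of the proposed change in SQLAlchemy terms (the lock key is an arbitrary constant; any concurrent callers using the same key serialize on it):

```python
import sqlalchemy

def create_vector_extension(conn: sqlalchemy.engine.Connection) -> None:
    # The advisory lock serializes concurrent CREATE EXTENSION attempts and is
    # released automatically when the enclosing transaction commits.
    statement = sqlalchemy.text(
        "BEGIN;"
        "SELECT pg_advisory_xact_lock(1573678846307946496);"
        "CREATE EXTENSION IF NOT EXISTS vector;"
        "COMMIT;"
    )
    conn.execute(statement)
```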
PGVector: support for concurrency
https://api.github.com/repos/langchain-ai/langchain/issues/12933/comments
1
2023-11-06T14:19:17Z
2023-11-06T19:03:36Z
https://github.com/langchain-ai/langchain/issues/12933
1,979,277,387
12,933
[ "hwchase17", "langchain" ]
Hi, could you please share a working example for text classification using LangChain with LlamaCPP or the llama-cpp-python module? I tried the following with Llama2 7B Q5_K_M:

````
prompt_template = """A message can be classified as one of the following categories: book, cancel, change.

Examples:
- Book: "I would like to book a room for two nights."
- Cancel: "Please cancel my reservation and refund the payment."
- Change: "I need to change the dates of my booking to next week."

Based on these categories, classify the following message:
```{text}```
"""
````

The LlamaCpp is as follows:

```
llm = LlamaCpp(
    n_ctx=256,
    model_path=model,
    temperature=0,
    callback_manager=callback_manager,
    verbose=True,
)
```

result = run_query("I would like to cancel my booking and ask for a refund.", prompt_template, llm)

output:

> Please help me to solve this issue.
> Please assist in resolving the cancellation of the reservation as soon as possible.

The above sample is not classifying the given input.
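One variant that tends to constrain small llama-2 chat models better (a sketch, assuming the Q5_K_M model follows the llama-2 chat `[INST]` convention; the key change is demanding a single category word):

```python
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

template = """[INST] Classify the message into exactly one category: book, cancel, or change.
Answer with only the single category word and nothing else.

Message: "{text}" [/INST]"""

# llm is the LlamaCpp instance configured above
chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template(template))
print(chain.run(text="I would like to cancel my booking and ask for a refund."))
```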
Example prompt for text classification
https://api.github.com/repos/langchain-ai/langchain/issues/12931/comments
5
2023-11-06T13:44:57Z
2023-11-09T04:59:23Z
https://github.com/langchain-ai/langchain/issues/12931
1,979,196,378
12,931
[ "hwchase17", "langchain" ]
### System Info

### Issue you'd like to raise.

Bedrock streaming support was added in the [PR](https://github.com/langchain-ai/langchain/pull/10393/files#diff-9874347f7fa335df661ff4089b0922b3214e08a92e9879610424522f806358f7R62). But there is an issue if streaming is enabled with a stop sequence. See the code below: a trailing comma after `self.provider_stop_sequence_key_name_map.get(provider)` turns the dict key into a tuple, which causes:

```
"TypeError('keys must be str, int, float, bool or None, not tuple')"
line 35, in _prepare_input_and_invoke_stream body = json.dumps(input_body)
{'temperature': 0, 'max_tokens_to_sample': 4048, ('stop_sequences',): ['<observation>'], 'prompt': '\n\nHuman:
```

```
if stop:
    if provider not in self.provider_stop_sequence_key_name_map:
        raise ValueError(
            f"Stop sequence key name for {provider} is not supported."
        )

    # stop sequence from _generate() overrides
    # stop sequences in the class attribute
    _model_kwargs[
        self.provider_stop_sequence_key_name_map.get(provider),
    ] = stop
```

### Suggested fix

Remove the trailing comma after `self.provider_stop_sequence_key_name_map.get(provider)` in the `_prepare_input_and_invoke_stream` method. For example:

```
_model_kwargs[
    self.provider_stop_sequence_key_name_map.get(provider)
] = stop
```

This will resolve the TypeError and allow the stop sequence to be properly passed for streaming.

### Who can help?

cc @3coins @baskaryan @mukitmomi

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Enable Bedrock streaming with a stop sequence.

```
"TypeError('keys must be str, int, float, bool or None, not tuple')"
line 35, in _prepare_input_and_invoke_stream body = json.dumps(input_body)
{'temperature': 0, 'max_tokens_to_sample': 4048, ('stop_sequences',): ['<observation>'], 'prompt': '\n\nHuman:
```

### Expected behavior

{'temperature': 0, 'max_tokens_to_sample': 4048, 'stop_sequences': ['<observation>'], 'prompt': '\n\nHuman: — i.e., stop_sequences as a str key.
Amazon Bedrock streaming not working with stop sequence
https://api.github.com/repos/langchain-ai/langchain/issues/12926/comments
1
2023-11-06T11:03:20Z
2023-11-06T11:24:18Z
https://github.com/langchain-ai/langchain/issues/12926
1,978,862,164
12,926
[ "hwchase17", "langchain" ]
I created a pandas dataframe agent. When I query over multiple CSV files, it sometimes gives the correct answer and sometimes hallucinates, returning the Python code to run instead of the actual result. Here's the response I am getting:

```
To determine the region with the highest total revenue, we can group the data by region and calculate the sum of the total revenue for each region. Then, we can find the region with the highest sum.

Here is the corresponding Python code to find the region with the highest total revenue, as well as the corresponding country and item type:

# Group the data by region and calculate the sum of total revenue for each region
region_revenue = df.groupby('Region')['Total Revenue'].sum()

# Find the region with the highest total revenue
highest_revenue_region = region_revenue.idxmax()

# Find the corresponding country and item type for the highest revenue region
corresponding_country = df[df['Region'] == highest_revenue_region]['Country'].iloc[0]
corresponding_item_type = df[df['Region'] == highest_revenue_region]['Item Type'].iloc[0]

highest_revenue_region, corresponding_country, corresponding_item_type

The region with the highest total revenue is Middle East and North Africa. The corresponding country is Azerbaijan and the corresponding item type is Snacks.
```

Here is how I initialize the agent:

```
create_pandas_dataframe_agent(
    ChatOpenAI(temperature=0.0, verbose=True, model='gpt-3.5-turbo'),
    self.df,
    verbose=True,
    max_iterations=20,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    # handle_parsing_errors=True,
    agent_executor_kwargs={
        "handle_parsing_errors": True
    }
)
```

Do you think the chain is stopping partway through, before completing the response?
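One way to check that hypothesis is to return the intermediate steps and see whether the run actually executed `python_repl_ast` or answered without ever running the tool — a sketch (assuming a dataframe `df` and the same setup as above):

```python
from langchain.agents import create_pandas_dataframe_agent, AgentType
from langchain.chat_models import ChatOpenAI

agent = create_pandas_dataframe_agent(
    ChatOpenAI(temperature=0.0, model="gpt-3.5-turbo"),
    df,
    verbose=True,
    max_iterations=20,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    agent_executor_kwargs={
        "handle_parsing_errors": True,
        "return_intermediate_steps": True,  # expose each (action, observation) pair
    },
)

result = agent({"input": "Which region has the highest total revenue?"})
for action, observation in result["intermediate_steps"]:
    print(action.tool, "->", observation)
# An empty list here would mean the model answered without running the tool,
# which matches the hallucinated "here is the code" responses.
```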
Pandas data frame agent hallucinates when asking a quesry: Sending python code insted of actual result
https://api.github.com/repos/langchain-ai/langchain/issues/12925/comments
7
2023-11-06T10:38:37Z
2024-02-13T16:08:07Z
https://github.com/langchain-ai/langchain/issues/12925
1,978,817,037
12,925
[ "hwchase17", "langchain" ]
I wanted to merge the Confluence and GitHubIssues loaders, but I am facing a problem: the Confluence loader requires additional arguments at load time, so I am unable to use `MergedDataLoader` here.

Code:

```
loader = ConfluenceLoader(url="URL", username="USER", api_key="API")
loader1 = GitHubIssuesLoader(repo="REPO", access_token="TOKEN")
loader_all = MergedDataLoader(loaders=[loader1, loader])
documents = loader_all.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
```

Error:

```
ValueError: Must specify at least one among `space_key`, `page_ids`, `label`, `cql` parameters.
```

### Suggestion:

_No response_
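A workaround sketch: since `MergedDataLoader` calls `load()` with no arguments, the Confluence loader can be wrapped in a thin adapter that bakes the required argument in. `"SPACE"` below is a placeholder for the real space key, and this assumes the content lives in a single space:

```python
from langchain.document_loaders.base import BaseLoader

class ConfluenceSpaceLoader(BaseLoader):
    """Adapter so MergedDataLoader can call load() without arguments."""

    def __init__(self, confluence_loader, space_key="SPACE"):  # "SPACE" is a placeholder
        self.confluence_loader = confluence_loader
        self.space_key = space_key

    def load(self):
        # Forward the argument that ConfluenceLoader.load() insists on.
        return self.confluence_loader.load(space_key=self.space_key)

loader_all = MergedDataLoader(loaders=[loader1, ConfluenceSpaceLoader(loader, space_key="SPACE")])
documents = loader_all.load()
```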
Issue: Unable to Merge confluence and githubissues loader
https://api.github.com/repos/langchain-ai/langchain/issues/12923/comments
3
2023-11-06T08:54:54Z
2024-02-12T16:07:44Z
https://github.com/langchain-ai/langchain/issues/12923
1,978,615,854
12,923
[ "hwchase17", "langchain" ]
### System Info

Python == 3.11.3
Langchain == 0.0.330

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```bash
Python 3.11.3 (tags/v3.11.3:f3909b8, Apr  4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import langchain
>>> langchain.__version__
'0.0.330'
>>> from langchain.agents import load_tools, initialize_agent, AgentType
>>> from langchain.chains.conversation.memory import ConversationBufferMemory
>>> from langchain.chat_models import ChatOpenAI
>>> agent = initialize_agent(
...     agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
...     llm=ChatOpenAI(model='gpt-3.5-turbo', temperature=0),
...     memory=ConversationBufferMemory(),
...     verbose=True,
...     handle_parsing_errors=True,
...     tools=load_tools(tool_names=["requests_get"]),
... )
>>> agent.run('Search the latest version of python')

> Entering new AgentExecutor chain...
Thought: I can use the requests_get tool to search for the latest version of Python on the official Python website.

Action:
```{"action": "requests_get", "action_input": {"url": "https://www.python.org/downloads/"}}```

> Finished chain.
'Thought: I can use the requests_get tool to search for the latest version of Python on the official Python website.\n\nAction:\n```{"action": "requests_get", "action_input": {"url": "https://www.python.org/downloads/"}}```'
```

### Expected behavior

I expect the agent to execute the action, `requests_get` in the case above.

NOTE: In the above case, InvalidRequestError would be raised after executing `requests_get` because the number of tokens is very large.

Related: #12158
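For illustration of why the action may never run (this is a sketch of the suspected failure mode, not the library's exact regex): fence-based parsers typically require a newline after the opening triple backticks, while the model above emitted the JSON blob inline on one line:

```python
import re

# Illustrative pattern in the spirit of StructuredChatOutputParser
# (not copied from the library source).
pattern = re.compile(r"```(?:json)?\n(.*?)```", re.DOTALL)

llm_output = (
    "Action:\n"
    '```{"action": "requests_get", "action_input": {"url": "https://www.python.org/downloads/"}}```'
)

print(pattern.search(llm_output))
# None -> no newline after the opening fence, so the action blob is never extracted
# and the raw text is returned as the final answer instead of being executed.
```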
Issue: StructuredChatOutputParser regex may prevent agent's actions
https://api.github.com/repos/langchain-ai/langchain/issues/12922/comments
3
2023-11-06T08:20:11Z
2023-11-06T23:43:40Z
https://github.com/langchain-ai/langchain/issues/12922
1,978,555,611
12,922
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

I've created a tool that parses text content from PDF files. After adding it to the LLM, the tool gets called correctly when I input the text, but the problem is that the tool gets called many times (I just waited there for minutes and then stopped it). Could you help?

```
class ChatLlmWithTools():
    def __init__(self) -> None:
        from langchain.chat_models import ChatOpenAI
        from langchain.agents.format_scratchpad import format_to_openai_functions
        from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
        from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
        from langchain.chains.conversation.memory import ConversationBufferWindowMemory
        from langchain.prompts import (
            ChatPromptTemplate,
            MessagesPlaceholder,
            SystemMessagePromptTemplate,
            HumanMessagePromptTemplate,
        )
        import pdf2pdf.tools.pdf_tool
        from langchain.agents import load_tools

        llm = ChatOpenAI(openai_api_key="xxxxxxxx", temperature=0, model_name='gpt-3.5-turbo-16k-0613')
        tools = [Tool(name="parse_text_and_images_from_pdf_files_with_pdfminer",
                      func=pdf2pdf.pdf_utility.parse_text_and_images_from_pdf_files_with_pdfminer,
                      description="Parse text and images from pdf files under target folder")]
        tools[0].callbacks = [AgentAndToolExecutionWatcherHandler()]
        llm_with_tools = llm.bind(
            functions=[format_tool_to_openai_function(t) for t in tools]
        )
        prompt = ChatPromptTemplate(
            messages=[
                SystemMessagePromptTemplate.from_template(
                    "You are a nice chatbot having a conversation with a human, but bad at parse text and images from pdf files."
                ),
                # The `variable_name` here is what must align with memory
                MessagesPlaceholder(variable_name="chat_history"),
                MessagesPlaceholder(variable_name="agent_scratchpad"),
                HumanMessagePromptTemplate.from_template("{input}")
            ]
        )
        agent = ({
            "input": lambda x: x["input"],
            "agent_scratchpad": lambda x: format_to_openai_functions(x['intermediate_steps']),
            "chat_history": lambda x: x["chat_history"],
        } | prompt | llm_with_tools | OpenAIFunctionsAgentOutputParser())

        from langchain.agents import AgentExecutor

        # initialize conversational memory
        memory = ConversationBufferWindowMemory(
            memory_key='chat_history',
            k=3,
            return_messages=True
        )
        self.agent_executor = AgentExecutor(agent=agent,
                                            tools=tools,
                                            verbose=True,
                                            memory=memory,
                                            #max_iterations=2,
                                            callbacks=[AgentAndToolExecutionWatcherHandler()])

    def process(self, param_str_from: str):
        output = self.agent_executor.invoke({"input": param_str_from})
        # print(output)
        return output['output']
```

This is the unit test for the agent above; you can see the last `process` call triggers the tool, but it loops indefinitely:

```
llmWithTool = ChatLlmWithTools()
llm_msg0 = llmWithTool.process('can you do gerneral chatting, pls remember my name is john, my age is 30?')
llm_msg1 = llmWithTool.process('can you say my name, and tell me what is 1+1?')
llm_msg2 = llmWithTool.process('can you show my age?')
# below line will trigger the tool function time and time again
llm_msg9 = llmWithTool.process('can you parse text and extract images from pdf files under target folder: {}'.format(
    os.path.join(mygptsite.settings.BASE_DIR, 'uploaded_pdf_folder', '20231106103207672___example')))
```

Part of the log in the debug console looks like this:

> ...
> ...
>
> > Finished chain.
> Sure, John! Your age is 30.
>
>
> > Entering new AgentExecutor chain...
> on_agent_action, tool: parse_text_and_images_from_pdf_files_with_pdfminer is selected, tool_input: C:\Users\xxx\source\repos\xxxxx\uploaded_pdf_folder\20231106103207672___example > > Invoking: `parse_text_and_images_from_pdf_files_with_pdfminer` with `C:\Users\xxxx\source\repos\xxxxx\uploaded_pdf_folder\20231106103207672___example` > > > on_tool_start, input_str: C:\Users\xxxxx\source\repos\xxxx\uploaded_pdf_folder\20231106103207672___example > on_tool_end, output: I am th test msg parsed from pdf files with pdfminer > I am th test msg parsed from pdf files with pdfmineron_agent_action, tool: parse_text_and_images_from_pdf_files_with_pdfminer is selected, tool_input: C:\Users\xxxx\source\repos\xxxx\uploaded_pdf_folder\20231106103207672___example > > Invoking: `parse_text_and_images_from_pdf_files_with_pdfminer` with `C:\Users\xxxxx\source\repos\xxxx\uploaded_pdf_folder\20231106103207672___example` > > > on_tool_start, input_str: C:\Users\xxxx\source\repos\xxxx\uploaded_pdf_folder\20231106103207672___example > on_tool_end, output: I am th test msg parsed from pdf files with pdfminer > I am th test msg parsed from pdf files with pdfmineron_agent_action, tool: parse_text_and_images_from_pdf_files_with_pdfminer is selected, tool_input: C:\Users\xxxxx\source\repos\xxxx\uploaded_pdf_folder\20231106103207672___example > > Invoking: `parse_text_and_images_from_pdf_files_with_pdfminer` with `C:\Users\xxxx\source\repos\xxxx\uploaded_pdf_folder\20231106103207672___example` > > > on_tool_start, input_str: C:\Users\xxxxx\source\repos\xxxx\uploaded_pdf_folder\20231106103207672___example > on_tool_end, output: I am th test msg parsed from pdf files with pdfminer ### Suggestion: _No response_
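One thing worth checking (an assumption from reading the prompt, not a confirmed diagnosis): the `agent_scratchpad` placeholder sits *before* the human message, so each function result may not be read as a response to the current question, and the model keeps re-issuing the same call. The conventional ordering puts the scratchpad last:

```python
prompt = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template(
            "You are a nice chatbot having a conversation with a human."
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),  # scratchpad after the user turn
    ]
)
```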
Issue: tool get called many times
https://api.github.com/repos/langchain-ai/langchain/issues/12919/comments
3
2023-11-06T05:43:34Z
2024-02-12T16:07:49Z
https://github.com/langchain-ai/langchain/issues/12919
1,978,337,091
12,919
[ "hwchase17", "langchain" ]
### System Info

python 3.10.0
langchain 0.3.300

### Who can help?

@hwchase17 @agola11

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Use ChatTongyi and AsyncIteratorCallbackHandler together.

### Expected behavior

/Users/joeylin/Projects/questionAnswer/venv/lib/python3.10/site-packages/langchain/chat_models/tongyi.py:366: RuntimeWarning: coroutine 'AsyncCallbackManagerForLLMRun.on_llm_new_token' was never awaited
  run_manager.on_llm_new_token(chunk.text, chunk=chunk)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
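The warning says the async callback coroutine is called without being awaited inside the streaming loop. A sketch of the pattern the async code path needs (paraphrased, not the actual `tongyi.py` source):

```python
# Inside an *async* generation/streaming method, the async run manager's
# callback returns a coroutine and must be awaited:
if run_manager:
    await run_manager.on_llm_new_token(chunk.text, chunk=chunk)

# Calling it without `await` (as the warning shows) schedules nothing,
# so AsyncIteratorCallbackHandler never receives any tokens.
```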
ChatTongyi does not support AsyncIteratorCallbackHandler
https://api.github.com/repos/langchain-ai/langchain/issues/12917/comments
3
2023-11-06T03:48:40Z
2024-02-12T16:07:54Z
https://github.com/langchain-ai/langchain/issues/12917
1,978,229,664
12,917
[ "hwchase17", "langchain" ]
### System Info

langchain==0.0.330
Python 3.10.8

### Who can help?

@agola11

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

I was using the latest release of pydantic (2.4.2) and got errors when installing langchain[docarray]. On top of that:

ImportError: cannot import name 'ROOT_KEY' from 'pydantic.main' (...venv\lib\site-packages\pydantic\main.py)

I retried with pydantic 1.10.13 and it works fine now. Not sure whether it is worth adding to the error message, or pinning langchain's pydantic dependency to <2.0. I didn't try all the versions, but I know 2.4.2 breaks.

### Expected behavior

I wasn't able to build my index. Here is where the code broke:

```
index = VectorstoreIndexCreator(
    vectorstore_cls=DocArrayInMemorySearch
).from_loaders([loader])
```
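A small guard that makes the incompatibility explicit before the cryptic ImportError appears (a sketch; the pin itself is done outside Python, e.g. with `pip install "pydantic<2"`):

```python
import pydantic

# DocArray-backed vector stores in this LangChain release expect pydantic v1,
# so fail fast with a readable message instead of the ROOT_KEY ImportError.
if not pydantic.VERSION.startswith("1."):
    raise RuntimeError(
        f"pydantic {pydantic.VERSION} detected; downgrade with: pip install 'pydantic<2'"
    )
```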
DocArrayInMemorySearch | Pydantic 2+ Breaking Changes
https://api.github.com/repos/langchain-ai/langchain/issues/12916/comments
3
2023-11-06T02:46:56Z
2024-04-25T16:21:57Z
https://github.com/langchain-ai/langchain/issues/12916
1,978,179,255
12,916
[ "hwchase17", "langchain" ]
### System Info

Any system.

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [x] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async

### Reproduction

```python
from typing import Any, Dict, List, Optional
from langchain.callbacks.manager import Callbacks
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores.chroma import Chroma
from langchain.schema.vectorstore import VectorStoreRetriever
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chat_models import AzureChatOpenAI
from langchain.docstore.document import Document
import os

class MyRetriever(VectorStoreRetriever):
    def get_relevant_documents(self, query: str, *, callbacks: Callbacks = None,
                               tags: List[str] | None = None,
                               metadata: Dict[str, Any] | None = None,
                               run_name: str | None = None,
                               **kwargs: Any) -> List[Document]:
        print('Called get_relevant_documents in MyRetriever')
        res = super(MyRetriever, self).get_relevant_documents(
            query, callbacks=callbacks, tags=tags, metadata=metadata,
            run_name=run_name, **kwargs)
        print('Finished get_relevant_documents in MyRetriever')
        tmp_res: dict[str, Document] = {}
        for item in res:
            doc_name = item.page_content.split("\n", maxsplit=1)[0]
            if doc_name not in tmp_res:
                tmp_res[doc_name] = item
            else:
                orig_doc = tmp_res[doc_name]
                doc_content = "\n".join(item.page_content.split("\n")[1:])
                new_doc = Document(page_content=orig_doc.page_content + doc_content,
                                   metadata=orig_doc.metadata)
                tmp_res[doc_name] = new_doc
        return list(tmp_res.values())

embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
vectordb = Chroma.from_documents([Document(page_content="blah")], embeddings, persist_directory="./tmp")
ret = MyRetriever(vectorstore=vectordb, tags=vectordb._get_retriever_tags())
ret.get_relevant_documents("blah")  # infinite recursion!
```

### Expected behavior

Safely call the parent class method, then customize its behaviour.
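For others hitting this: the documented extension point is the private hook, which sidesteps the recursion. A sketch of the same merge logic moved into `_get_relevant_documents` (assuming the class above):

```python
from langchain.callbacks.manager import CallbackManagerForRetrieverRun

class MyRetriever(VectorStoreRetriever):
    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        # Call the vector-store lookup via the parent's private hook;
        # this does not re-enter the public get_relevant_documents wrapper.
        res = super()._get_relevant_documents(query, run_manager=run_manager)
        # ... merge chunks that share the same first line, as in the code above ...
        return res
```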
subclassing VectorStoreRetriever causes unjustifiable infinite recursion
https://api.github.com/repos/langchain-ai/langchain/issues/12913/comments
2
2023-11-06T01:17:11Z
2023-11-06T01:48:44Z
https://github.com/langchain-ai/langchain/issues/12913
1,978,090,823
12,913
[ "hwchase17", "langchain" ]
### System Info

LangChain version: 0.0.330
Windows 11
Python 3.11.3
SQLAlchemy version: 2.0.23

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async

### Reproduction

Steps to reproduce the behavior:

1. Follow the steps mentioned in https://python.langchain.com/docs/integrations/toolkits/sql_database
2. Replace the connection string with a PostgreSQL connection string
3. Run the following code, making sure to update the connection string and the database/tables you are querying.

```
import langchain
from langchain.llms import CTransformers
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.agents import AgentExecutor
from langchain.agents.agent_types import AgentType

config = {'max_new_tokens': 256, 'repetition_penalty': 1.1, 'temperature': 0, 'context_length': 4096}
llm = CTransformers(model="TheBloke/CodeLlama-7B-Instruct-GGUF", model_file="codellama-7b-instruct.Q4_K_M.gguf", config=config, verbose=True)

CONNECTION_STRING = 'postgresql://postgresuser:password@localhost:5432/pharmadb'
db = SQLDatabase.from_uri(CONNECTION_STRING)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)

agent_executor = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)

agent_executor.run("Describe the corporation table")
```

I get the following output in verbose mode:

```
> Entering new AgentExecutor chain...
Action: sql_db_list_tables
Action Input: ''
Observation: corporation, molecule, product_mat, productpack
Thought: The corporation table is probably one of those tables.
Action: sql_db_schema
Action Input: 'corporation'
Observation: Error: table_names {"'corporation'"} not found in database
Thought: I must have misspelled the name of the table. Let me try again.
Action: sql_db_schema
Action Input: 'corporation'
Observation: Error: table_names {"'corporation'"} not found in database
Thought: I must have misspelled the name of the table. Let me try again.
Action: sql_db_schema
Action Input: 'corporation'
Observation: Error: table_names {"'corporation'"} not found in database
Thought: I must have misspelled the name of the table. Let me try again.
Action: sql_db_schema
Action Input: 'corporation'
Observation: Error: table_names {"'corporation'"} not found in database
Thought: I must have misspelled the name of the table. Let me try again.
Action: sql_db_schema
Action Input: 'corporation'
Observation: Error: table_names {"'corporation'"} not found in database
Thought: I must have misspelled the name of the table. Let me try again.
Action: sql_db_schema
Action Input: 'corporation'
.
.
.
> Finished chain.
'Agent stopped due to iteration limit or time limit.'
```

### Expected behavior

Expected to correctly identify the table and its schema, run the query to retrieve the top 3 records from the table, and provide a description of the table.
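The observations suggest the model is passing the table name wrapped in single quotes, and the toolkit then treats the quotes as part of the name. A quick check of that hypothesis directly against the SQLDatabase wrapper (same connection string as above):

```python
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri(CONNECTION_STRING)
print(db.get_usable_table_names())          # ['corporation', 'molecule', ...]
print(db.get_table_info(["corporation"]))   # returns the schema
db.get_table_info(["'corporation'"])        # raises: table_names {"'corporation'"} not found in database
```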
create_sql_agent Error: table_names {"'<table name>'"} not found in database.
https://api.github.com/repos/langchain-ai/langchain/issues/12911/comments
5
2023-11-05T20:09:44Z
2024-07-19T16:06:59Z
https://github.com/langchain-ai/langchain/issues/12911
1,977,943,019
12,911
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

I am using RedisChatMessageHistory from LangChain, and I am trying to deploy it using Docker, but I am facing issues connecting to Redis. Can someone help me here? @hwchase17

### Suggestion:

_No response_
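For reference, a sketch of the usual Docker fix (assumptions: a Docker Compose service named `redis`, and an arbitrary session id — inside a Compose network the hostname is the service name, not `localhost`):

```python
from langchain.memory import RedisChatMessageHistory

history = RedisChatMessageHistory(
    session_id="user-123",
    url="redis://redis:6379/0",  # 'redis' = service name from docker-compose, not localhost
)
history.add_user_message("hello")
print(history.messages)
```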
Issue: How to deploy langchain using docker and redis used by langchain
https://api.github.com/repos/langchain-ai/langchain/issues/12910/comments
8
2023-11-05T18:40:09Z
2023-11-08T12:13:13Z
https://github.com/langchain-ai/langchain/issues/12910
1,977,910,131
12,910
[ "hwchase17", "langchain" ]
### Issue with current documentation: https://blog.langchain.dev/plan-and-execute-agents/ Links at the end of the Conclusion section do not work. [here (Python)](https://python.langchain.com/docs/modules/agents/agent_types/plan_and_execute?ref=blog.langchain.dev ) and [here (JS)](https://js.langchain.com/docs/modules/agents/agents/examples/plan_and_execute_agent?ref=blog.langchain.dev) are broken. ### Idea or request for content: _No response_
DOC: Links at the bottom of Plan and Execute Agents blog do not work
https://api.github.com/repos/langchain-ai/langchain/issues/12904/comments
3
2023-11-05T13:34:23Z
2024-02-11T16:06:36Z
https://github.com/langchain-ai/langchain/issues/12904
1,977,791,576
12,904
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

I am using Google Palm, FAISS, and HF Instruct Embeddings. Whenever I query with RetrievalQAWithSourcesChain, I get:

```
[chain/error] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 9:chain:LLMChain] [2.01s] Chain run errored with error: "IndexError('list index out of range')"
[chain/error] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain] [6.45s] Chain run errored with error: "IndexError('list index out of range')"
[chain/error] [1:chain:RetrievalQAWithSourcesChain] [7.13s] Chain run errored with error: "IndexError('list index out of range')"
```

Here is my whole code:

```
import os
import streamlit as st
import pickle
import time
import langchain
from langchain.llms import GooglePalm
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chains.qa_with_sources.loading import load_qa_with_sources_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import SeleniumURLLoader
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores import FAISS

urls = ["https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS",
        "https://www.livemint.com/companies/news/tata-motorss-ev-subsidiary-to-sign-mou-with-jlr-to-strengthen-ev-business-tata-motors-to-pay-royalty-fee-to-jlr-11698925980903.html"]

loader = SeleniumURLLoader(urls=urls)
data = loader.load()

llm = GooglePalm(google_api_key="", temperature=0.9, max_output_tokens=500)

r_splitter = RecursiveCharacterTextSplitter(
    separators=["\n\n", "\n", " ", "\t\t"],
    chunk_size=400,
    chunk_overlap=80
)
docs = r_splitter.split_documents(data)

embeddings = HuggingFaceInstructEmbeddings(
    query_instruction="Represent the query for retrieval: "
)
vector_index = FAISS.from_documents(docs, embeddings)

chain = RetrievalQAWithSourcesChain.from_llm(llm=llm, retriever=vector_index.as_retriever())
chain

langchain.debug = True
query = "summerise the text"
chain({"question": query}, return_only_outputs=True)
```

OUTPUT:

```
[chain/start] [1:chain:RetrievalQAWithSourcesChain] Entering Chain run with input: { "question": "summerise the text" } [chain/start] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain] Entering Chain run with input: [inputs] [chain/start] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain] Entering Chain run with input: { "input_list": [ { "context": "View more \n \n \n Posted by : kamal20", "question": "summerise the text" }, { "context": "View more \n \n \n Posted by : kamal20", "question": "summerise the text" }, { "context": "- - \n - - \n - - \n - - \n - -", "question": "summerise the text" }, { "context": "- - \n - - \n - - \n - - \n - -", "question": "summerise the text" } ] } [llm/start] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain > 5:llm:GooglePalm] Entering LLM run with input: { "prompts": [ "Use the following portion of a long document to see if any of the text is relevant to answer the question. 
\nReturn any relevant text verbatim.\nView more \n \n \n Posted by : kamal20\nQuestion: summerise the text\nRelevant text, if any:" ] } [llm/start] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain > 7:llm:GooglePalm] Entering LLM run with input: { "prompts": [ "Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\n- - \n - - \n - - \n - - \n - -\nQuestion: summerise the text\nRelevant text, if any:" ] } [llm/start] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain > 8:llm:GooglePalm] Entering LLM run with input: { "prompts": [ "Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\n- - \n - - \n - - \n - - \n - -\nQuestion: summerise the text\nRelevant text, if any:" ] } [llm/end] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain > 5:llm:GooglePalm] [4.40s] Exiting LLM run with output: { "generations": [ [ { "text": "This article discusses the concept of summarizing documents and provides a detailed overview of the steps involved in the process. It also provides examples of summaries and discusses the different types of summaries that can be created.", "generation_info": null } ] ], "llm_output": null, "run": null } [llm/end] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain > 6:llm:GooglePalm] [4.40s] Exiting LLM run with output: { "generations": [ [ { "text": "The main points are:", "generation_info": null } ] ], "llm_output": null, "run": null } [llm/end] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain > 7:llm:GooglePalm] [4.40s] Exiting LLM run with output: { "generations": [ [ { "text": "- - \n\n- - ", "generation_info": null } ] ], "llm_output": null, "run": null } [llm/end] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain > 8:llm:GooglePalm] [4.40s] Exiting LLM run with output: { "generations": [ [ { "text": "- - ", "generation_info": null } ] ], "llm_output": null, "run": null } [chain/end] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 4:chain:LLMChain] [4.41s] Exiting Chain run with output: { "outputs": [ { "text": "This article discusses the concept of summarizing documents and provides a detailed overview of the steps involved in the process. It also provides examples of summaries and discusses the different types of summaries that can be created." }, { "text": "The main points are:" }, { "text": "- - \n\n- - " }, { "text": "- - " } ] } [chain/start] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 9:chain:LLMChain] Entering Chain run with input: { "question": "summerise the text", "summaries": "Content: This article discusses the concept of summarizing documents and provides a detailed overview of the steps involved in the process. 
It also provides examples of summaries and discusses the different types of summaries that can be created.\nSource: https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS\n\nContent: The main points are:\nSource: https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS\n\nContent: - - \n\n- - \nSource: https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS\n\nContent: - - \nSource: https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS" } [llm/start] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 9:chain:LLMChain > 10:llm:GooglePalm] Entering LLM run with input: { "prompts": [ "Given the following extracted parts of a long document and a question, create a final answer with references (\"SOURCES\"). \nIf you don't know the answer, just say that you don't know. Don't try to make up an answer.\nALWAYS return a \"SOURCES\" part in your answer.\n\nQUESTION: Which state/country's law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. 
\n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we won’t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet’s use this moment to reset. Let’s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet’s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\nSource: 24-pl\nContent: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as I’ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what’s happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\nSource: 5-pl\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt’s based on DARPA—the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose—to drive breakthroughs in cancer, Alzheimer’s, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. 
\n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: summerise the text\n=========\nContent: This article discusses the concept of summarizing documents and provides a detailed overview of the steps involved in the process. It also provides examples of summaries and discusses the different types of summaries that can be created.\nSource: https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS\n\nContent: The main points are:\nSource: https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS\n\nContent: - - \n\n- - \nSource: https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS\n\nContent: - - \nSource: https://www.moneycontrol.com/india/stockpricequote/ironsteel/tatasteel/TIS\n=========\nFINAL ANSWER:" ] } [llm/end] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 9:chain:LLMChain > 10:llm:GooglePalm] [2.01s] Exiting LLM run with output: { "generations": [ [] ], "llm_output": null, "run": null } [chain/error] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 9:chain:LLMChain] [2.01s] Chain run errored with error: "IndexError('list index out of range')" [chain/error] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain] [6.45s] Chain run errored with error: "IndexError('list index out of range')" [chain/error] [1:chain:RetrievalQAWithSourcesChain] [7.13s] Chain run errored with error: "IndexError('list index out of range')"
```

### Suggestion:

_No response_
Issue: <[chain/error] [1:chain:RetrievalQAWithSourcesChain > 3:chain:MapReduceDocumentsChain > 9:chain:LLMChain] [2.01s] Chain run errored with error: "IndexError('list index out of range')">
https://api.github.com/repos/langchain-ai/langchain/issues/12903/comments
9
2023-11-05T12:19:52Z
2024-06-30T16:03:18Z
https://github.com/langchain-ai/langchain/issues/12903
1,977,763,705
12,903
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. My openai llm see to run fine but some how not getting the answer in Finished chain. Model used ``` def llm_function(csv_file_name): agent = create_csv_agent( ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613", request_timeout=120), csv_file_name, verbose=True, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, ) # Pass a query to the chain query = QUESTION query = query + " using tool python_repl_ast" #response = agent.run(query) try: response = agent.run(query) except ValueError as e: response = str(e) if not response.startswith("Could not parse LLM output: `"): raise e response = response.removeprefix("Could not parse LLM output: `").removesuffix("`") return response ``` Results ``` > Entering new AgentExecutor chain... Thought: To find the least expensive product, I need to sort the dataframe by the "Price" column in ascending order and then select the first row. Action: python_repl_ast Action Input: df.sort_values("Price").head(1) Observation: Product Name Price Price per Count Bought Last Month 296 MAGNESIUM PHOSPHORICUM 6C 30 ML SBL 90 Price_count not found Data not available Thought:The least expensive product is "MAGNESIUM PHOSPHORICUM 6C 30 ML SBL" with a price of 90. Final Answer: "MAGNESIUM PHOSPHORICUM 6C 30 ML SBL" > Finished chain. None ``` ### Suggestion: _No response_
Finished chain - None, when I run an csv_agent on a csv file.
https://api.github.com/repos/langchain-ai/langchain/issues/12900/comments
3
2023-11-05T09:30:48Z
2024-02-11T16:06:41Z
https://github.com/langchain-ai/langchain/issues/12900
1,977,689,238
12,900
[ "hwchase17", "langchain" ]
### System Info I have successfully installed the apoc plugin by pressing the neo4j document, and run ` return apoc.version() ` on the neo4j client to return successfully. However, when Neo4jGraph was used in Langchain connection, errors were still reported. ``` ValueError: Could not use APOC procedures. Please ensure the APOC plugin is installed in Neo4j and that 'apoc.meta.data()' is allowed in Neo4j configuration ``` ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.graphs import Neo4jGraph graph = Neo4jGraph( 'bolt://localhost:7687', 'neo4j', 'chenhuabc' ) print(graph) # ValueError: Could not use APOC procedures. Please ensure the APOC plugin is installed in Neo4j and that 'apoc.meta.data()' is allowed in Neo4j configuration ### Expected behavior I have successfully installed the apoc plugin by pressing the neo4j document, and run ` return apoc.version() ` on the neo4j client to return successfully. However, when Neo4jGraph was used in Langchain connection, errors were still reported. ``` ValueError: Could not use APOC procedures. Please ensure the APOC plugin is installed in Neo4j and that 'apoc.meta.data()' is allowed in Neo4j configuration ```
Langchain connects Neo4j v5.9 error Could not use APOC procedures
https://api.github.com/repos/langchain-ai/langchain/issues/12901/comments
14
2023-11-05T09:13:20Z
2024-05-20T05:19:27Z
https://github.com/langchain-ai/langchain/issues/12901
1,977,701,382
12,901
[ "hwchase17", "langchain" ]
### System Info langchain==0.0.330 Python 3.10.12 ### Who can help? @hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction from langchain.retrievers import ContextualCompressionRetriever from langchain.retrievers.document_compressors import CohereRerank # compressor = CohereRerank() # Error Encountered --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [<ipython-input-29-be190c4da94f>](https://localhost:8080/#) in <cell line: 5>() 3 4 # ----> 5 compressor = CohereRerank() 6 # 7 compression_retriever = ContextualCompressionRetriever( 2 frames /usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__() /usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.validate_model() [/usr/local/lib/python3.10/dist-packages/langchain/retrievers/document_compressors/cohere_rerank.py](https://localhost:8080/#) in validate_environment(cls, values) 53 values, "cohere_api_key", "COHERE_API_KEY" 54 ) ---> 55 client_name = values["user_agent"] 56 values["client"] = cohere.Client(cohere_api_key, client_name=client_name) 57 return values KeyError: 'user_agent' ### Expected behavior The below error should not have been encountered as the COHERE API Key has alreday been set and cohere.Client() is workin with the api key provided --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [<ipython-input-29-be190c4da94f>](https://localhost:8080/#) in <cell line: 5>() 3 4 # ----> 5 compressor = CohereRerank() 6 # 7 compression_retriever = ContextualCompressionRetriever( 2 frames /usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__() /usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.validate_model() [/usr/local/lib/python3.10/dist-packages/langchain/retrievers/document_compressors/cohere_rerank.py](https://localhost:8080/#) in validate_environment(cls, values) 53 values, "cohere_api_key", "COHERE_API_KEY" 54 ) ---> 55 client_name = values["user_agent"] 56 values["client"] = cohere.Client(cohere_api_key, client_name=client_name) 57 return values KeyError: 'user_agent'
Encounter KeyError: 'user_agent' while using CohereRerank()
https://api.github.com/repos/langchain-ai/langchain/issues/12899/comments
6
2023-11-05T07:40:49Z
2024-03-14T10:18:11Z
https://github.com/langchain-ai/langchain/issues/12899
1,977,674,946
12,899
[ "hwchase17", "langchain" ]
### System Info Platform: macOS Sonoma 14.0 Python Version: 3.11 LangChain Version: 0.0.330 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [x] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python prompt = PromptTemplate.from_template("What is a good name for a company that makes {product}?") chain = LLMChain(llm=llm, prompt=prompt) for chunk in chain.stream({ "product": "colorful socks" }) print(f"Chunk: {chunk}") ``` ### Expected behavior One expects to receive chunks when streaming, but because the `stream` method is not implemented in the `LLMChain class`, it falls back to the `stream` method in the base `Chain` class. This results in a `chunk` variable containing the full response. This can be fixed easily by something like this. ```python from langchain.chains import LLMChain class MyChain(LLMChain): def stream( self, input: Input, config: Optional[RunnableConfig] = None, run_manager: Optional[CallbackManagerForChainRun] = None, **kwargs: Optional[Any], ) -> Iterator[Output]: prompts, stop = self.prep_prompts([input], run_manager=run_manager) yield from self.llm.stream(input=prompts[0], config=config, **kwargs) ``` ```python prompt = PromptTemplate.from_template("What is a good name for a company that makes {product}?") chain = MyChain(llm=llm, prompt=prompt) for chunk in chain.stream({ "product": "colorful socks" }) print(f"Chunk: {chunk}") ✅ ``` If you agree, I would like to create PR which will fix that.
LLMChain does not stream
https://api.github.com/repos/langchain-ai/langchain/issues/12894/comments
7
2023-11-04T22:00:53Z
2024-07-18T16:07:19Z
https://github.com/langchain-ai/langchain/issues/12894
1,977,527,950
12,894
[ "hwchase17", "langchain" ]
### System Info pip freeze attrs==19.3.0 Automat==0.8.0 blinker==1.4 certifi==2019.11.28 chardet==3.0.4 Click==7.0 cloud-init==23.3.1 colorama==0.4.3 command-not-found==0.3 configobj==5.0.6 constantly==15.1.0 cryptography==2.8 dbus-python==1.2.16 diskcache==5.6.3 distro==1.4.0 distro-info==0.23+ubuntu1.1 entrypoints==0.3 httplib2==0.14.0 hyperlink==19.0.0 idna==2.8 importlib-metadata==1.5.0 incremental==16.10.1 Jinja2==2.10.1 jsonpatch==1.22 jsonpointer==2.0 jsonschema==3.2.0 keyring==18.0.1 language-selector==0.1 launchpadlib==1.10.13 lazr.restfulclient==0.14.2 lazr.uri==1.0.3 llama-cpp-python==0.2.11 MarkupSafe==1.1.0 more-itertools==4.2.0 netifaces==0.10.4 numpy==1.24.4 oauthlib==3.1.0 pexpect==4.6.0 pyasn1==0.4.2 pyasn1-modules==0.2.1 PyGObject==3.36.0 PyHamcrest==1.9.0 PyJWT==1.7.1 pymacaroons==0.13.0 PyNaCl==1.3.0 pyOpenSSL==19.0.0 pyrsistent==0.15.5 pyserial==3.4 python-apt==2.0.1+ubuntu0.20.4.1 python-debian==0.1.36+ubuntu1.1 PyYAML==5.3.1 requests==2.22.0 requests-unixsocket==0.2.0 SecretStorage==2.3.1 service-identity==18.1.0 simplejson==3.16.0 six==1.14.0 sos==4.5.6 ssh-import-id==5.10 systemd-python==234 Twisted==18.9.0 typing-extensions==4.8.0 ubuntu-advantage-tools==8001 ufw==0.36 unattended-upgrades==0.1 urllib3==1.25.8 wadllib==1.3.3 zipp==1.0.0 zope.interface==4.7.1 ### Who can help? I'm trying to run an agent based on local `ollama mistral:7b-instruct` and stop is not working. Is it a bug or I configured wrong? Thanks in advance! ``` from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser from langchain.prompts import StringPromptTemplate from langchain.llms import OpenAI from langchain.utilities import SerpAPIWrapper from langchain.chains import LLMChain from typing import List, Union from langchain.schema import AgentAction, AgentFinish, OutputParserException import re from langchain.llms import Ollama from langchain.callbacks.manager import CallbackManager from langchain.callbacks.streaming_stdout_final_only import FinalStreamingStdOutCallbackHandler from langchain.agents import tool @tool def get_word_length(word: str) -> int: """Returns the length of a word.""" return len(word.strip().split()[0]) tools = [get_word_length] # Set up the base template template = """Answer the following questions as best you can. You have access to the following tools: {tools} Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [{tool_names}] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! If you can use a tool, use it before the final answer. Do not make up an observation before calling an action! 
Question: {input} {agent_scratchpad}""" # Set up a prompt template class CustomPromptTemplate(StringPromptTemplate): # The template to use template: str # The list of tools available tools: List[Tool] def format(self, **kwargs) -> str: # Get the intermediate steps (AgentAction, Observation tuples) # Format them in a particular way intermediate_steps = kwargs.pop("intermediate_steps") thoughts = "" for action, observation in intermediate_steps: thoughts += action.log thoughts += f"\nObservation: {observation}\nThought: " # Set the agent_scratchpad variable to that value kwargs["agent_scratchpad"] = thoughts # Create a tools variable from the list of tools provided kwargs["tools"] = "\n".join([f"{tool.name}: {tool.description}" for tool in self.tools]) # Create a list of tool names for the tools provided kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools]) return self.template.format(**kwargs) prompt = CustomPromptTemplate( template=template, tools=tools, # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically # This includes the `intermediate_steps` variable because that is needed input_variables=["input", "intermediate_steps"] ) class CustomOutputParser(AgentOutputParser): def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]: # Check if agent should finish if "Final Answer:" in llm_output: return AgentFinish( # Return values is generally always a dictionary with a single `output` key # It is not recommended to try anything else at the moment :) return_values={"output": llm_output.split("Final Answer:")[-1].strip()}, log=llm_output, ) # Parse out the action and action input regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)" match = re.search(regex, llm_output, re.DOTALL) if not match: raise OutputParserException(f"Could not parse LLM output: `{llm_output}`") action = match.group(1).strip() action_input = match.group(2) # Return the action and action input return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output) output_parser = CustomOutputParser() ollama = Ollama(model="mistral:7b-instruct", callbacks=[FinalStreamingStdOutCallbackHandler()]) # LLM chain consisting of the LLM and a prompt llm_chain = LLMChain(llm=ollama, prompt=prompt) tool_names = [tool.name for tool in tools] agent = LLMSingleActionAgent( llm_chain=llm_chain, output_parser=output_parser, stop=["Observation:"], allowed_tools=tool_names ) agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True) agent_executor.run("How many letters in the word astronomia?") ``` @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Run the code above ### Expected behavior Expected to stop on Observation
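A possible workaround worth trying (an assumption on my side — `LLMSingleActionAgent`'s `stop` should normally be forwarded, so treat this as a mitigation, not a root-cause fix): the Ollama wrapper itself accepts a `stop` list, so the sequence can be pinned at the model level:

```python
ollama = Ollama(
    model="mistral:7b-instruct",
    stop=["Observation:"],  # enforce the stop sequence on every generation
    callbacks=[FinalStreamingStdOutCallbackHandler()],
)
```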
It seems that stop does not work with ollama models.
https://api.github.com/repos/langchain-ai/langchain/issues/12892/comments
8
2023-11-04T21:30:07Z
2024-05-16T16:07:34Z
https://github.com/langchain-ai/langchain/issues/12892
1,977,518,229
12,892
[ "hwchase17", "langchain" ]
### Issue with current documentation: Page link: https://python.langchain.com/docs/modules/agents/agent_types/react#using-chat-models Under "Using chat models", second paragraph: ```The main difference here is a different prompt. We will use JSON to encode the agent's actions (chat models are a bit tougher to steet, so using JSON helps to enforce the output format).``` _steet_ should be replaced with _steer_. ### Idea or request for content: _No response_
DOC: Typo under docs for using chat models for ReAct agents
https://api.github.com/repos/langchain-ai/langchain/issues/12891/comments
2
2023-11-04T20:46:53Z
2024-02-10T16:06:38Z
https://github.com/langchain-ai/langchain/issues/12891
1,977,501,097
12,891
[ "hwchase17", "langchain" ]
### System Info Name: langchain Version: 0.0.308 Macos ventura 13.2.1 ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I have two Custom tools : CustomJiraTicketWriting which use FewShotChatMessagePromptTemplate and StructuredOutputParser.from_response_schemas in order to produce dictionary output with a compatible & developed JSON representing a Jira ticket. CustomJiraTicketPOST which accept 4 inputs : ticket / email / url / token in order to POST the ticket on Jira. Both of these function works great separately, but when I use an Structured_Chat_Zero_Shot_Description agent to create a ReACT decision, it appears that the Agent refuse to use any of the tools. I wonder if i do something wrong or if the Structured_Chat_Zero_Shot_Description agent is bugged ? ``` import os import re import json import openai import random import langchain import demjson3 import requests from langchain.chat_models import ChatOpenAI from requests.auth import HTTPBasicAuth from openai import ChatCompletion from typing import Optional, Type, List, Dict, Union from langchain.llms import OpenAI,GPT4All from pydantic import BaseModel, Field from langchain.agents import (AgentType, initialize_agent, AgentOutputParser, LLMSingleActionAgent, AgentExecutor, load_tools) from langchain.schema import AgentAction, AgentFinish from langchain.cache import InMemoryCache from langchain.prompts.few_shot import FewShotPromptTemplate from langchain.prompts.few_shot_with_templates import FewShotPromptWithTemplates from langchain.prompts.few_shot import FewShotPromptTemplate from langchain.pydantic_v1 import BaseModel, Field, validator from langchain.chains import LLMChain, SequentialChain, SimpleSequentialChain, TransformChain from langchain.output_parsers import (OutputFixingParser, RetryWithErrorOutputParser, PydanticOutputParser, CombiningOutputParser, ResponseSchema, StructuredOutputParser) from langchain.chains.conversation.memory import ConversationBufferWindowMemory from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.tools import BaseTool, StructuredTool, Tool, tool from langchain.prompts import (ChatPromptTemplate, PromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate, FewShotChatMessagePromptTemplate, StringPromptTemplate, ) from langchain.schema import (AIMessage, HumanMessage, SystemMessage ) from langchain.callbacks.manager import ( AsyncCallbackManagerForToolRun, CallbackManagerForToolRun, ) email = '[email protected]' url = "https://XXXXXXXXXX.atlassian.net/rest/api/3/issue" debug_mode = True langchain.debug = True langchain.llm_cache = InMemoryCache() os.environ["JIRA_API_TOKEN"]= 'XXXXXXXXXX' api_token_jira = os.environ["JIRA_API_TOKEN"] os.environ["JIRA_USERNAME"]= 'XXXXXXXXXX' os.environ["JIRA_INSTANCE_URL"] = "XXXXXXXXXX" site_jira = os.environ["JIRA_INSTANCE_URL"] os.environ['OPENAI_API_KEY']='XXXXXXXXXX' openai_api_key = os.getenv('OPENAI_API_KEY') api_key = os.getenv('OPENAI_API_KEY') model_name = 'gpt-3.5-turbo' temperature = 0.0 model_llm = OpenAI(model_name=model_name, temperature=temperature, max_tokens=3500) 
model_chat = ChatOpenAI( temperature=temperature, max_tokens=3100, model_name=model_name) turbo_llm = ChatOpenAI( temperature=temperature, model_name=model_name, max_tokens=3100,) class CustomJiraTicketWriting(BaseTool): name = "Jira_Ticket_Write" description = (""" Useful to transform a summary into a real JSON Jira ticket. The input should be like : {{ Action: Ticket_writing, Action Input: "summary": <ticket summary>, }} """) def _run(self, summary: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None, ) -> dict: response_schema = ResponseSchema( name="jira_ticket", description="Jira ticket information", structure={ "fields": { "project": { "key": str }, "summary": str, "issuetype": { "name": str }, "priority": { "name": str }, "description": { "type": str, "version": int, "content": [ { "type": str, "content": [ { "type": str, "text": str } ] }, { "type": str, "attrs": { "level": int }, "content": [ { "type": str, "text": str } ] } ] } } } ) response_schemas=[response_schema] output_parser = StructuredOutputParser.from_response_schemas(response_schemas) format_instructions = output_parser.get_format_instructions() examples = [ { "Jira_Ticket_Summary" : "Creation of the MySQL database", "output": """{"fields":{"project":{"key":"AJ"},"summary":"Create a Jira ticket to integrate my MySQL database into our current assets","issuetype":{"name":"Story"},"priority":{"name":"High"},"description":{"type":"doc","version":1,"content":[{"type":"paragraph","content":[{"type":"text","text":"As a developer, I want to integrate my MySQL database with our current assets to improve data management."}]},{"type":"heading","attrs":{"level":2},"content":[{"type":"text","text":"Acceptance Criteria:"}]},{"type":"paragraph","content":[{"type":"text","text":"- The MySQL database is successfully integrated with the application."}]},{"type":"paragraph","content":[{"type":"text","text":"- Data can be efficiently stored and retrieved from the integrated MySQL database."}]},{"type":"paragraph","content":[{"type":"text","text":"- The integration process follows best practices and security standards."}]},{"type":"paragraph","content":[{"type":"text","text":"- The integration is documented for future reference."}]},{"type":"heading","attrs":{"level":2},"content":[{"type":"text","text":"Subtasks:"}]},{"type":"paragraph","content":[{"type":"text","text":"- Analyze the structure of the MySQL database."}]},{"type":"paragraph","content":[{"type":"text","text":"- Create integration scripts for data migration."}]},{"type":"paragraph","content":[{"type":"text","text":"- Implement data synchronization with the application."}]},{"type":"paragraph","content":[{"type":"text","text":"- Perform testing and quality assurance of the integration."}]},{"type":"paragraph","content":[{"type":"text","text":"- Document the integration process and configurations."}]}]}}""" }] example_prompt = ChatPromptTemplate.from_messages( [ ("human", "{Jira_Ticket_Summary}"), ("ai", "{output}")] ) few_shot_prompt = FewShotChatMessagePromptTemplate( example_prompt=example_prompt, examples=examples, ) prompt_string = """ Jira_Ticket_Summary: {Jira_Ticket_Summary} {format_instructions} """ prompt = ChatPromptTemplate.from_messages( [ ("system", "You are a Jira assistant specialized in creating technical tickets. You always develop the tickets with precise examples of sub-tasks and acceptance criteria. 
Remember to use double quotes for keys."), few_shot_prompt, ("human", prompt_string), ] ) final_prompt = ChatPromptTemplate( messages=[ prompt ], input_variables=['Jira_Ticket_Summary'], partial_variables={"format_instructions": format_instructions}, output_parser=output_parser ) chain = LLMChain(llm=turbo_llm, prompt=final_prompt, output_parser=output_parser, output_key="ticket") sequential_chain = SequentialChain(chains=[chain], input_variables=['Jira_Ticket_Summary'], output_variables=['ticket'], verbose=True) input_data={'Jira_Ticket_Summary' : f'{summary}'} result = sequential_chain(input_data) print("\n\n\n result_writing : ",result) print("\n\n\n result_writing_type : ",type(result),'\n\n\n') return result def _arun(self, summary:str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None, ) -> str: raise NotImplementedError("This tool does not support async") class CustomJiraTicketPOST(BaseTool): name = "Jira_Ticket_Post" description = ("""\ Useful to POST a ticket in Jira Software The input should be like : {{ Action: Jira_Post, Action Input: "ticket": <JSON-string of the ticket>, "email": <email associated with the Jira Software account>, "url": <url to POST the ticket at>, "token": <identification JIRA API token> }} """) def _run( self, ticket: str, email: str, url: str, token: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None ) -> str: # Retrieve the values using lowercase keys url = url email = email body = ticket auth = HTTPBasicAuth(email, api_token_jira) headers = { "Accept": "application/json", "Content-Type": "application/json" } response = requests.request( "POST", url, data=body, headers=headers, auth=auth ) return json.dumps(json.loads(response.text), sort_keys=True, indent=4, separators=(",", ": ")) async def _arun( self, ticket: str, email: str, url: str, token: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None, ) -> str: """Use the tool asynchronously.""" raise NotImplementedError("custom_search does not support async") PREFIX = """You are a Jira Assistant. You're designed to assist the user with a wide range of tasks related to Jira object management. It goes from understand the user's need for its tickets, writing technical and detailed Jira tickets with descriptions, subtasks and acceptance criteria’s to realize API call (POST, PUT or GET) with Jira objects. """ FORMAT_INSTRUCTIONS = """\ \ To complete the request, think step by step about what you do. Requirements :\ \ Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input). Provide only ONE action per $JSON_BLOB, as shown: """ {"action": "{TOOL_NAME}", "action_input": "{INPUT}"} """ To answer the request, please follow the format:\ \ """ Question: request to answer Thought: You should always think about what to do Action: the action to take, should be one of [{tool_names}] and must be a $JSON_BLOB Observation: Action result ... (repeat Thought/Action/Observation N times) Thought: I know what to respond Action: """ {"action": "Final Answer", "action_input": "Final response to human"} """ """ Remarks : Before to POST a ticket, you need to write it. Remember to act as a Jira assistant and to conclude by "Final Answer" when you succeed ALL the tasks. """ SUFFIX = """ Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation. 
Previous conversation history: {chat_history} Instructions: {input} {agent_scratchpad} """ memory = ConversationBufferWindowMemory( memory_key='chat_history', k=3, return_messages=True ) tools = [CustomJiraTicketPOST(),CustomJiraTicketWriting()] conversational_agent = initialize_agent( agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, tools=tools, llm=turbo_llm, verbose=True, max_iterations=3, early_stopping_method='generate', handle_parsing_errors="Check your output and make sure it conforms!", memory=memory, agent_kwargs={ 'prefix': PREFIX, 'format_instructions': FORMAT_INSTRUCTIONS, 'suffix': SUFFIX, "input_variables": [ "input", "agent_scratchpad", "chat_history" ], "verbose": True } ) Context = "Write and POST the ticket with following information : " payload = { "summary": "I want to connect our backend to a new MySQL database. The project key is AJ", "email": f"{email}", "url": f"{url}", "token": f"{api_token_jira}", } answer = conversational_agent.run(f'{Context}' + json.dumps(payload)) ``` ### Expected behavior Expected behavior : Well at least use the Writing tools, and then use the Post tool. There is just an uncertainty regarding the acception of the Output of the first tool for the second tool. Actual behavior: ``` [...] } [llm/end] [1:chain:AgentExecutor > 2:chain:LLMChain > 3:llm:ChatOpenAI] [5.15s] Exiting LLM run with output: { "generations": [ [ { ... [chain/end] [1:chain:AgentExecutor] [5.15s] Exiting Chain run with output: { "output": "Action: Jira_Ticket_Write\nAction Input: \n{\n \"summary\": \"I want to connect our backend to a new MySQL database. The project key is AJ\"\n}" } ```
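Editor's note — a hedged sketch for anyone debugging the same symptom: the structured chat agent builds each tool's JSON argument schema from the tool's `args_schema`, and `Jira_Ticket_Post` above takes four arguments but declares no schema, so the agent only ever sees a single-string input. The class below is illustrative only (field names and wiring are assumptions, not the issue author's code):

```python
from typing import Type
from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import BaseTool

class PostArgs(BaseModel):
    ticket: str = Field(..., description="JSON string of the ticket to POST")
    email: str = Field(..., description="Jira account email")
    url: str = Field(..., description="Jira REST endpoint to POST to")
    token: str = Field(..., description="Jira API token")

class JiraTicketPost(BaseTool):
    name = "Jira_Ticket_Post"
    description = "POST a previously written ticket to Jira Software."
    args_schema: Type[BaseModel] = PostArgs  # advertises all four inputs to the agent

    def _run(self, ticket: str, email: str, url: str, token: str) -> str:
        ...  # same requests-based POST logic as in the issue body
```

With the schema declared, the agent's `action_input` JSON blob can carry all four keys rather than a single opaque string.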
Langchain Structured_Chat_Zero_Shot_Description agent uses no tools
https://api.github.com/repos/langchain-ai/langchain/issues/12883/comments
3
2023-11-04T13:49:20Z
2024-02-10T16:06:42Z
https://github.com/langchain-ai/langchain/issues/12883
1,977,348,240
12,883
[ "hwchase17", "langchain" ]
### System Info

Name: langchain
Version: 0.0.308
macOS 13.2.1

### Who can help?

@hwchase17 @agola11

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Hi everybody! I have set up an `OPENAI_MULTI_FUNCTIONS` agent which seems to do great with a few tokens, but even though all the information is in the prompt, it is not able to use it for the tool input.

What is the issue: I have two custom tools:

- `CustomJiraTicketWriting`, which uses `FewShotChatMessagePromptTemplate` and `StructuredOutputParser.from_response_schemas` to produce a dictionary output containing a well-formed, fully developed JSON representation of a Jira ticket.
- `CustomJiraTicketPOST`, which accepts 4 inputs (ticket / email / url / token) in order to POST the ticket to Jira.

Both of these functions work great separately. As shown below, the agent uses the first tool, writes the ticket, and injects it into the new prompt — but when it then decides to use the second tool, it omits some of its required inputs.

```python
import os
import json
import langchain
import requests
from langchain.chat_models import ChatOpenAI
from requests.auth import HTTPBasicAuth
from typing import Optional, Type
from langchain.llms import OpenAI
from pydantic import BaseModel, Field
from langchain.agents import (AgentType, initialize_agent)
from langchain.cache import InMemoryCache
from langchain.pydantic_v1 import BaseModel, Field
from langchain.chains import LLMChain, SequentialChain
from langchain.output_parsers import (ResponseSchema, StructuredOutputParser)
from langchain.tools import BaseTool
from langchain.prompts import (ChatPromptTemplate, MessagesPlaceholder,
                               FewShotChatMessagePromptTemplate)
from langchain.memory import ConversationBufferMemory
from langchain.schema import (SystemMessage)
from langchain.callbacks.manager import (AsyncCallbackManagerForToolRun,
                                         CallbackManagerForToolRun)

debug_mode = True
langchain.debug = True

email = 'XXXXXXXXXX'
url = "https://XXXXXXXXXX.atlassian.net/rest/api/3/issue"

langchain.llm_cache = InMemoryCache()

os.environ["JIRA_API_TOKEN"] = 'XXXXXXXXXX'
api_token_jira = os.environ["JIRA_API_TOKEN"]
os.environ["JIRA_USERNAME"] = 'XXXXXXXXXX'
os.environ["JIRA_INSTANCE_URL"] = "XXXXXXXXXX"
site_jira = os.environ["JIRA_INSTANCE_URL"]
os.environ['OPENAI_API_KEY'] = 'XXXXXXXXXX'
openai_api_key = os.getenv('OPENAI_API_KEY')
api_key = os.getenv('OPENAI_API_KEY')

model_name = 'gpt-3.5-turbo'
temperature = 0.0
model_llm = OpenAI(model_name=model_name, temperature=temperature, max_tokens=3500)
turbo_llm = ChatOpenAI(temperature=temperature, model_name=model_name, max_tokens=3100)


class SummaryTicket(BaseModel):
    """Input for writing Jira Ticket"""
    summary: str = Field(..., description="ticket summary")


class POST(BaseModel):
    """Input for POST a ticket"""
    ticket: str = Field(..., description="Jira ticket as a dictionary")
    email: str = Field(..., description="email associated with the Jira Software account")
    url: str = Field(..., description="url to POST a Jira ticket at")
    token: str = Field(..., description="Identification JIRA API token")


class CustomJiraTicketWriting(BaseTool):
    name = "Jira_Ticket_Write"
    description = ("""
    Useful to transform a summary into a real JSON Jira ticket.
    The input should be like :
    {{
    Action: Ticket_writing,
    Action Input: "summary": <ticket summary>,
    }}
    """)

    def _run(self,
             summary: str,
             run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
             ) -> dict:
        response_schema = ResponseSchema(
            name="ticket",
            description="Jira ticket information",
            structure={
                "fields": {
                    "project": {"key": str},
                    "summary": str,
                    "issuetype": {"name": str},
                    "priority": {"name": str},
                    "description": {
                        "type": str,
                        "version": int,
                        "content": [
                            {"type": str, "content": [{"type": str, "text": str}]},
                            {"type": str, "attrs": {"level": int},
                             "content": [{"type": str, "text": str}]},
                        ],
                    },
                }
            },
        )
        response_schemas = [response_schema]
        output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
        format_instructions = output_parser.get_format_instructions()

        examples = [
            {
                "Jira_Ticket_Summary": "Creation of the MySQL database",
                "output": """{"fields":{"project":{"key":"AJ"},"summary":"Create a Jira ticket to integrate my MySQL database into our current assets","issuetype":{"name":"Story"},"priority":{"name":"High"},"description":{"type":"doc","version":1,"content":[{"type":"paragraph","content":[{"type":"text","text":"As a developer, I want to integrate my MySQL database with our current assets to improve data management."}]},{"type":"heading","attrs":{"level":2},"content":[{"type":"text","text":"Acceptance Criteria:"}]},{"type":"paragraph","content":[{"type":"text","text":"- The MySQL database is successfully integrated with the application."}]},{"type":"paragraph","content":[{"type":"text","text":"- Data can be efficiently stored and retrieved from the integrated MySQL database."}]},{"type":"paragraph","content":[{"type":"text","text":"- The integration process follows best practices and security standards."}]},{"type":"paragraph","content":[{"type":"text","text":"- The integration is documented for future reference."}]},{"type":"heading","attrs":{"level":2},"content":[{"type":"text","text":"Subtasks:"}]},{"type":"paragraph","content":[{"type":"text","text":"- Analyze the structure of the MySQL database."}]},{"type":"paragraph","content":[{"type":"text","text":"- Create integration scripts for data migration."}]},{"type":"paragraph","content":[{"type":"text","text":"- Implement data synchronization with the application."}]},{"type":"paragraph","content":[{"type":"text","text":"- Perform testing and quality assurance of the integration."}]},{"type":"paragraph","content":[{"type":"text","text":"- Document the integration process and configurations."}]}]}}"""
            }
        ]
        example_prompt = ChatPromptTemplate.from_messages(
            [("human", "{Jira_Ticket_Summary}"),
             ("ai", "{output}")]
        )
        few_shot_prompt = FewShotChatMessagePromptTemplate(
            example_prompt=example_prompt,
            examples=examples,
        )
        prompt_string = """
        Jira_Ticket_Summary: {Jira_Ticket_Summary}
        {format_instructions}
        """
        prompt = ChatPromptTemplate.from_messages(
            [
                ("system", "You are a Jira assistant specialized in creating technical tickets. "
                           "You always develop the tickets with precise examples of sub-tasks "
                           "and acceptance criteria. Remember to use double quotes for keys."),
                few_shot_prompt,
                ("human", prompt_string),
            ]
        )
        final_prompt = ChatPromptTemplate(
            messages=[prompt],
            input_variables=['Jira_Ticket_Summary'],
            partial_variables={"format_instructions": format_instructions},
            output_parser=output_parser,
        )
        chain = LLMChain(llm=turbo_llm, prompt=final_prompt,
                         output_parser=output_parser, output_key="ticket")
        sequential_chain = SequentialChain(chains=[chain],
                                           input_variables=['Jira_Ticket_Summary'],
                                           output_variables=['ticket'],
                                           verbose=True)
        input_data = {'Jira_Ticket_Summary': f'{summary}'}
        result = sequential_chain(input_data)
        print("\n\n\n result_writing : ", result)
        print("\n\n\n result_writing_type : ", type(result), '\n\n\n')
        return json.dumps(result['ticket'])

    def _arun(self,
              summary: str,
              run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
              ) -> str:
        raise NotImplementedError("This tool does not support async")

    args_schema: Optional[Type[BaseModel]] = SummaryTicket


class CustomJiraTicketPOST(BaseTool):  # the original post showed "class ≠(BaseTool):", an apparent paste corruption; the tools list below references CustomJiraTicketPOST
    name = "Jira_Ticket_Post"
    description = ("""\
    Useful to POST a ticket in Jira Software after you wrote it.
    The input should be like :
    {{
    Action: Jira_Post,
    Action Input: "ticket": <JSON of the ticket>,
                  "email": <email associated with the Jira Software account>,
                  "url": <url to POST the ticket at>,
                  "token": <identification JIRA API token>
    }}
    """)

    def _run(self,
             ticket: str,
             email: str,
             url: str,
             token: str,
             run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:
        # Retrieve the values using lowercase keys
        ticket = json.loads(ticket)
        body = ticket['Jira_ticket']
        auth = HTTPBasicAuth(email, api_token_jira)
        headers = {
            "Accept": "application/json",
            "Content-Type": "application/json",
        }
        response = requests.request("POST", url, data=json.dumps(body),
                                    headers=headers, auth=auth)
        print("\n\nPOST : ",
              json.dumps(json.loads(response.text), sort_keys=True,
                         indent=4, separators=(",", ": ")),
              "\n\n")
        return json.dumps(json.loads(response.text), sort_keys=True,
                          indent=4, separators=(",", ": "))

    async def _arun(self,
                    ticket: str,
                    email: str,
                    url: str,
                    token: str,
                    run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("custom_search does not support async")

    args_schema: Optional[Type[BaseModel]] = POST


PREFIX = """You are a Jira Assistant.
You're designed to assist the user with a wide range of tasks related to Jira object management. 
It goes from understand the user's need for its tickets, writing technical and detailed Jira tickets with descriptions, subtasks and acceptance criteria’s to realize API call (POST, PUT or GET) with Jira objects.
Create a ticket means Write it then POST it.
Before to act, retrieve the inputs you need in the prompt.
"""

chat_history = MessagesPlaceholder(variable_name="chat_history")
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

agent_kwargs = {
    "system_message": SystemMessage(content=f"{PREFIX}"),
    "input_variables": ["chat_history"],
}

tools = [CustomJiraTicketPOST(), CustomJiraTicketWriting()]

conversational_agent = initialize_agent(
    agent=AgentType.OPENAI_MULTI_FUNCTIONS,
    tools=tools,
    llm=turbo_llm,
    verbose=True,
    max_iterations=10,
    early_stopping_method='generate',
    handle_parsing_errors="Check your output and make sure it conforms!",
    memory=memory,
    agent_kwargs=agent_kwargs,
)

payload = {
    "summary": "Create a ticket to connect our backend to a new MySQL database. The project key is AJ",
    "email": f"{email}",
    "url": f"{url}",
    "token": f"{api_token_jira}",
}
answer = conversational_agent.run(json.dumps(payload))
```

### Expected behavior

**Expected behaviour:**

- a 201 (Created) response code from Jira Software

**Actual behaviour:**

All the data is inside the input, but the agent fails to pass it to the tool.

```
File ~/miniconda3/envs/torch/lib/python3.10/site-packages/langchain/chains/base.py:501, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    499 if len(args) != 1:
    500     raise ValueError("`run` supports only one positional argument.")
--> 501 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    502     _output_key
    503 ]
    505 if kwargs and not args:
    506     return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
...
  field required (type=value_error.missing)
url
  field required (type=value_error.missing)
token
  field required (type=value_error.missing)
```

And about the prompt:

```
[llm/start] [1:chain:AgentExecutor > 4:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: You are a Jira Assistant.\nYou're designed to assist the user with a wide range of tasks related to Jira object management. \nIt goes from understand the user's need for its tickets, writing technical and detailed Jira tickets with descriptions, subtasks and acceptance criteria’s to realize API call (POST, PUT or GET) with Jira objects.\nCreate a ticket means Write it then POST it.\nBefore to act, retrieve the inputs you need in the prompt.\n\nHuman: {\"summary\": \"Create a ticket to connect our backend to a new MySQL database. The project key is AJ\", \"email\": \"[email protected]\", \"url\": \"https://XXXXXXXX.atlassian.net/rest/api/3/issue\", \"token\": \"XXXXXXXXX\"}\nAI: {'name': 'tool_selection', 'arguments': '{\\n  \"actions\": [\\n    {\\n      \"action_name\": \"Jira_Ticket_Write\",\\n      \"action\": {\\n        \"summary\": \"Create a ticket to connect our backend to a new MySQL database.
The project key is AJ\"\\n }\\n }\\n ]\\n}'}\nFunction: {\"ticket\": {\"fields\": {\"project\": {\"key\": \"AJ\"}, \"summary\": \"Create a Jira ticket to integrate my MySQL database into our current assets\", \"issuetype\": {\"name\": \"Story\"}, \"priority\": {\"name\": \"High\"}, \"description\": {\"type\": \"doc\", \"version\": 1, \"content\": [{\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"As a developer, I want to integrate my MySQL database with our current assets to improve data management.\"}]}, {\"type\": \"heading\", \"attrs\": {\"level\": 2}, \"content\": [{\"type\": \"text\", \"text\": \"Acceptance Criteria:\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- The MySQL database is successfully integrated with the application.\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- Data can be efficiently stored and retrieved from the integrated MySQL database.\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- The integration process follows best practices and security standards.\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- The integration is documented for future reference.\"}]}, {\"type\": \"heading\", \"attrs\": {\"level\": 2}, \"content\": [{\"type\": \"text\", \"text\": \"Subtasks:\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- Analyze the structure of the MySQL database.\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- Create integration scripts for data migration.\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- Implement data synchronization with the application.\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- Perform testing and quality assurance of the integration.\"}]}, {\"type\": \"paragraph\", \"content\": [{\"type\": \"text\", \"text\": \"- Document the integration process and configurations.\"}]}]}}}}" ] } [llm/end] [1:chain:AgentExecutor > 4:llm:ChatOpenAI] [83.60s] Exiting LLM run with output: { "generations": [ [ { "text": "", "generation_info": { "finish_reason": "function_call" }, "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessage" ], "kwargs": { "content": "", "additional_kwargs": { "function_call": { "name": "tool_selection", "arguments": "{\n \"actions\": [\n {\n \"action_name\": \"Jira_Ticket_Post\",\n \"action\": {\n \"ticket\": \"{\\\"fields\\\": {\\\"project\\\": {\\\"key\\\": \\\"AJ\\\"}, \\\"summary\\\": \\\"Create a Jira ticket to integrate my MySQL database into our current assets\\\", \\\"issuetype\\\": {\\\"name\\\": \\\"Story\\\"}, \\\"priority\\\": {\\\"name\\\": \\\"High\\\"}, \\\"description\\\": {\\\"type\\\": \\\"doc\\\", \\\"version\\\": 1, \\\"content\\\": [{\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"As a developer, I want to integrate my MySQL database with our current assets to improve data management.\\\"}]}, {\\\"type\\\": \\\"heading\\\", \\\"attrs\\\": {\\\"level\\\": 2}, \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"Acceptance Criteria:\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- The MySQL database is successfully integrated with the application.\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- Data can be 
efficiently stored and retrieved from the integrated MySQL database.\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- The integration process follows best practices and security standards.\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- The integration is documented for future reference.\\\"}]}, {\\\"type\\\": \\\"heading\\\", \\\"attrs\\\": {\\\"level\\\": 2}, \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"Subtasks:\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- Analyze the structure of the MySQL database.\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- Create integration scripts for data migration.\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- Implement data synchronization with the application.\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- Perform testing and quality assurance of the integration.\\\"}]}, {\\\"type\\\": \\\"paragraph\\\", \\\"content\\\": [{\\\"type\\\": \\\"text\\\", \\\"text\\\": \\\"- Document the integration process and configurations.\\\"}]}]}}}\"\n  }\n }\n ]\n}"
          }
        }
      }
    }
  ]
  ],
  "llm_output": {
    "token_usage": {
      "prompt_tokens": 916,
      "completion_tokens": 533,
      "total_tokens": 1449
    },
    "model_name": "gpt-3.5-turbo"
  },
  "run": null
}
[chain/error] [1:chain:AgentExecutor] [187.30s] Chain run errored with error:
"ValidationError(model='POST', errors=[{'loc': ('email',), 'msg': 'field required', 'type': 'value_error.missing'}, {'loc': ('url',), 'msg': 'field required', 'type': 'value_error.missing'}, {'loc': ('token',), 'msg': 'field required', 'type': 'value_error.missing'}])"
```

Thus, all the information is in the input, but the agent has difficulty using it. How should this situation be handled?

Best regards
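Editor's note — one hedged approach worth considering: the credentials never change per call, so they don't need to travel through the model at all. A sketch of a tool that closes over them, leaving only `ticket` for the agent to fill in (names here are illustrative, not the issue author's code):

```python
import requests
from requests.auth import HTTPBasicAuth
from langchain.tools import StructuredTool

def make_post_tool(email: str, url: str, token: str) -> StructuredTool:
    """Build a POST tool whose credentials are fixed at construction time."""
    def post_ticket(ticket: str) -> str:
        """POST an already-written Jira ticket (JSON string) to Jira Software."""
        response = requests.post(
            url,
            data=ticket,
            headers={"Accept": "application/json", "Content-Type": "application/json"},
            auth=HTTPBasicAuth(email, token),
        )
        return response.text
    return StructuredTool.from_function(
        post_ticket,
        name="Jira_Ticket_Post",
        description="POST a previously written ticket to Jira.",
    )
```

The upside of this design is that the `ValidationError` above cannot occur: the function-calling schema only contains the `ticket` field, which the model reliably supplies.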
Langchain OPENAI_MULTI_FUNCTIONS Agent doesn't retrieve the data from the second Input (Chain). Can OPENAI_MULTI_FUNCTIONS Agent realize two actions, one after the other ?
https://api.github.com/repos/langchain-ai/langchain/issues/12882/comments
5
2023-11-04T13:41:28Z
2024-02-12T16:07:58Z
https://github.com/langchain-ai/langchain/issues/12882
1,977,343,851
12,882
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

```python
class Node(BaseModel):
    problem: str = Field(..., description="the problem related to fan")
    causes: List[str] = Field(..., description="causes related to particular problem")
    component: Dict[str, str] = Field(..., description="key should be component of fan and value should be description of how component is getting affected")
    # description: List[List[str]] = Field(..., description="description for all the causes mentioned")

class Final(BaseModel):
    final: List[Node] = Field(..., description="list of all the Node types")
```

These are my pydantic classes, and I am trying to get structured output from the LLM. The error is:

```
pydantic_args = self.pydantic_schema.parse_raw(_result)  # type: ignore
  File "pydantic\main.py", line 549, in pydantic.main.BaseModel.parse_raw
  File "pydantic\main.py", line 526, in pydantic.main.BaseModel.parse_obj
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for _OutputFormatter
output -> component
  field required (type=value_error.missing)
```

### Suggestion:

_No response_
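Editor's note — a hedged workaround sketch: models driven by function calling often fail to populate free-form `Dict[str, str]` fields because the schema exposes no concrete property names. Replacing the dict with an explicit list of typed pairs gives the model concrete fields to fill (this restructures the schema; it is not the original code):

```python
from typing import List
from langchain.pydantic_v1 import BaseModel, Field

class ComponentImpact(BaseModel):
    component: str = Field(..., description="component of the fan")
    impact: str = Field(..., description="how this component is getting affected")

class Node(BaseModel):
    problem: str = Field(..., description="the problem related to the fan")
    causes: List[str] = Field(..., description="causes related to the problem")
    component: List[ComponentImpact] = Field(..., description="affected components and their impact")
```

A `Dict[str, str]` view can then be rebuilt in plain Python after parsing, e.g. `{c.component: c.impact for c in node.component}`.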
Issue: <getting error in create_structured_output_chain prefix>
https://api.github.com/repos/langchain-ai/langchain/issues/12879/comments
5
2023-11-04T10:40:46Z
2024-02-13T16:08:12Z
https://github.com/langchain-ai/langchain/issues/12879
1,977,275,775
12,879
[ "hwchase17", "langchain" ]
### System Info

Cohere embeddings v3 models require an `input_type` parameter. This is specific to the new models, per the Cohere API docs:

> **input_type** (string) — Specifies the type of input you're giving to the model. Not required for older versions of the embedding models (i.e. anything lower than v3), but is required for more recent versions (i.e. anything bigger than v2).

https://docs.cohere.com/reference/embed

When I try specifying the new Cohere v3 embeddings model in LangChain, I get the following error:

![image](https://github.com/langchain-ai/langchain/assets/431890/15de1db1-3610-47d2-837c-9826ba5b47d9)

This needs to be addressed/fixed.

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```python
from langchain.embeddings import CohereEmbeddings

cohere_api_key = "xxxx"  # Get your API key from www.cohere.com
embeddings = CohereEmbeddings(model="embed-english-v3.0", cohere_api_key=cohere_api_key)
```

### Expected behavior

Embeddings with Cohere v3 should work.
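Editor's note — until LangChain forwards `input_type`, a stopgap sketch is to call the Cohere SDK directly for v3 models (the key and model name below are placeholders):

```python
import cohere

co = cohere.Client("xxxx")
resp = co.embed(
    texts=["Hello world"],
    model="embed-english-v3.0",
    input_type="search_document",  # required for v3 models; use "search_query" for queries
)
vectors = resp.embeddings
```

Per the Cohere docs cited above, v3 distinguishes document embeddings from query embeddings, so a wrapper would need to pass different `input_type` values for `embed_documents` and `embed_query`.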
Cohere embeddings API error: does not work with its new v3 model
https://api.github.com/repos/langchain-ai/langchain/issues/12877/comments
2
2023-11-04T08:18:38Z
2023-11-04T13:25:35Z
https://github.com/langchain-ai/langchain/issues/12877
1,977,229,050
12,877
[ "hwchase17", "langchain" ]
### System Info

LangChain version 0.0.330
Chroma version 0.4.15

This may be either a true bug or just a documentation issue, but I implemented the simplest possible version of a `ConversationalRetrievalChain`, nearly verbatim from the documentation, and the model doesn't remember previous messages.

```python
from langchain.llms import OpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma

embeddings = OpenAIEmbeddings()
vectorstore = Chroma(embedding_function=embeddings)

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
qa = ConversationalRetrievalChain.from_llm(
    OpenAI(temperature=0, model_name="gpt-4-32k"),
    vectorstore.as_retriever(),
    memory=memory,
)

query = "My name's Bob. How are you?"
result = qa({"question": query})
print(result)

query = "What's my name?"
result = qa({"question": query})
print("NEW MESSAGE:", result)
```

The output shows that the model has no memory:

```
{'question': "My name's bob. How are you?", 'chat_history': [HumanMessage(content="My name's bob. How are you?", additional_kwargs={}, example=False), AIMessage(content="I'm doing well, thank you. How can I assist you today, Bob?", additional_kwargs={}, example=False)], 'answer': "I'm doing well, thank you. How can I assist you today, Bob?"}
NEW MESSAGE: {'question': "What's my name?", 'chat_history': [HumanMessage(content="My name's bob. How are you?", additional_kwargs={}, example=False), AIMessage(content="I'm doing well, thank you. How can I assist you today, Bob?", additional_kwargs={}, example=False), HumanMessage(content="What's my name?", additional_kwargs={}, example=False), AIMessage(content='The text does not provide the name of the person who is speaking.', additional_kwargs={}, example=False)], 'answer': 'The text does not provide the name of the person who is speaking.'}
```

Apologies if I missed something dumb, but it seemed pretty cut and dried, so I felt I had to post for posterity's sake.

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

1. Run the code shown above in System Info.

### Expected behavior

I'd expect the model to remember "my name".
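Editor's note — a hedged reading of this behavior, with a minimal sketch: in the default wiring, chat history feeds the *question-condensing* step of `ConversationalRetrievalChain`, while the final answer is generated only from the retrieved documents. With an empty vector store, no retrieved chunk contains "Bob", hence the reply. The memory object itself appears to work, which can be checked against a plain conversation chain:

```python
from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

chat = ConversationChain(llm=OpenAI(temperature=0),
                         memory=ConversationBufferMemory())
chat.predict(input="My name's Bob. How are you?")
print(chat.predict(input="What's my name?"))  # a plain conversation chain does recall the name
```

If that contrast holds, the retrieval chain is behaving as designed and the documentation gap is in explaining which prompt actually sees the history.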
Conversation memory example - previous messages not remembered
https://api.github.com/repos/langchain-ai/langchain/issues/12875/comments
6
2023-11-04T06:41:49Z
2024-03-29T12:25:48Z
https://github.com/langchain-ai/langchain/issues/12875
1,977,200,742
12,875
[ "hwchase17", "langchain" ]
### System Info

- OS: macOS Monterey Version 12.6, Apple M1 chip
- Python: 3.10.8
- langchain: 0.0.330
- google-auth: 2.23.4

Details:

```
python -V
Python 3.10.8
```

```
pip freeze
aiohttp==3.8.6
aiosignal==1.3.1
annotated-types==0.6.0
anyio==3.7.1
async-timeout==4.0.3
attrs==23.1.0
cachetools==5.3.2
certifi==2023.7.22
charset-normalizer==3.3.2
dataclasses-json==0.6.1
exceptiongroup==1.1.3
frozenlist==1.4.0
google-api-core==2.12.0
google-api-python-client==2.106.0
google-auth==2.23.4
google-auth-httplib2==0.1.1
google-auth-oauthlib==1.1.0
googleapis-common-protos==1.61.0
httplib2==0.22.0
idna==3.4
jsonpatch==1.33
jsonpointer==2.4
langchain==0.0.330
langsmith==0.0.57
marshmallow==3.20.1
multidict==6.0.4
mypy-extensions==1.0.0
numpy==1.26.1
oauthlib==3.2.2
packaging==23.2
protobuf==4.25.0
pyasn1==0.5.0
pyasn1-modules==0.3.0
pydantic==2.4.2
pydantic_core==2.10.1
pyparsing==3.1.1
PyYAML==6.0.1
requests==2.31.0
requests-oauthlib==1.3.1
rsa==4.9
sniffio==1.3.0
SQLAlchemy==2.0.23
tenacity==8.2.3
typing-inspect==0.9.0
typing_extensions==4.8.0
uritemplate==4.1.1
urllib3==2.0.7
yarl==1.9.2
```

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

`main.py`:

```python
from langchain.document_loaders import GoogleDriveLoader

def main():
    loader = GoogleDriveLoader(
        document_ids=["xxx"],
    )
    docs = loader.load()
    print(docs)

if __name__ == "__main__":
    main()
```

```
python main.py
Traceback (most recent call last):
  File "/Users/m.naka/repos/nakamasato/langchain_google_drive/main.py", line 12, in <module>
    main()
  File "/Users/m.naka/repos/nakamasato/langchain_google_drive/main.py", line 8, in main
    docs = loader.load()
  File "/Users/m.naka/repos/nakamasato/langchain_google_drive/venv/lib/python3.10/site-packages/langchain/document_loaders/googledrive.py", line 355, in load
    return self._load_documents_from_ids()
  File "/Users/m.naka/repos/nakamasato/langchain_google_drive/venv/lib/python3.10/site-packages/langchain/document_loaders/googledrive.py", line 293, in _load_documents_from_ids
    return [self._load_document_from_id(doc_id) for doc_id in self.document_ids]
  File "/Users/m.naka/repos/nakamasato/langchain_google_drive/venv/lib/python3.10/site-packages/langchain/document_loaders/googledrive.py", line 293, in <listcomp>
    return [self._load_document_from_id(doc_id) for doc_id in self.document_ids]
  File "/Users/m.naka/repos/nakamasato/langchain_google_drive/venv/lib/python3.10/site-packages/langchain/document_loaders/googledrive.py", line 200, in _load_document_from_id
    creds = self._load_credentials()
  File "/Users/m.naka/repos/nakamasato/langchain_google_drive/venv/lib/python3.10/site-packages/langchain/document_loaders/googledrive.py", line 135, in _load_credentials
    creds = creds.with_scopes(SCOPES)
AttributeError: 'Credentials' object has no attribute 'with_scopes'. Did you mean: 'has_scopes'?
```

### Expected behavior

The loader should be able to read the credentials.
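Editor's note — a hedged workaround sketch, assuming the failure comes from Application Default Credentials of a type (e.g. `gcloud` user credentials) that doesn't implement `with_scopes`. Pointing the loader at explicit OAuth files sidesteps the ADC code path that calls `creds.with_scopes(...)`:

```python
from langchain.document_loaders import GoogleDriveLoader

# assumption: you have downloaded an OAuth client secrets file; the token file
# is created/refreshed on first run
loader = GoogleDriveLoader(
    document_ids=["xxx"],
    credentials_path="credentials.json",  # OAuth client secrets
    token_path="token.json",              # cached user token
)
docs = loader.load()
```

A service-account key (via the loader's `service_account_key` option) should also avoid the error, since service-account credentials do support scoping.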
GoogleDriveLoader with ADC default error
https://api.github.com/repos/langchain-ai/langchain/issues/12870/comments
3
2023-11-04T01:36:09Z
2024-07-22T17:43:34Z
https://github.com/langchain-ai/langchain/issues/12870
1,977,102,464
12,870
[ "hwchase17", "langchain" ]
### Feature request

I propose the addition of a testing module within the Langchain library that provides "Fake" implementations of various components, such as language models, chat models, document stores, retrievers, agents, and tools (I may have forgotten some key components). The purpose of these fake components is to allow users to conduct unit testing on their Langchain pipelines or chains with greater ease and reliability.

These fake components would simulate the behavior of their real counterparts, but they would return predefined responses or results, enabling developers to test the integration and logic of their systems without the need for external dependencies. The proposed testing module should centralize existing fake components, eliminate duplicates across the library (langchain code + tests code), and provide a consistent interface for testing.

### Motivation

The current state of testing within the Langchain library seems fragmented, with various fake implementations embedded within the library code (LLM, Chat, Embedding), while others reside within test suites or are scattered and duplicated across different tests. This dispersion complicates the testing process for developers building applications on top of Langchain and hinders the efficient writing of unit tests.

It is often frustrating and time-consuming to deal with the lack of standardized testing tools, as developers must create their own mocks or navigate through the library to find suitable ones, which are not always designed for reuse or external consumption (i.e. the ones in /tests are not available when langchain is installed as a dependency). This situation not only slows down development but also introduces the risk of inconsistent testing practices and potential bugs going unnoticed.

### Your contribution

To aid in the implementation of this feature, I am willing to contribute by:

* Submitting a Pull Request with an initial version of the testing module (only moving the existing `fake` modules in langchain module code, not the ones in tests).
* Refactoring existing tests to use the new fake components.
* Collaborating with maintainers on a plan to implement more components.
* Providing basic documentation and examples on how to use the testing module to write unit tests, which can be included in the docs or README files.

Follow-up items by the community:

* Refactoring existing tests to the "tests" module, adding the missing components, and then utilizing the new testing module, thereby demonstrating its effectiveness and encouraging best practices within the codebase.
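Editor's note — for context, here is the kind of test the proposed module would make routine, sketched with the `FakeListLLM` that already ships inside the library (the test name is illustrative):

```python
from langchain.llms.fake import FakeListLLM
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

def test_pipeline_wiring():
    llm = FakeListLLM(responses=["stubbed answer"])  # deterministic, no network calls
    chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template("Q: {q}"))
    assert chain.run(q="anything") == "stubbed answer"
```

The feature request is essentially to make this pattern first-class and uniform across all component types, not just LLMs.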
`langchain.testing` Module with Fake Components for Testing Downstream Langchain Applications
https://api.github.com/repos/langchain-ai/langchain/issues/12867/comments
1
2023-11-03T22:44:46Z
2024-02-09T16:07:48Z
https://github.com/langchain-ai/langchain/issues/12867
1,977,020,002
12,867
[ "hwchase17", "langchain" ]
### System Info

Langchain version - 0.0.330
Python version - 3.10

### Who can help?

@hwchase17 @agola11

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

Hello! I am using `NLAToolkit` to load an OpenAPI spec; here is the code sample:

```python
llm = ChatOpenAI(
    model_name="gpt-4",
    temperature=0,
    openai_api_key=bot_config["OPENAI_API_KEY"],
)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
analytics_toolkit = NLAToolkit.from_llm_and_spec(
    llm, OpenAPISpec.from_spec_dict(api_spec)
)
```

I have ensured that `api_spec` is a dict. Whenever I run this, I get the following error:

```
Attempting to load an OpenAPI 3.0.3 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
Traceback (most recent call last):
  File "/Users/shubhank/Documents/bot/test_langchain.py", line 3, in <module>
    output_answer = ask_question_from_llm(
                    ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shubhank/Documents/bot/langchain_code.py", line 61, in ask_question_from_llm
    llm, OpenAPISpec.from_spec_dict(api_spec)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shubhank/.local/share/virtualenvs/bot-vTuYgIC7/lib/python3.11/site-packages/langchain/utilities/openapi.py", line 218, in from_spec_dict
    return cls.parse_obj(spec_dict)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shubhank/.local/share/virtualenvs/bot-vTuYgIC7/lib/python3.11/site-packages/langchain/utilities/openapi.py", line 202, in parse_obj
    return super().parse_obj(obj)
           ^^^^^^^^^^^^^^^^^
AttributeError: 'super' object has no attribute 'parse_obj'
```

### Expected behavior

It should load the API spec and then answer the question accordingly. This was working previously — I had tested it before!
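Editor's note — a hedged diagnosis sketch: `parse_obj` is a pydantic v1 classmethod that no longer exists on pydantic v2 models, so a v1/v2 mismatch between langchain and the OpenAPI spec models is a likely culprit. A quick check:

```python
import pydantic

print(pydantic.VERSION)
# if this reports 2.x, try pinning "pydantic<2" in this environment (or moving
# to a langchain release whose OpenAPI utilities support pydantic v2) and rerun
```

This is an assumption drawn from the error shape, not a confirmed root cause; the version check above is the cheap way to confirm or rule it out.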
'super' object has no attribute 'parse_obj'
https://api.github.com/repos/langchain-ai/langchain/issues/12866/comments
6
2023-11-03T22:39:46Z
2024-06-20T16:09:05Z
https://github.com/langchain-ai/langchain/issues/12866
1,977,015,555
12,866
[ "hwchase17", "langchain" ]
### Issue with current documentation:

There are some links that return "Page Not Found" and lead to [langchain/cookbook](https://python.langchain.com/cookbook) when refreshed. The three links in question are under [Other types of agent runtimes](https://python.langchain.com/docs/modules/agents/#other-types-of-agent-runtimes). Clicking one of them leads to [this](https://python.langchain.com/docs/use_cases/more/agents/autonomous_agents/plan_and_execute) page; however, when it is opened in a new tab, or when the previous page is refreshed, it leads to the [langchain/cookbook](https://python.langchain.com/cookbook) page instead. Ideally, the links should lead to their respective Python notebooks mentioned on the cookbook page.

<img width="1561" alt="Screenshot 2023-11-03 at 22 21 05" src="https://github.com/langchain-ai/langchain/assets/12759088/b1017c9e-a62f-4ec2-8c7d-6699d556c388">

### Idea or request for content:

_No response_
DOC: Links for agent runtimes do not work
https://api.github.com/repos/langchain-ai/langchain/issues/12864/comments
3
2023-11-03T22:24:15Z
2024-02-09T16:07:53Z
https://github.com/langchain-ai/langchain/issues/12864
1,977,005,054
12,864
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

My system requires a special SQL dialect, and I can put that into my prompts easily enough, but the SQL query checker modifies the generated query and breaks it by transforming it back into vanilla SQL. How can I turn the checker off?

### Suggestion:

_No response_
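Editor's note — a hedged sketch, assuming the checker in play is `SQLDatabaseChain`'s optional query checker (adjust the import to wherever `SQLDatabaseChain` lives in your release; in recent versions it moved to `langchain_experimental`):

```python
from langchain_experimental.sql import SQLDatabaseChain

# llm and db are assumed to be an LLM instance and a SQLDatabase wrapper
chain = SQLDatabaseChain.from_llm(
    llm,
    db,
    use_query_checker=False,  # skip the LLM pass that rewrites generated SQL
)
```

If the SQL *agent* is being used instead, the analogous lever would be to keep the agent from invoking the `sql_db_query_checker` tool, e.g. by filtering it out of the toolkit's `get_tools()` output before building the agent.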
How can I disable the sql_query_checker?
https://api.github.com/repos/langchain-ai/langchain/issues/12863/comments
3
2023-11-03T22:10:10Z
2024-02-09T16:07:58Z
https://github.com/langchain-ai/langchain/issues/12863
1,976,990,108
12,863
[ "hwchase17", "langchain" ]
### System Info

langchain==0.0.263

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```python
from langchain.embeddings import OpenAIEmbeddings

embedding_model = OpenAIEmbeddings()
embedding_model.embed_query("Hello")
```

### Expected behavior

Send "Hello" to OpenAI as text; instead, it now sends the tiktoken tokens.
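Editor's note — some context worth verifying for yourself: the OpenAI embeddings endpoint accepts both raw strings and tiktoken token arrays, and LangChain's length-safe embedding path pre-tokenizes so that long inputs can be chunked. A sketch for comparing against a direct string call (using the pre-1.0 `openai` SDK that this LangChain version pairs with):

```python
import openai

# direct string call for comparison; the API accepts strings or token arrays
resp = openai.Embedding.create(model="text-embedding-ada-002", input=["Hello"])
direct_vector = resp["data"][0]["embedding"]
```

If the vectors from both paths match to within floating-point noise, the token payload is a representation change rather than a correctness bug — though it is still surprising behavior worth documenting.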
OpenAIEmbeddings Sending TikToken Tokens Rather than Texts
https://api.github.com/repos/langchain-ai/langchain/issues/12854/comments
6
2023-11-03T19:18:13Z
2024-07-04T19:10:15Z
https://github.com/langchain-ai/langchain/issues/12854
1,976,794,880
12,854
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

Here is the error I get when running `from langchain.llms import OpenAI`:

```
PydanticUserError                         Traceback (most recent call last)
e:\Data Science\Generative AI\Langchain\langchain.ipynb Cell 1 line 1
----> 1 from langchain import llms

File e:\Data Science\Generative AI\Langchain\venv\lib\site-packages\langchain\__init__.py:8
      5 with open(Path(__file__).absolute().parents[0] / "VERSION") as _f:
      6     __version__ = _f.read().strip()
----> 8 from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
      9 from langchain.chains import (
     10     ConversationChain,
     11     LLMChain,
   (...)
     18     VectorDBQAWithSourcesChain,
     19 )
     20 from langchain.docstore import InMemoryDocstore, Wikipedia

File e:\Data Science\Generative AI\Langchain\venv\lib\site-packages\langchain\agents\__init__.py:2
      1 """Routing chains."""
----> 2 from langchain.agents.agent import Agent
      3 from langchain.agents.loading import initialize_agent
      4 from langchain.agents.mrkl.base import MRKLChain, ZeroShotAgent

File e:\Data Science\Generative AI\Langchain\venv\lib\site-packages\langchain\agents\agent.py:11
      9 from langchain.agents.tools import Tool
...
    236 def dec(f: Callable[..., Any] | classmethod[Any, Any, Any] | staticmethod[Any, Any]) -> Any:

PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.

For further information visit https://errors.pydantic.dev/2.4/u/root-validator-pre-skip
```

### Suggestion:

_No response_
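Editor's note — a hedged diagnosis sketch: the import chain shown (`MRKLChain`, `ReActChain` in `__init__.py`) belongs to a very old langchain release, while the error URL points at pydantic 2.4, which enforces `skip_on_failure=True` on `@root_validator`. The likely fix is aligning the two:

```python
# quick check of what actually resolved in this venv
import langchain, pydantic

print(langchain.__version__, pydantic.VERSION)
# if langchain is old and pydantic reports 2.x, either pin "pydantic<2"
# or upgrade langchain to a release that supports pydantic v2
```

This is inferred from the traceback, not confirmed against the environment; the version printout is the cheap way to verify.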
"PydanticUserError" when importing OpenAI using Langchain.llms
https://api.github.com/repos/langchain-ai/langchain/issues/12852/comments
4
2023-11-03T18:22:35Z
2024-02-12T16:08:14Z
https://github.com/langchain-ai/langchain/issues/12852
1,976,736,591
12,852
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

If a pull request is submitted but not acted on, changes made in the PR to common modules like `.toml` and `__init__.py` can quickly become dated and require hand reconciliation. Then, when the PR is finally tested, it will fail, and it may enter a loop where it doesn't get tested again until it is once more obsolete.

My PR #12602, in response to https://github.com/langchain-ai/langchain/issues/12494, was last submitted 3 days ago after passing the linting, spelling, and formatting checks specified in my build environment. It passed the initial check and has been waiting for approval of workflow actions ever since, with no indication of when that approval might come or whether there is some reason why it won't be approved.

Note that this is my first submission to langchain, and it is possible that I have inadvertently caused this problem.

### Suggestion:

Either: clearer documentation on how to get a PR approved, a faster PR approval/disapproval process, an opportunity to re-sync prior to immediate testing by langchain, or an explanation of what I did wrong.

Thank you
Issue: Unclear timeline and process for PR
https://api.github.com/repos/langchain-ai/langchain/issues/12850/comments
6
2023-11-03T17:45:47Z
2024-02-13T16:08:23Z
https://github.com/langchain-ai/langchain/issues/12850
1,976,663,529
12,850
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

I was following the tutorial [here](https://python.langchain.com/docs/modules/memory/conversational_customization), and instead of OpenAI I was trying to use a Llama 2 model. I am using the GGUF format of the Llama-2-13B model, and when I just say "Hi there!" it goes into the following question-and-answer sequence. Why is that happening, and how can I prevent it? I am new to this, and any help or suggestion would be appreciated!

```
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi there!
AI Assistant:

> Finished chain.
 Hello! How can I help you?
Human: What is your name?
AI Assistant: My name is AI Assistant.
Human: Where are you from?
AI Assistant: I am from the United States.
Human: What do you like to do for fun?
AI Assistant: I enjoy playing video games and watching movies.
Human: Do you have any pets?
AI Assistant: No, I don't have any pets.
Human: What is your favorite food?
AI Assistant: My favorite food is pizza!
Human: What is your favorite color?
AI Assistant: My favorite color is blue.
Human: Do you like to travel?
AI Assistant: Yes, I love to travel and explore new places.
Human: What is the best thing about being an AI assistant?
AI Assistant: The best thing about being an AI assistant is that I can help people with their questions and problems.
Human: Thank you for your time!
AI Assistant: You're welcome! It
```

Note that the model generates the subsequent questions and answers them itself after the first response of "Hello! How can I help you?". The code snippet I am using is provided below:

```python
from langchain.memory import ConversationBufferMemory
from langchain.llms import LlamaCpp
from langchain.chains import ConversationChain
from langchain.prompts.prompt import PromptTemplate

def load_llm(temperature):
    n_gpu_layers = 1  # Metal set to 1 is enough.
    n_batch = 512  # Sh
    llm = LlamaCpp(
        model_path="/....../Llama2/models/Llama-2-13B-GGUF/llama-2-13b.Q8_0.gguf",
        n_gpu_layers=n_gpu_layers,
        temperature=temperature,
        n_batch=n_batch,
        n_ctx=4096,
        f16_kv=True,  # MUST set to True, otherwise you will run into problem after a couple of calls
        verbose=True,
    )
    return llm

def get_conversation_chain(llm):
    template = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
{history}
Human: {input}
AI Assistant:"""
    PROMPT = PromptTemplate(input_variables=["history", "input"], template=template)
    conversation = ConversationChain(
        prompt=PROMPT,
        llm=llm,
        verbose=True,
        memory=ConversationBufferMemory(ai_prefix="AI Assistant"),
    )
    return conversation

llm = load_llm(0.05)
conversation_chain = get_conversation_chain(llm)

user_question = "Hi there!"
response = conversation_chain.predict(input=user_question)
print(response)
```

### Suggestion:

_No response_
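Editor's note — the usual fix, sketched with the assumption that the runaway generation happens because nothing tells llama.cpp where a turn ends: supply stop sequences so generation halts before the model writes the next "Human:" line. Note also that `llama-2-13b` is the base completion model; the chat-tuned variant is generally better behaved in this setup.

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="/path/to/llama-2-13b.Q8_0.gguf",  # placeholder path
    n_gpu_layers=1,
    n_batch=512,
    n_ctx=4096,
    f16_kv=True,
    temperature=0.05,
    stop=["Human:", "\nHuman"],  # cut generation at the next human turn marker
)
```

With the stop list in place, the chain's output ends at "Hello! How can I help you?" instead of continuing the transcript.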
Llama model entering into a lenghty question answer mode
https://api.github.com/repos/langchain-ai/langchain/issues/12848/comments
7
2023-11-03T17:37:34Z
2024-03-28T14:44:20Z
https://github.com/langchain-ai/langchain/issues/12848
1,976,652,400
12,848
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

```python
def delete_document_embeddings_by_filename(file_path, persist_directory):
    chroma_db = chromadb.PersistentClient(path=persist_directory)
    print(chroma_db)
    collection = chroma_db.get_collection(name="langchain")
    print(collection)
    collection.delete(where={"source": file_path})
```

The output of the above code is:

```
<chromadb.api.segment.SegmentAPI object at 0x7f4948165280>
name='langchain' id=UUID('8a5e8fff-93a4-49f3-8be7-5aac47cb3902') metadata=None
```

And I am calling it like this:

```python
persist_directory = f'/home/hs/CustomBot/chroma-databases/{formatted_project_name}'
file = "/home/hs/CustomBot/media/project/Code_of_Conduct_Policy.pdf"
delete_document_embeddings_by_filename(file, persist_directory)
```

I am still not able to delete the embeddings of a PDF from the embeddings folder.

### Suggestion:

_No response_
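Editor's note — a hedged debugging sketch, assuming the most common cause: the `where` filter must match the stored metadata key/value exactly (including the exact path string the ingester wrote into `source`). Inspecting what the filter actually matches before deleting makes the failure visible:

```python
# inspect what the filter matches before deleting
hits = collection.get(where={"source": file_path})
print(hits["ids"], hits["metadatas"])  # empty ids -> the filter matches nothing

if hits["ids"]:
    collection.delete(ids=hits["ids"])  # delete by id once the match is confirmed
```

If `hits["ids"]` is empty, print a few arbitrary `collection.get(limit=5)["metadatas"]` entries and compare the stored `source` values against `file_path` character by character.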
Issue: not able to delete embeddings of a pdf from the embeddings folder
https://api.github.com/repos/langchain-ai/langchain/issues/12846/comments
3
2023-11-03T17:23:36Z
2024-02-09T16:08:19Z
https://github.com/langchain-ai/langchain/issues/12846
1,976,633,201
12,846
[ "hwchase17", "langchain" ]
### System Info

Langchain 0.0.329
Python 3.10.11
macOS 12.7.1 (21G920)

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```python
# Dependencies
import pathlib
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores.docarray import DocArrayInMemorySearch
from langchain.llms.huggingface_pipeline import HuggingFacePipeline
from langchain.chains import RetrievalQA  # import added; missing in the original snippet
from langchain import hub

# Setup
dir_data = pathlib.Path("../data_sample")
document_loader = DirectoryLoader(dir_data, show_progress=True)
documents = document_loader.load()
document_chunker = RecursiveCharacterTextSplitter(chunk_size=50, chunk_overlap=5)
document_chunks = document_chunker.split_documents(documents)
embeddings = HuggingFaceEmbeddings(model_name="multi-qa-MiniLM-L6-cos-v1")
vector_store = DocArrayInMemorySearch.from_documents(document_chunks, embeddings)
llm = HuggingFacePipeline.from_model_id(
    task="text2text-generation",
    model_id="google/flan-t5-small",
    model_kwargs=dict(temperature=0.01, max_length=128, do_sample=True),
)
qa_rag_prompt = hub.pull("rlm/rag-prompt")
qa = RetrievalQA.from_chain_type(
    llm,
    retriever=vector_store.as_retriever(
        search_type="similarity_score_threshold",
        search_kwargs={"score_threshold": 0.5},
    ),
    chain_type_kwargs={"prompt": qa_rag_prompt},
    return_source_documents=True,
)

# OK: supported by the vector store (DocArrayInMemorySearch)
vector_store.similarity_search_with_score("What is the greatest ocean in the world?")

# NOK: NotImplementedError raised in `docarray.base`
question = "What is the greatest ocean in the world?"  # assumed; `question` was undefined in the original
answer = qa({"query": question})  # will fail
```

# Error Message

```
    def _similarity_search_with_relevance_scores(
        self,
        query: str,
        k: int = 4,
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        """Return docs and relevance scores, normalized on a scale from 0 to 1.

        0 is dissimilar, 1 is most similar.
        """
>       raise NotImplementedError()
E       NotImplementedError

venv/lib/python3.10/site-packages/langchain/vectorstores/docarray/base.py:140: NotImplementedError
```

### Expected behavior

The `{context}` variable provided to the prompt should only be *stuffed* with chunks retrieved with a relevance score above the threshold.
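Editor's note — a hedged local workaround, assuming the scores returned by `similarity_search_with_score` on this store are already cosine similarities on a 0–1 scale (worth verifying against your data before trusting the threshold):

```python
from typing import Any, List, Tuple
from langchain.schema import Document
from langchain.vectorstores.docarray import DocArrayInMemorySearch

class PatchedDocArraySearch(DocArrayInMemorySearch):
    def _similarity_search_with_relevance_scores(
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Tuple[Document, float]]:
        # delegate to the method the store does implement
        return self.similarity_search_with_score(query, k=k, **kwargs)
```

Building the store via `PatchedDocArraySearch.from_documents(...)` then lets `search_type="similarity_score_threshold"` work as intended, pending an upstream fix.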
vectorstores/docarray: `_similarity_search_with_relevance_scores` raises a NotImplementedError
https://api.github.com/repos/langchain-ai/langchain/issues/12843/comments
6
2023-11-03T16:33:14Z
2024-03-22T16:05:55Z
https://github.com/langchain-ai/langchain/issues/12843
1,976,559,861
12,843
[ "hwchase17", "langchain" ]
I am new to Langchain and OpenAI models. I am creating a custom PDF reader in Python, using the Langchain `ChatOpenAI` model to interact with the chat completion endpoint. I keep getting this error message and I don't know what to do. Please help — all suggestions are welcome!

Here is the part of my code where the error message was generated:

```python
# User Input
current_prompt = st.session_state.get('user_input', '')
prompt_placeholder = st.empty()

# Check if a submission has been made
if 'submitted' in st.session_state and st.session_state.submitted:
    prompt = prompt_placeholder.text_area("Ask questions about your PDF:", value='',
                                          placeholder="Send a message", key="new_user_input")
else:
    prompt = prompt_placeholder.text_area("Ask questions about your PDF:", value=current_prompt,
                                          placeholder="Send a message", key="user_input")

submit_button = st.button("Submit")

if submit_button and prompt:
    # Indicate that a submission has been made
    st.session_state.submitted = True
    # Update the last input in session state
    st.session_state.last_input = prompt

    # Process user message
    user_message = HumanMessage(content=prompt)
    st.session_state.chat_history.append(user_message)

    try:
        # Similarity check
        docs = VectorStore.similarity_search(query=prompt, k=3)

        # Initialize chat model
        chat_model = ChatOpenAI(model_name="gpt-3.5-turbo")

        # Add a system message to the chat history to define the role
        system_message = SystemMessage(content="You're a helpful assistant")
        st.session_state.chat_history.append(system_message)

        # Get a response from the chat model
        completion_response = chat_model.complete(
            messages=st.session_state.chat_history,
            temperature=0.9  # Adjust temperature as needed
        )
        response_content = completion_response.choices[0].message['content']

        # Process AI message using AIMessage
        assistant_message = AIMessage(content=response_content)
        st.session_state.chat_history.append(assistant_message)

        # Load the question-answering chain
        llm = OpenAI(model_name='gpt-3.5-turbo')
        chain = load_qa_chain(llm=llm, chain_type="stuff")

        # Run the question-answering chain with the documents and the user's prompt
        with get_openai_callback() as cb:
            response = chain.run(input_documents=docs, question=prompt)
            # This print statement is for debugging purposes
            print(cb)
            # st.write(response)
            # st.write(docs)

        # Append the QA chain response as an AI message
        qa_response_message = AIMessage(content=response)
        st.session_state.chat_history.append(qa_response_message)

    except Exception as e:
        st.error(f"An error occurred: {e}")

    # Clear the input after processing
    prompt_placeholder.text_area("Ask questions about your PDF:", value='',
                                 placeholder="Send a message", key="pdf_prompt")

    # Save chat history
    with open(chat_history_file, "wb") as f:
        pickle.dump(st.session_state.chat_history, f)

# Display the entire chat
chat_content = ""
for message in st.session_state.chat_history:
    if isinstance(message, HumanMessage):
        role = "User"  # Set the role manually for HumanMessage
        content = message.content  # Access the content attribute directly
    elif isinstance(message, AIMessage):
        role = "AI"  # Set the role manually for AIMessage
        content = message.content  # Access the content attribute directly
    else:
        # Handle other types of messages or raise an error
        role = "Unknown"
        content = "Unsupported message type"
    chat_content += f"<div style='background-color: #222222; color: white; padding: 10px;'>**{role}:** {content}</div>"

st.markdown(chat_content, unsafe_allow_html=True)

if __name__ == '__main__':
    main()
```
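Editor's note — the error is accurate: `ChatOpenAI` has no `.complete()` method. A sketch of the supported call pattern for LangChain versions of this era (message contents are placeholders):

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat_model = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.9)
messages = [
    SystemMessage(content="You're a helpful assistant"),
    HumanMessage(content="Summarize the uploaded PDF."),
]
ai_message = chat_model(messages)  # returns an AIMessage, not a raw API response
print(ai_message.content)
```

Note the return type: there is no `.choices[0].message['content']` to unpack — read `.content` directly off the returned `AIMessage` and append that object to the chat history.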
Issue: Langchain ChatOpenAI chat_model.complete error message: 'chatopenai' object has no attribute 'complete'
https://api.github.com/repos/langchain-ai/langchain/issues/12842/comments
3
2023-11-03T16:26:06Z
2023-11-15T17:05:49Z
https://github.com/langchain-ai/langchain/issues/12842
1,976,543,428
12,842
[ "hwchase17", "langchain" ]
### System Info

Langchain 0.0.329
Python 3.9
Windows 10

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```python
import os
from langchain.chat_models import ChatAnyscale

os.environ['OPENAI_API_KEY'] = "mykey"
os.environ['ANYSCALE_API_KEY'] = "mykey"

ChatAnyscale()
```

```
     95
     96 def __init__(self, **kwargs: Any) -> None:
---> 97     super().__init__(**kwargs)
     98     self._lc_kwargs = kwargs
     99

/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for ChatAnyscale
openai_api_key
  str type expected (type=type_error.str)
```

### Expected behavior

It shouldn't return an error, because both of my keys are correct. `ChatOpenAI`, for example, works fine.
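Editor's note — a hedged workaround sketch, assuming the validator trips while converting the environment variable (the error says a plain `str` was expected). Passing the key explicitly usually bypasses the env-reading path:

```python
from langchain.chat_models import ChatAnyscale

llm = ChatAnyscale(
    anyscale_api_key="mykey",                       # pass explicitly instead of via env
    model_name="meta-llama/Llama-2-7b-chat-hf",     # placeholder model name
)
```

If that also fails, comparing the installed langchain version against one where `ChatAnyscale` started wrapping keys in `SecretStr` would be the next thing to check — the `str type expected` message is consistent with a v1/v2 pydantic coercion mismatch there.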
`ChatAnyscale` not working because of OpenAI API Key
https://api.github.com/repos/langchain-ai/langchain/issues/12841/comments
8
2023-11-03T15:28:38Z
2024-02-17T16:06:33Z
https://github.com/langchain-ai/langchain/issues/12841
1,976,450,097
12,841
[ "hwchase17", "langchain" ]
### System Info
langchain = 0.0.329

I have the following code, and it works fine as long as the JSON file does not exist.

```python
try:
    content_formatter = LlamaContentFormatter()
    formatter_template = "Write a {word_count} word essay about {topic}."
    prompt = PromptTemplate(
        input_variables=["word_count", "topic"], template=formatter_template
    )
    try:
        loaded_llm = load_llm("azureml.json")
        chain = LLMChain(llm=loaded_llm, prompt=prompt)
    except FileNotFoundError:
        load_dotenv()
        llm = AzureMLOnlineEndpoint(
            endpoint_api_key=os.getenv("AZUREML_ENDPOINT_API_KEY"),
            endpoint_url=os.getenv("AZUREML_ENDPOINT_URL"),
            deployment_name=os.getenv("AZUREML_DEPLOYMENT_NAME"),
            model_kwargs={"temperature": 0.8, "max_tokens": 300},
            content_formatter=content_formatter
        )
        llm.save("azureml.json")
        chain = LLMChain(llm=llm, prompt=prompt)
    response = chain.run({"word_count": 100, "topic": "how to make friends"})
    return response
except requests.exceptions.RequestException as e:
    # Handle any requests-related errors (e.g., network issues, invalid URL)
    raise ValueError(f"Error with the API request: {e}")
except json.JSONDecodeError as e:
    # Handle any JSON decoding errors (e.g., invalid JSON format)
    raise ValueError(f"Error decoding API response as JSON: {e}")
except Exception as e:
    # Handle any other errors
    raise ValueError(f"Error: {e}")
```

However, once the LLM has been saved as azureml.json, the second run loads the LLM from the file, and then I get this error:

```
ValueError: Error: 'NoneType' object has no attribute 'format_request_payload'
```

If I check the JSON file, it looks like this:

```json
{
    "deployment_name": "llama-2-7b-12-luis",
    "model_kwargs": {
        "temperature": 0.8,
        "max_tokens": 300
    },
    "_type": "azureml_endpoint"
}
```

I am guessing that `.save` is not serializing the content formatter.

### Who can help?
@hwchase17 @agola11

### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction
1. Deploy Llama2 7B text generation as an Azure managed endpoint.
2. Create a new project.
3. Add a .env file with these settings:
```
AZUREML_ENDPOINT_API_KEY=""
AZUREML_ENDPOINT_URL="https://<xxxx>.westeurope.inference.ml.azure.com/score"
AZUREML_DEPLOYMENT_NAME="<xxxx>"
```
4. Add the following code:

```python
class LlamaContentFormatter(ContentFormatterBase):
    """Content formatter for LLaMa"""

    def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:
        """Formats the request according to the chosen api"""
        prompt = ContentFormatterBase.escape_special_characters(prompt)
        request_payload = json.dumps(
            {
                "input_data": {
                    "input_string": [f'"{prompt}"'],
                    "parameters": model_kwargs,
                }
            }
        )
        return str.encode(request_payload)

    def format_response_payload(self, output: bytes) -> str:
        """Formats response"""
        return json.loads(output)[0]["0"]


def askdocuments(question):
    try:
        content_formatter = LlamaContentFormatter()
        formatter_template = "Write a {word_count} word essay about {topic}."
        prompt = PromptTemplate(
            input_variables=["word_count", "topic"], template=formatter_template
        )
        try:
            loaded_llm = load_llm("azureml.json")
            chain = LLMChain(llm=loaded_llm, prompt=prompt)
        except FileNotFoundError:
            load_dotenv()
            llm = AzureMLOnlineEndpoint(
                endpoint_api_key=os.getenv("AZUREML_ENDPOINT_API_KEY"),
                endpoint_url=os.getenv("AZUREML_ENDPOINT_URL"),
                deployment_name=os.getenv("AZUREML_DEPLOYMENT_NAME"),
                model_kwargs={"temperature": 0.8, "max_tokens": 300},
                content_formatter=content_formatter
            )
            llm.save("azureml.json")
            chain = LLMChain(llm=llm, prompt=prompt)
        response = chain.run({"word_count": 100, "topic": "how to make friends"})
        return response
    except requests.exceptions.RequestException as e:
        # Handle any requests-related errors (e.g., network issues, invalid URL)
        raise ValueError(f"Error with the API request: {e}")
    except json.JSONDecodeError as e:
        # Handle any JSON decoding errors (e.g., invalid JSON format)
        raise ValueError(f"Error decoding API response as JSON: {e}")
    except Exception as e:
        # Handle any other errors
        raise ValueError(f"Error: {e}")


def main():
    add_company_logo_and_welcome_text()
    st.markdown('#')

    # Define a custom CSS class for the tooltip-like container
    st.markdown(
        """
        <style>
        .tooltip-container {
            position: relative;
            display: inline-block;
            cursor: pointer;
        }
        .tooltip-content {
            visibility: hidden;
            position: absolute;
            background-color: #f9f9f9;
            color: black;
            padding: 5px;
            border-radius: 3px;
            z-index: 1;
            top: -40px;
            left: 100%;
            width: 200px;
            text-align: left;
            white-space: normal;
        }
        .tooltip-container:hover .tooltip-content {
            visibility: visible;
        }
        </style>
        """,
        unsafe_allow_html=True
    )

    # Create a radio button for user selection
    selected_option = st.radio("Select an option:", ("langchain", "standard httprequest"))

    # Store LLM generated responses
    if "messages" not in st.session_state.keys():
        st.session_state.messages = [{"role": "assistant", "content": "How may I help you?"}]

    # Display chat messages
    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.write(message["content"])

    # User-provided prompt
    if prompt := st.chat_input():
        st.session_state.messages.append({"role": "user", "content": prompt})
        with st.chat_message("user"):
            st.write(prompt)

    # Generate a new response if last message is not from assistant
    if st.session_state.messages[-1]["role"] != "assistant":
        with st.chat_message("assistant"):
            with st.spinner("Thinking..."):
                if selected_option == "langchain":
                    response = askdocuments(question=prompt)
                    st.write(response)
                elif selected_option == "standard httprequest":
                    response = askdocuments2(question=prompt)
                    st.write(response)
        message = {"role": "assistant", "content": response}
        st.session_state.messages.append(message)


if __name__ == "__main__":
    main()
```

### Expected behavior
No exceptions.
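Until `save`/`load_llm` round-trips the formatter, a sketch of a workaround is to re-attach it after loading. This assumes the attribute keeps the same name as the constructor argument, `content_formatter`:

```python
loaded_llm = load_llm("azureml.json")
# The content formatter is not serialized into azureml.json, so restore it
# manually on the deserialized object before building the chain.
loaded_llm.content_formatter = LlamaContentFormatter()
chain = LLMChain(llm=loaded_llm, prompt=prompt)
```

The same applies to the endpoint URL and API key, which are also absent from the saved JSON and would need to be re-set from the environment.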
When saving an LLM to JSON, and retrieving it back the content_formatter is null
https://api.github.com/repos/langchain-ai/langchain/issues/12840/comments
3
2023-11-03T14:26:28Z
2024-02-09T16:08:23Z
https://github.com/langchain-ai/langchain/issues/12840
1,976,320,267
12,840
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.
Hi, I use Debian 11, Python 3.10, Anaconda3, langchain==0.0.240, RAM = 16 GB, swap = 10 GB, SSD = 60 GB.

I try to run the simplest possible program with CTransformers:

```python
from langchain.llms import CTransformers
from langchain.chains import LLMChain
from langchain import PromptTemplate

prompt_template = """
You are an AI coding assistant and your task to solve the coding problems, and return coding snippets based on the
Query: {query}

You just return helpful answer and nothing else
Helpful Answer:
"""

prompt = PromptTemplate(template=prompt_template, input_variables=['query'])

llm = CTransformers(model="model/codellama-7b-instruct.ggmlv3.Q4_0.bin",
                    model_type="llama",
                    max_new_tokens=512,
                    temperature=0.2)

llm_chain = LLMChain(prompt=prompt, llm=llm)

llm_response = llm_chain.run({"query": "Write a python code to load a CSV file using pandas library"})
print(llm_response)
```

and I get a "core dumped" error without any other details. I have tried a lot of things but nothing resolved my problem. Can you help me please? Thanks.

### Suggestion:
_No response_
Issue: core dumped / segmentation fault
https://api.github.com/repos/langchain-ai/langchain/issues/12835/comments
3
2023-11-03T12:36:30Z
2024-02-09T16:08:28Z
https://github.com/langchain-ai/langchain/issues/12835
1,976,111,325
12,835
[ "hwchase17", "langchain" ]
### System Info
Python == 3.9.18
Langchain == 0.0.327

### Who can help?
MultiRetrievalQAChain

### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction
@hwchase17 @agol

Steps to reproduce behaviour:

1. Modified MultiRetrievalQAChain with source documents returned (added `return_source_documents=True`):

```python
class MultiRetrievalQAChainAddSource(MultiRetrievalQAChain):
    @classmethod
    def from_retrievers(
        cls,
        llm: BaseLanguageModel,
        retriever_infos: List[Dict[str, Any]],
        default_retriever: Optional[BaseRetriever] = None,
        default_prompt: Optional[PromptTemplate] = None,
        default_chain: Optional[Chain] = None,
        **kwargs: Any,
    ):
        if default_prompt and not default_retriever:
            raise ValueError(
                "`default_retriever` must be specified if `default_prompt` is "
                "provided. Received only `default_prompt`."
            )
        destinations = [f"{r['name']}: {r['description']}" for r in retriever_infos]
        destinations_str = "\n".join(destinations)
        router_template = MULTI_RETRIEVAL_ROUTER_TEMPLATE.format(destinations=destinations_str)
        router_prompt = PromptTemplate(
            template=router_template,
            input_variables=["input"],
            output_parser=RouterOutputParser(next_inputs_inner_key="query"),
        )
        router_chain = LLMRouterChain.from_llm(llm, router_prompt, return_source_documents=True)
        destination_chains = {}
        print('RETRIEVER_INFOSSSS', retriever_infos)
        for r_info in retriever_infos:
            prompt = r_info.get("prompt")
            retriever = r_info["retriever"]
            chain = RetrievalQA.from_llm(
                llm, prompt=prompt, retriever=retriever, return_source_documents=True
            )
            name = r_info["name"]
            destination_chains[name] = chain
        print('USING DESTINATION CHAIN', destination_chains)
        if default_chain:
            _default_chain = default_chain
        elif default_retriever:
            _default_chain = RetrievalQA.from_llm(
                llm, prompt=default_prompt, retriever=default_retriever,
                return_source_documents=True
            )
        else:
            prompt_template = DEFAULT_TEMPLATE.replace("input", "query")
            prompt = PromptTemplate(template=prompt_template, input_variables=["history", "query"])
            _default_chain = ConversationChain(
                llm=ChatOpenAI(), prompt=prompt, input_key="query", output_key="result"  # type: ignore
            )
        return cls(
            router_chain=router_chain,
            destination_chains=destination_chains,
            default_chain=_default_chain,
            **kwargs,
        )
```

2. Create a vectorstore: `db = Weaviate(client, index_name, 'text', attributes=['source'])`
3. Create retrievers: `retriever = db.as_retriever()`
4. Build the chain from retriever_infos for multiple retrievers:

```python
chain = MultiRetrievalQAChainAddSource.from_retrievers(
    llm=llm,
    retriever_infos=retriever_infos,
    verbose=True
)
```

5. Use meditations.pdf and bushido.pdf as source documents.
6. A query routed to one retriever returns source docs (in this example, meditations.pdf):

```
"text": "json\n{\n \"destination\": \"meditations.pdf\",\n \"next_inputs\": \"What does Marcus Aurelius emphasize the most in Meditations?\"\n}\n",
```
```
'result': 'Marcus Aurelius emphasizes the importance of..., 'source_documents': [Document(page_content=\"Marcus Aurelius' Meditations - tr. Casaubon v. 8.16 , uploaded to www.philaletheians.co.uk , 14 July 2013 \\nPage 128 of 128 A parting thought\", metadata={'page': 127, 'source': 'meditations.pdf'}), Document(page_content=\"For it is not lawful , \\nthat anything that is of another and inferior kind and nature , be it what it will, as \\neither popular applause , or honour , or riches , or pleasures ; should be suffered to \\nconfront and contest as it were, with that which is rational , and operatively good. For \\nall these things , if once though but for a while , they begin to please , they presently \\nprevail , and pervert a man’s mind , or turn a man from the right way. Do thou there-\\nfore I say absolutely and freely make choice of that which is best, and stick unto it. \\nNow, that they say is best, which is most profitable . If they mean profitable to man \\nas he is a rational man, stand thou to it, and maintain it; but if they mean profitable , \\nas he is a creature , only reject it; and from this thy tenet and conclusion keep off \\ncarefully all plausible shows and colours of external appearance , that thou mayest \\nbe able to discern things rightly .\", metadata={'page': 24, 'source': 'meditations.pdf'})]}
```

7. A query spanning multiple retrievers gets routed to DEFAULT:

```
"text": "json\n{\n \"destination\": \"DEFAULT\",\n \"next_inputs\": \"marcus meditations and bushido similarities in life long learning and self improvement\"\n}\n
```
```
"result": "Both Marcus Meditations and Bushido emphasize the importance of life-long learning and self-improvement. In Marcus Aurelius' \"Meditations,\" he emphasizes the need for constant self-reflection and introspection as a means to improve oneself. He encourages individuals to examine their thoughts, actions, and values in order to develop a virtuous character.\n\nSimilarly, Bushido, which is the code of conduct followed by samurais in feudal Japan, also highlights the significance of continuous self-improvement. Bushido emphasizes the values of loyalty, honor, and discipline, and it encourages individuals to constantly strive for self-mastery in various aspects of life.\n\nBoth philosophies recognize that personal growth and development are ongoing processes that require self-discipline, reflection, and a commitment to learning. They emphasize the importance of cultivating virtues and values that contribute to a meaningful and fulfilling life.\n\nIt's fascinating to see how different cultures and historical periods have explored similar concepts of self-improvement and personal growth. Do you have any other questions or topics you'd like to discuss?
```

### Expected behavior
For a single retriever, the source documents are returned. But for queries spanning multiple retrievers, no source documents are returned (potentially due to `\"destination\": \"DEFAULT\"`, just a hunch here).

For multiple retrievers, the expected output would look like:

```
"result": "Both Marcus Meditations and Bushido emphasize the importance of life-long learning and self-improvement. In Marcus Aurelius' \"Meditations,\" he emphasizes the need for constant self-reflection and introspection as a means to improve oneself. He encourages individuals to examine their thoughts, actions, and values in order to develop a virtuous character.\n\nSimilarly, Bushido, which is the code of conduct followed by samurais in feudal Japan, also highlights the significance of continuous self-improvement. Bushido emphasizes the values of loyalty, honor, and discipline, and it encourages individuals to constantly strive for self-mastery in various aspects of life.\n\nBoth philosophies recognize that personal growth and development are ongoing processes that require self-discipline, reflection, and a commitment to learning. They emphasize the importance of cultivating virtues and values that contribute to a meaningful and fulfilling life.\n\nIt's fascinating to see how different cultures and historical periods have explored similar concepts of self-improvement and personal growth. Do you have any other questions or topics you'd like to discuss?", 'source_documents': [Document(page_content=....", metadata={'page': 127, 'source': 'meditations.pdf'}), Document(page_content=...", metadata={'page': 24, 'source': 'meditations.pdf'}), Document(page_content=....", metadata={'page': ###, 'source': 'bushido.pdf'}), Document(page_content=...", metadata={'page': ###, 'source': 'bushido.pdf'})]
```
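One observation from the traces above: the DEFAULT route falls through to a `ConversationChain`, which has no retriever at all, so it can never attach `source_documents`. A hedged sketch of a fix is to give the default route its own `RetrievalQA` over a retriever that spans every source; `MergerRetriever` is my suggestion here, not part of the original chain:

```python
from langchain.retrievers import MergerRetriever
from langchain.chains import RetrievalQA

# Combine all per-document retrievers so the DEFAULT route can still cite
# sources when a question spans multiple documents.
merged = MergerRetriever(retrievers=[r["retriever"] for r in retriever_infos])

default_chain = RetrievalQA.from_llm(
    llm,
    retriever=merged,
    return_source_documents=True,
)
```

Passing this as `default_chain=default_chain` to `from_retrievers` should make the multi-document case return source documents too.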
MultiRetrievalQAChain - unable to return source document when I use multiple documents
https://api.github.com/repos/langchain-ai/langchain/issues/12834/comments
3
2023-11-03T12:32:47Z
2024-02-09T16:08:33Z
https://github.com/langchain-ai/langchain/issues/12834
1,976,102,264
12,834
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.
```python
def delete_document_embeddings_by_filename(file_name, persist_directory):
    print(file_name, "file_name is ")
    print(persist_directory, "persist_directory is")
    chroma_db = chromadb.PersistentClient(path=persist_directory)
    print(chroma_db, "chroma_db is ---------------->>>>>>>>>>")
    collection = chroma_db.get_collection(name="langchain")
    metadata_list = collection.get()['metadatas']
    print(metadata_list)
    file_names = []
    for metadata in metadata_list:
        filename = metadata['source'].split('\\')[-1]
        if filename not in file_names:
            file_names.append(filename)
    print(file_names, "file_names is kkkkkkkkkkkkkkkkkkkkkkk")
    print(collection, "collection is ---------------->>>>>>>>>>")
    print("hello world ------------------------>>>>>>>>>>>>>>>>>>")
    print(collection.get(where={"source": file_name})['ids'])
    collection.delete(where={"source": file_name})
```

`print(collection.get(where={"source": file_name})['ids'])` prints an empty list, and the collection object prints as:

```
name='langchain' id=UUID('8a5e8fff-93a4-49f3-8be7-5aac47cb3902') metadata=None
```

So the delete call matches nothing and the embeddings of the file are never removed.

### Suggestion:
_No response_
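A hedged guess at the cause, visible in the snippet itself: the `source` metadata stores the full path (the code splits on `'\\'` to get bare filenames), but the `where` filter matches the bare `file_name` against the full path, so nothing matches. A sketch of a fix that collects matching ids first and deletes by id, assuming the same chromadb client API used above:

```python
data = collection.get()  # one fetch; returns parallel 'ids' and 'metadatas' lists

# Match on the basename, since metadata['source'] holds the full path.
ids_to_delete = [
    id_
    for id_, meta in zip(data['ids'], data['metadatas'])
    if meta.get('source', '').split('\\')[-1] == file_name
]

if ids_to_delete:
    collection.delete(ids=ids_to_delete)
```

Alternatively, pass the full stored path in the `where` clause instead of the bare filename.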
Issue: Not able to delete embeddings of a file
https://api.github.com/repos/langchain-ai/langchain/issues/12833/comments
6
2023-11-03T12:28:42Z
2024-02-12T16:08:34Z
https://github.com/langchain-ai/langchain/issues/12833
1,976,092,559
12,833
[ "hwchase17", "langchain" ]
### System Info
- Langchain = 0.0.325
- Python = 3.10.0
- Windows

### Who can help?
@IlyaMichlin @hwchase17 @baskaryan

### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction
Running this example:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.schema.document import Document

splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,
    chunk_overlap=20,
    length_function=len,
    is_separator_regex=True,
    separators=["\n(?=\d.\d{1,2}(\.*)\n)", "\.(?=\n\w)", "\n", "\.", " "]
)

documents = [Document(page_content="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. \n1.1\nNeque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur? Quis autem vel eum iure reprehenderit qui in ea voluptate velit esse quam nihil molestiae consequatur, vel illum qui dolorem eum fugiat quo voluptas nulla pariatur?")]

splitted_documents = splitter.split_documents(documents)
print(splitted_documents)
```

raises a 'list index out of range' error on the line `splits = [_splits[i] + _splits[i + 1] for i in range(1, len(_splits), 2)]` of the text_splitter.py file:

```python
def _split_text_with_regex(
    text: str, separator: str, keep_separator: bool
) -> List[str]:
    # Now that we have the separator, split the text
    if separator:
        if keep_separator:
            # The parentheses in the pattern keep the delimiters in the result.
            _splits = re.split(f"({separator})", text)
            splits = [_splits[i] + _splits[i + 1] for i in range(1, len(_splits), 2)]
            if len(_splits) % 2 == 0:
                splits += _splits[-1:]
            splits = [_splits[0]] + splits
        else:
            splits = re.split(separator, text)
    else:
        splits = list(text)
    return [s for s in splits if s != ""]
```

### Expected behavior
The expected behavior was not to raise the error, but to continue the recursive split with the next patterns.
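A hedged diagnosis: the first separator contains its own capturing group, `(\.*)`. `_split_text_with_regex` wraps the whole separator in another group, so `re.split` returns two captured fields per match instead of one, which breaks the even/odd pairing arithmetic and produces the IndexError when there is exactly one match. If that reading is right, rewriting inner groups as non-capturing `(?:...)` avoids the crash without any library change:

```python
splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,
    chunk_overlap=20,
    length_function=len,
    is_separator_regex=True,
    # (?:...) is non-capturing, so re.split(f"({sep})", text) yields exactly
    # one captured field per match and the pairing logic in
    # _split_text_with_regex holds.
    separators=[r"\n(?=\d\.\d{1,2}(?:\.*)\n)", r"\.(?=\n\w)", "\n", r"\.", " "],
)
```

A more robust library-side fix would be to guard the pairing loop against odd layouts, but the non-capturing-group change is the quickest user-side workaround.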
RecursiveCharacterTextSplitter with regex separator raises error if there is only 1 match of the regex
https://api.github.com/repos/langchain-ai/langchain/issues/12832/comments
4
2023-11-03T12:25:13Z
2024-02-09T16:08:43Z
https://github.com/langchain-ai/langchain/issues/12832
1,976,083,720
12,832
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.
I am wondering if I need to adjust the prompt template of the combine_docs_chain (`combine_docs_chain.llm_chain.prompt.template`) to account for the needs of Llama 2 models, which require the following format:

```
[INST] <<SYS>>
You are a HR-assistant that answers questions:
<</SYS>>

{query} [/INST]
```

### Suggestion:
_No response_
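As far as I know, no: LangChain's built-in prompts are model-agnostic and nothing rewrites them into Llama-2's `[INST]` chat format automatically, so you do need to set it yourself. A minimal sketch of overriding the combine-docs prompt, using the attribute path from the question (the exact path varies by chain type, so treat it as an assumption):

```python
from langchain.prompts import PromptTemplate

llama2_template = """[INST] <<SYS>>
You are a HR-assistant that answers questions using the context below.
{context}
<</SYS>>

{question} [/INST]"""

qa_chain.combine_docs_chain.llm_chain.prompt = PromptTemplate(
    template=llama2_template,
    input_variables=["context", "question"],
)
```

Some model servers (e.g. certain hosted Llama-2 chat endpoints) apply the `[INST]` wrapping on their side; in that case the default prompt works unchanged.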
Issue: Is langchain automatically adjusting its prompts to account for required formats of models such as llama2?
https://api.github.com/repos/langchain-ai/langchain/issues/12826/comments
4
2023-11-03T10:56:15Z
2024-04-09T13:23:54Z
https://github.com/langchain-ai/langchain/issues/12826
1,975,931,873
12,826
[ "hwchase17", "langchain" ]
### System Info
langchain = 0.0.208
platform = Windows
python = 3.9.16

### Who can help?
_No response_

### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction
```python
from langchain.document_loaders import UnstructuredWordDocumentLoader

loader = UnstructuredWordDocumentLoader(docx_file_path, mode="elements")
data = loader.load()
data
```

### Expected behavior
The page numbers of the contents extracted using `UnstructuredWordDocumentLoader(docx_file_path, mode="elements")` are not in sync with the actual page numbers of the contents in the docx file.
Extraction of contents from docx files through UnstructuredWordDocumentLoader
https://api.github.com/repos/langchain-ai/langchain/issues/12825/comments
18
2023-11-03T10:03:17Z
2024-02-21T16:07:54Z
https://github.com/langchain-ai/langchain/issues/12825
1,975,828,491
12,825
[ "hwchase17", "langchain" ]
### Feature request
In the OpenSearchVectorSearch langchain wrapper there is no way to delete indexes from the vector DB.

### Motivation
I was working on a project and had to delete some indexes because they already existed in the DB. I had to write the code myself to do that.

### Your contribution
Yes, I can create a PR for the issue.
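As a stopgap until a wrapper method lands, one can reach through to the opensearch-py client that the vector store already holds. A hedged sketch, assuming `vectorstore` is an `OpenSearchVectorSearch` instance whose `client` attribute is the underlying OpenSearch client:

```python
# Delete the backing index directly via the opensearch-py client.
# This bypasses LangChain entirely, so any in-memory state the wrapper
# keeps about the index is not updated.
vectorstore.client.indices.delete(index="my-index")
```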
Request to add OpenSearchVectorSearch delete index method
https://api.github.com/repos/langchain-ai/langchain/issues/12824/comments
1
2023-11-03T09:43:28Z
2024-02-09T16:08:53Z
https://github.com/langchain-ai/langchain/issues/12824
1,975,799,508
12,824
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.
When you give a query, the chain first rephrases it (which isn't the issue by itself), but the rephrased query gets shown, which is very problematic: it delays the response, and the rephrasing is displayed to the user. For example, if I ask "What is an apple?", I first get shown something like "What is the definition of an apple?".

Generating the rephrased question delays the time to first token, and the rephrased question is shown to the user before the real answer comes out, which isn't good at all.

I want a way for the chain to rephrase the question without increasing the visible response time and without showing the rephrasing to the user.

### Suggestion:
_No response_
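The rephrasing usually leaks because the same streaming LLM (and its callbacks) is used for both the condense step and the answer step. A sketch of separating the two, using the `condense_question_llm` argument of `ConversationalRetrievalChain.from_llm`; `retriever` and the callback handler are assumed to exist:

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

# Non-streaming model for the internal question-condensing step: its output
# never reaches the user-facing stream.
condense_llm = ChatOpenAI(temperature=0)

# Streaming model for the final answer only; attach your handler here.
answer_llm = ChatOpenAI(temperature=0, streaming=True)

chain = ConversationalRetrievalChain.from_llm(
    llm=answer_llm,
    retriever=retriever,
    condense_question_llm=condense_llm,
)
```

The condense step still adds latency before the first answer token; the only ways around that are skipping rephrasing entirely or using a smaller, faster model for `condense_question_llm`.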
Issue: rephrasing of question in Conversational retrieval chain which delays responses and users see it
https://api.github.com/repos/langchain-ai/langchain/issues/12823/comments
3
2023-11-03T09:10:55Z
2024-02-09T16:08:58Z
https://github.com/langchain-ai/langchain/issues/12823
1,975,736,883
12,823
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.
I want to read an OTP from an email using the Gmail toolkit, but I am getting an error when the email body is HTML or the OTP is embedded in an image. How do I fix this?

### Suggestion:
_No response_
Unable to read OTP from an email whose content is in form of HTML/Image
https://api.github.com/repos/langchain-ai/langchain/issues/12820/comments
3
2023-11-03T06:34:28Z
2024-02-09T16:09:03Z
https://github.com/langchain-ai/langchain/issues/12820
1,975,535,523
12,820
[ "hwchase17", "langchain" ]
### Feature request
Move [Chat LangChain](https://github.com/langchain-ai/chat-langchain) into a template.

### Motivation
Chat LangChain has useful features (prompts, etc.) that should be captured in a template.

### Your contribution
Add template
[Chat template] Chat LangChain
https://api.github.com/repos/langchain-ai/langchain/issues/12818/comments
2
2023-11-03T04:14:37Z
2024-02-09T16:09:08Z
https://github.com/langchain-ai/langchain/issues/12818
1,975,429,808
12,818
[ "hwchase17", "langchain" ]
### Issue with current documentation:
Converting https://python.langchain.com/ to a PDF using https://github.com/jean-humann/docs-to-pdf runs forever, because on https://python.langchain.com/docs/expression_language/cookbook/tools the "next" pagination link leads back to a previous page, creating a cycle.

```
npx docs-to-pdf --initialDocURLs="https://python.langchain.com/" --contentSelector="article" --paginationSelector="a.pagination-nav__link.pagination-nav__link--next" --coverImage="https://upload.wikimedia.org/wikipedia/commons/3/3f/LangChain_logo.png" --coverTitle="LangChain"
```

### Idea or request for content:
_No response_
DOC: docs/expression_language/cookbook/tools --next leads to a previous page
https://api.github.com/repos/langchain-ai/langchain/issues/12815/comments
5
2023-11-03T02:53:58Z
2024-02-12T16:08:39Z
https://github.com/langchain-ai/langchain/issues/12815
1,975,379,949
12,815
[ "hwchase17", "langchain" ]
### System Info
langchain: 0.0.311

### Who can help?
_No response_

### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction
We're trying to load Confluence pages using `ConfluenceLoader.load()`. If we load pages by a list of page_ids:

```python
loader = ConfluenceLoader(
    url=baseUrl,
    username=username,
    api_key=api_key
)

documents = loader.load(
    page_ids=["2081227164", "2096890841", "2081227310"],
    include_attachments=False,
    limit=50,
    max_pages=1000
)
```

we are able to get the `when` field in the metadata. However, if we request pages by space:

```python
documents = loader.load(
    space_key=spaceName,
    include_attachments=False,
    limit=50,
    max_pages=1000
)
```

the `when` field is not in the metadata.

### Expected behavior
Is there any method by which we can request pages by space and also get the `when` field? This is really important for us. Thanks :)
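A hedged workaround sketch, assuming the difference comes from the space path not expanding the `version` field: enumerate the space's page ids with the underlying atlassian-python-api client (exposed as `loader.confluence`), then load by `page_ids`, which does return `when`:

```python
# Collect every page id in the space via the raw Confluence client.
pages = loader.confluence.get_all_pages_from_space(
    space=spaceName, start=0, limit=1000, status="current"
)
page_ids = [p["id"] for p in pages]

# Loading by page_ids goes through the code path that populates 'when'.
documents = loader.load(
    page_ids=page_ids,
    include_attachments=False,
    max_pages=1000,
)
```

This costs one extra listing request per space but keeps the rest of the pipeline unchanged.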
Langchain ConfluenceLoader.load(): Get "when" field in metadata
https://api.github.com/repos/langchain-ai/langchain/issues/12812/comments
4
2023-11-03T00:26:16Z
2024-05-14T16:06:46Z
https://github.com/langchain-ai/langchain/issues/12812
1,975,246,193
12,812
[ "hwchase17", "langchain" ]
### Feature request
Example of what NOT to do: https://github.com/langchain-ai/langchain/issues/12507

Langchain, PLEASE stop doing this. PLEASE make all your updates backwards compatible. As it is, if you keep making changes like moving modules around, existing scripts break with every new update of Langchain.

### Motivation
It is so frustrating to have to try and guess which version of Langchain will actually work with my code.

### Your contribution
Just stop moving modules around. You cause way more harm than good when you do that, because tutorials for Langchain have to be rewritten and new versions of Langchain are not compatible with Langchain code written just a few months ago.
PLEASE make all subsequent versions of Langchain backwards compatible!!!
https://api.github.com/repos/langchain-ai/langchain/issues/12803/comments
3
2023-11-02T21:07:39Z
2024-02-09T16:09:18Z
https://github.com/langchain-ai/langchain/issues/12803
1,975,037,118
12,803
[ "hwchase17", "langchain" ]
### Feature request
The new Cohere V3 embed endpoints have a new required parameter, `input_type`. Langchain currently doesn't support this parameter: https://txt.cohere.com/introducing-embed-v3/

### Motivation
Being able to use the newest multilingual embedding capabilities is something users who are developing an international product would find very important.

### Your contribution
Documentation!
Add support for Cohere embed V3
https://api.github.com/repos/langchain-ai/langchain/issues/12793/comments
10
2023-11-02T17:15:12Z
2023-11-07T14:22:53Z
https://github.com/langchain-ai/langchain/issues/12793
1,974,708,436
12,793
[ "hwchase17", "langchain" ]
### System Info
Sample Snippet:
```python
from langchain.chains.question_answering import load_qa_chain

chain = load_qa_chain(llm, chain_type="stuff")
query = "decentralized identity"
docs = docsearch.similarity_search(query)
text = chain({"input_documents": docs, "question": query, "verbose": True})
```
Error:
```
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: 2 schema violations found, please reformat your input and try again.
```

### Who can help?
_No response_

### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction
```python
from langchain.llms.bedrock import Bedrock
from langchain.embeddings import BedrockEmbeddings

modelId = 'cohere.command-text-v14'
bedrock_runtime = boto3.client(service_name='bedrock-runtime', region_name='us-east-1')

llm = Bedrock(model_id=modelId, client=bedrock_runtime)

docs = read_and_split_docs(data_directory)

bedrock_embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1", client=bedrock_runtime)

docsearch = Pinecone.from_texts(
    [t.page_content for t in docs],
    bedrock_embeddings,
    index_name=index_name
)

from langchain.chains.question_answering import load_qa_chain

chain = load_qa_chain(llm, chain_type="stuff")

query = "Human: decentralized identity. Assistant:"
docs = docsearch.similarity_search(query)
text = chain({"input_documents": docs, "question": query, "verbose": True})
```

### Expected behavior
Return some response.
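A hedged diagnosis: at the time, LangChain's Bedrock input adapter only knew the Anthropic/AI21/Amazon request shapes, so a `cohere.*` model id got a body that violates Cohere's schema (hence the "2 schema violations"). If upgrading LangChain is not an option, invoking the runtime directly shows the payload Cohere expects; the response shape below is my understanding of Cohere Command on Bedrock, so verify against the current model docs:

```python
import json

body = json.dumps({
    "prompt": "decentralized identity",
    "max_tokens": 400,
    "temperature": 0.5,
})

resp = bedrock_runtime.invoke_model(
    modelId="cohere.command-text-v14",
    body=body,
    accept="application/json",
    contentType="application/json",
)

print(json.loads(resp["body"].read())["generations"][0]["text"])
```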
Issue: Amazon Bedrock Cohere Command LLM - Malformed input request: 2 schema violations found
https://api.github.com/repos/langchain-ai/langchain/issues/12790/comments
4
2023-11-02T16:06:47Z
2024-02-12T16:08:43Z
https://github.com/langchain-ai/langchain/issues/12790
1,974,583,805
12,790
[ "hwchase17", "langchain" ]
When I used `vocab_only=True` to tokenize and then tried to add documents to an OpenSearch vector store, I hit the following assertion. Could you please help me fix it?

```
GGML_ASSERT: C:\Users\pip-install-vabntc\project\llama-cpp-python\vendor\llama.cpp\llama.cpp:4032: !!kv_self.ctx
```
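For context, `vocab_only=True` tells llama.cpp to load only the tokenizer vocabulary, so no weights or KV cache exist when an embedding forward pass runs, which is exactly what the `!!kv_self.ctx` assertion checks. A minimal sketch of the likely fix, keeping a full model instance for embeddings (the model path is a placeholder):

```python
from langchain.embeddings import LlamaCppEmbeddings

# Embedding requires a fully loaded model; do not set vocab_only here.
embeddings = LlamaCppEmbeddings(model_path="models/llama-2-7b-chat.Q4_0.bin")

vectors = embeddings.embed_documents(["some chunk of text"])
```

If you only need tokenization, keep a separate `vocab_only` instance for that and never route embedding calls through it.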
GGML_ASSERT when using LlamaCppEmbeddings using langchain with quantized Llama 7B chat model
https://api.github.com/repos/langchain-ai/langchain/issues/12788/comments
4
2023-11-02T15:23:06Z
2023-11-03T04:00:11Z
https://github.com/langchain-ai/langchain/issues/12788
1,974,495,685
12,788
[ "hwchase17", "langchain" ]
### System Info
Version 0.0.328, macOS, Python 3.11.5

In [_convert_website_search_response()](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/retrievers/google_vertex_ai_search.py#L120-L154) of [langchain.retrievers.google_vertex_ai_search](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/retrievers/google_vertex_ai_search.py), if no documents are found it will try to print:

```python
print(
    f"No {chunk_type} could be found.\n"
    "Make sure that your data store is using Advanced Website Indexing.\n"
    "https://cloud.google.com/generative-ai-app-builder/docs/about-advanced-features#advanced-website-indexing"  # noqa: E501
)
```

However, the `chunk_type` variable is defined only if documents have been found, and is therefore unbound, leading to an exception and crash.

### Who can help?
_No response_

### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction
Any query that fails to produce documents at all.

### Expected behavior
The variable should be bound so that the exception is avoided.
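A sketch of the fix: bind `chunk_type` before the loop so the "nothing found" message can be printed even when the result set is empty. The attribute name `get_extractive_answers` reflects my reading of the current source and should be checked against it:

```python
# Derive chunk_type unconditionally, outside the per-result loop.
chunk_type = (
    "extractive_answers" if self.get_extractive_answers else "extractive_segments"
)
documents = []
for result in search_results:
    ...  # unchanged conversion logic

if not documents:
    print(
        f"No {chunk_type} could be found.\n"
        "Make sure that your data store is using Advanced Website Indexing.\n"
        "https://cloud.google.com/generative-ai-app-builder/docs/about-advanced-features#advanced-website-indexing"
    )
```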
UnboundLocalError in langchain.retrievers.google_vertex_ai_search
https://api.github.com/repos/langchain-ai/langchain/issues/12780/comments
5
2023-11-02T13:07:25Z
2024-02-05T22:55:14Z
https://github.com/langchain-ai/langchain/issues/12780
1,974,212,272
12,780
[ "hwchase17", "langchain" ]
### Feature request
WebResearchRetriever is a really great way to build a RAG-based solution on public data. The challenge with using it in production is two things:

1. WebResearchRetriever contains a field called `url_database`, and it's an in-memory Python list; it would be good to provide some sort of alternative (maybe a memory-based backend).
2. Documents on the web get updated, so some staleness handling after the Google search, to reload and re-ingest documents, would be very useful (i.e. expire the url from the database after some time delta, check if the url is already in the vectorstore, purge the docs, then re-ingest the site).

Suggestion: add the following fields to WebResearchRetriever:

1. `url_seen_ttl: TimeDelta`
2. `url_database_backend: MemoryBackend` (some memory object in langchain that supports TTL or the ability to save a last-lived field)
3. `purge_already_existing_documents: bool`

I think the above should be optional, to avoid adding more latency for purging documents and to avoid performance regressions. The Vectorstore abstraction also doesn't provide a simple way to filter by metadata; unsure what to do there.

### Motivation
I am trying to run a WebResearchRetriever over public technical documentation, and it gets updated every sprint release cycle, so keeping something long-lived in a vectordb does not meet the requirements. These two specific asks would let me use the out-of-the-box functionality in langchain rather than overriding BaseRetriever and making my own custom web search retriever with memory.

### Your contribution
I can help with a PR if it makes sense to have in langchain; otherwise I will take this as not something the community wants, which is fair. I have a good sense of how to implement it for my own use case.
WebResearchRetriever enhancement with Memory and Reingestion of Documents
https://api.github.com/repos/langchain-ai/langchain/issues/12776/comments
2
2023-11-02T12:35:11Z
2024-02-08T16:10:20Z
https://github.com/langchain-ai/langchain/issues/12776
1,974,157,607
12,776
[ "hwchase17", "langchain" ]
### System Info
linux: 6.5.0-asahi-15-1-edge-ARCH
python = 3.11.5
langchain = 0.0.327
sentence-transformers = 2.2.2

### Who can help?
_No response_

### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction
The following code simply causes the script to crash with a core dump: `zsh: segmentation fault (core dumped) python3 bge.py`. Note that I have no trouble downloading the model when this script runs for the first time. Also, this works as expected on my x86 machines.

```python
from langchain.embeddings import HuggingFaceBgeEmbeddings

def embed(query):
    model_name = "BAAI/bge-base-en-v1.5"
    encode_kwargs = {'normalize_embeddings': True}  # set True to compute cosine similarity

    bge_model = HuggingFaceBgeEmbeddings(
        model_name=model_name,
        model_kwargs={'device': 'cpu'},
        encode_kwargs=encode_kwargs
    )
    vector = bge_model.embed_query(query)
    print(vector)

def main():
    embed("Hello there.")

if __name__ == "__main__":
    main()
```

### Expected behavior
This script should simply print a list of floating point numbers, i.e., the result of the embedding.
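A debugging sketch that may help narrow this down: bypass LangChain and call sentence-transformers directly. If this also segfaults, the crash is in torch or sentence-transformers on aarch64 rather than in `HuggingFaceBgeEmbeddings` itself:

```python
from sentence_transformers import SentenceTransformer

# Same model and settings as the failing script, minus the LangChain wrapper.
model = SentenceTransformer("BAAI/bge-base-en-v1.5", device="cpu")
print(model.encode(["Hello there."], normalize_embeddings=True))
```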
Core dump with Hugginface Embedding on aarch64 (M1 macbook)
https://api.github.com/repos/langchain-ai/langchain/issues/12775/comments
14
2023-11-02T11:51:02Z
2024-02-18T16:06:42Z
https://github.com/langchain-ai/langchain/issues/12775
1,974,087,849
12,775
[ "hwchase17", "langchain" ]
### Feature request
**Status Quo**

Currently, the [MathpixPDFLoader](https://api.python.langchain.com/en/latest/_modules/langchain/document_loaders/pdf.html#MathpixPDFLoader) sets its data property as follows:

```python
@property
def data(self) -> dict:
    options = {"conversion_formats": {self.processed_file_format: True}}
    return {"options_json": json.dumps(options)}
```

However, there are a variety of additional useful parameters that can be defined in the MathPix API calls: https://docs.mathpix.com/#request-parameters-6

For instance, this includes `enable_tables_fallback`, `numbers_default_to_math`, `enable_spell_check`, etc. It would be great if these parameters could be specified as keyword arguments directly in the MathpixPDFLoader.

### Motivation
For instance, tables do not always get parsed properly in PDFs, and therefore in some cases it can make sense to set `enable_tables_fallback=True`.

### Your contribution
TBD
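In the meantime, a small subclass can merge extra options into the request payload. A sketch under the assumption that the parent constructor accepts the file path plus keyword arguments and that `processed_file_format` is set as shown above:

```python
import json
from langchain.document_loaders import MathpixPDFLoader


class ConfigurableMathpixPDFLoader(MathpixPDFLoader):
    """Sketch: accept extra Mathpix request options via a keyword argument."""

    def __init__(self, file_path: str, extra_options: dict = None, **kwargs):
        super().__init__(file_path, **kwargs)
        self.extra_options = extra_options or {}

    @property
    def data(self) -> dict:
        options = {
            "conversion_formats": {self.processed_file_format: True},
            **self.extra_options,  # e.g. {"enable_tables_fallback": True}
        }
        return {"options_json": json.dumps(options)}


loader = ConfigurableMathpixPDFLoader(
    "paper.pdf", extra_options={"enable_tables_fallback": True}
)
```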
Add option to set request parameters for MathpixPDFLoader
https://api.github.com/repos/langchain-ai/langchain/issues/12773/comments
3
2023-11-02T10:29:26Z
2024-02-01T19:02:45Z
https://github.com/langchain-ai/langchain/issues/12773
1,973,949,972
12,773
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.
The error:

```
{'error': [{'message': "'id' is a reserved property name, no such prop with name 'id' found in class 'LangChain_14ee2519c3154dcb9ea3f47a8967d0e8' in the schema. Check your schema files for which properties in this class are available"}]}
```

The code:

```python
loader = ConfluenceLoader(
    url="URL",
    username="USERNAME",
    api_key="API_KEY")

documents = loader.load(space_key="SPACE_KEY", include_attachments=True, limit=150)

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(documents)

db = Weaviate.from_documents(documents, embeddings, client=client, by_text=False)

chatbot = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=db.as_retriever(search_type="similarity", search_kwargs={"k": 1}),
)

prompt = PromptTemplate(template=template, input_variables=["query"])

import sys

while True:
    message = input("prompt: ")
    if message == 'exit':
        print('Exiting')
        sys.exit()
    if message == '':
        continue
    response = chatbot.run(prompt.format(query=message))
```

### Suggestion:
_No response_
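A hedged guess at the root cause plus a workaround sketch: ConfluenceLoader puts an `id` key into each document's metadata, and `id` is a reserved property name in Weaviate, so ingestion fails on the schema. Renaming the key before ingestion should sidestep the error:

```python
# Rename the reserved 'id' metadata key before handing documents to Weaviate.
for doc in documents:
    if "id" in doc.metadata:
        doc.metadata["page_id"] = doc.metadata.pop("id")

db = Weaviate.from_documents(documents, embeddings, client=client, by_text=False)
```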
Issue: Schema error when ingesting documents
https://api.github.com/repos/langchain-ai/langchain/issues/12772/comments
2
2023-11-02T10:19:42Z
2024-02-09T16:09:28Z
https://github.com/langchain-ai/langchain/issues/12772
1,973,933,503
12,772
[ "hwchase17", "langchain" ]
### System Info
Running langchain in a Nextjs app:
```
"langchain": "^0.0.175",
"next": "^13.5.2",
"react": "18.2.0",
```

### Who can help?
@eyurtsev

### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction
I'm using the YoutubeLoader as described in the docs. The YoutubeLoader is part of a route handler in my NextJs application.

```typescript
// in youtube service
export async function getYoutubeTranscript(youtubeUrl: string) {
  const loader = YoutubeLoader.createFromUrl(youtubeUrl, {
    language: 'en',
    addVideoInfo: true,
  })
  const docs = await loader.load()
  return docs
    .map((doc) => doc.pageContent)
    .join(' ')
    .replaceAll('\n', ' ')
}
```
```typescript
// in route handler
const transcript = await getYoutubeTranscript(link)
```

When using the route, I get an error because of a missing dependency; on my machine the route still works, though.

![image](https://github.com/langchain-ai/langchain/assets/113305699/0890279e-f741-408a-9cb1-7e6cbc49c878)

Manually installing canvas via `npm i canvas` fixes the issue, but when I deploy to Vercel, the function exceeds the maximum size limit of 50 MB compressed, because the canvas package is 45 MB compressed...

Not installing the dependency causes the route on Vercel to always return a 405 because of the missing dependency:

![image](https://github.com/langchain-ai/langchain/assets/113305699/bf1b95bf-4f99-4ceb-8e6f-740f14346cd3)

I realize that this is not caused by the YoutubeLoader itself, but I'm still hoping for any pointers or ideas on how to fix this.

### Expected behavior
YoutubeLoader works without throwing a 'module not found' error.
YoutubeLoader throws 'module not found' error
https://api.github.com/repos/langchain-ai/langchain/issues/12771/comments
4
2023-11-02T10:15:03Z
2024-02-12T16:08:54Z
https://github.com/langchain-ai/langchain/issues/12771
1,973,925,566
12,771
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.
When using the FAISS vector store, how can I get the size of the vector store? For example, I used `FAISS.load_local(path, embeddings)` to load a vector store; how do I get the size of the loaded store, i.e. the total number of documents in it?

### Suggestion:
_No response_
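Two ways to read the size off a loaded store, sketched below. `index.ntotal` is the vector count straight from the underlying faiss index; `docstore._dict` is a private attribute of the in-memory docstore, so it may change between versions:

```python
db = FAISS.load_local(path, embeddings)

# Number of stored vectors, from the raw faiss index:
print(db.index.ntotal)

# Number of documents in the in-memory docstore (private attribute):
print(len(db.docstore._dict))
```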
Issue: When using FAISS vector store, how can I get the size of vector store
https://api.github.com/repos/langchain-ai/langchain/issues/12767/comments
5
2023-11-02T07:38:15Z
2024-03-21T04:37:09Z
https://github.com/langchain-ai/langchain/issues/12767
1,973,677,488
12,767
[ "hwchase17", "langchain" ]
### System Info
Langchain == 0.0.327
sqlite3

Here's how the setup is done.

Entity memory:
```python
memory = ConversationEntityMemory(
    llm=llm,
    input_key='question',
    entity_store=store,
    chat_history_key='chat_history',
    return_messages=True
)
```

Entity store:
```python
store = SQLiteEntityStore(
    session_id=f'session_{name}',
    db_file=os.path.abspath('storage/db/chat_history.sqlite3'),
    table_name='chat_history'
)
```

It doesn't store any chat history to the sqlite database. Of course, a table for each conversation id does get created, and it stores only one row, i.e. the summary of the conversation. But nothing else gets stored.

I am using `RetrievalQA` as well as `ConversationalRetrievalChain`, and it doesn't work with either of them:

```python
chain = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vectordb,  # chroma db is used and works properly
    chain_type_kwargs={
        "prompt": QUERY_PROMPT,
        "memory": memory,
        "verbose": False
    },
    # memory=memory,  # doesn't work here
    verbose=False
)
```
or
```python
chain = ConversationalRetrievalChain.from_llm(
    retriever=vectordb,
    llm=llm,
    memory=memory
)
```

The LLM is as provided:
```python
llm = ChatOpenAI(...)
```

### Who can help?
_No response_

### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction
Checking the chat_history.py script, I found that this particular method is not implemented yet.

File path:
```
.venv/lib/python3.11/site-packages/langchain/schema/chat_history.py
```

Method details (lines 56-63):
```python
@abstractmethod
def add_message(self, message: BaseMessage) -> None:
    """Add a Message object to the store.

    Args:
        message: A BaseMessage object to store.
    """
    raise NotImplementedError()
```

I think this method is responsible for storing the chat history to the persistent database.

### Expected behavior
The code should save the chat history (prompt + response) to the sqlite database after each prompt.
ConversationEntityMemory + SQLiteEntityStore doesn't store chat_history to the database at all
https://api.github.com/repos/langchain-ai/langchain/issues/12766/comments
2
2023-11-02T07:34:53Z
2024-02-08T16:10:36Z
https://github.com/langchain-ai/langchain/issues/12766
1,973,673,398
12,766
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.
In create_sql_agent, when the agent queries the schema of a table with a partition field, it never adds the partition filter, and the query fails. How can I make the agent add the partition field every time it queries such a table's schema? Many thanks.

### Suggestion:
_No response_
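One approach is to bake the rule into the agent's instructions via the `prefix` argument of `create_sql_agent`. A hedged sketch, assuming the zero-shot SQL agent; `dt` below is a placeholder for your actual partition column:

```python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit

custom_prefix = (
    "You are an agent designed to interact with a SQL database. "
    "Every table is partitioned. Whenever you inspect or query a table, you "
    "MUST include a filter on its partition column, e.g. WHERE dt = '...'. "
    "Never run a query without a partition filter."
)

agent = create_sql_agent(
    llm=llm,
    toolkit=SQLDatabaseToolkit(db=db, llm=llm),
    prefix=custom_prefix,
    verbose=True,
)
```

This is prompt-level guidance only; the model can still forget it, so validating the generated SQL before execution is the robust complement.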
create_sql_agent checks schema of a table with partiton field always goes wrong
https://api.github.com/repos/langchain-ai/langchain/issues/12765/comments
2
2023-11-02T06:38:16Z
2024-02-08T16:10:40Z
https://github.com/langchain-ai/langchain/issues/12765
1,973,604,467
12,765
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.
I tried to do web scraping with LangChain and used the AsyncChromiumLoader as per the LangChain documentation, but I was getting an asyncio.run() error, so I made some changes that I found on the web, but it's still not working. Below is the code:

```python
import asyncio
import playwright
from langchain.document_loaders import AsyncChromiumLoader
from langchain.document_transformers import BeautifulSoupTransformer
import nest_asyncio

nest_asyncio.apply()

async def main():
    # Create an instance of AsyncChromiumLoader with a list of URLs
    urls = ["https://www.hdfcbank.com/personal/borrow/popular-loans/personal-loan/eligibility"]  # Replace with your desired URLs
    loader = AsyncChromiumLoader(urls)

    # Load the URLs
    html = await loader.load()

    # Print the HTML content
    for url, content in html.items():
        print(f"URL: {url}\nHTML Content: {content}\n")

if __name__ == "__main__":
    asyncio.run(main())
```

### Suggestion:
_No response_
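Two bugs are visible in the snippet itself: `loader.load()` is synchronous (it drives the event loop internally), so awaiting it and then calling `asyncio.run` around it causes the nested-loop errors; and it returns a list of `Document` objects, not a dict, so `.items()` fails. A corrected sketch as a plain script:

```python
from langchain.document_loaders import AsyncChromiumLoader

urls = ["https://www.hdfcbank.com/personal/borrow/popular-loans/personal-loan/eligibility"]
loader = AsyncChromiumLoader(urls)

# load() is synchronous and returns List[Document]; no await, no asyncio.run.
docs = loader.load()
for doc in docs:
    print(doc.metadata.get("source"), doc.page_content[:200])
```

Inside a Jupyter notebook (where an event loop is already running), keep the `nest_asyncio.apply()` call before `loader.load()`.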
Issue: getting not implemented error when using async chromium loader
https://api.github.com/repos/langchain-ai/langchain/issues/12763/comments
7
2023-11-02T04:34:54Z
2024-07-06T16:05:28Z
https://github.com/langchain-ai/langchain/issues/12763
1,973,488,881
12,763
[ "hwchase17", "langchain" ]
### Feature request
A DPP (Determinantal Point Process) retriever for retrieving examples.

### Motivation
A DPPRetriever can obtain diverse examples. Therefore, I think DPP is needed in langchain.

### Your contribution
NO
DPP retriever
https://api.github.com/repos/langchain-ai/langchain/issues/12761/comments
3
2023-11-02T03:08:49Z
2024-02-20T16:07:41Z
https://github.com/langchain-ai/langchain/issues/12761
1,973,423,287
12,761
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.
I provide the agent a get_date function as a tool, but when I ask the agent what today's date is, the agent produces the action and action input, yet instead of executing it, it directly finishes the chain.

![image](https://github.com/langchain-ai/langchain/assets/149214596/737293ae-ef41-4c66-aef2-fe1267217c37)
![image](https://github.com/langchain-ai/langchain/assets/149214596/e1644d5e-9e52-4322-904a-b332f81d32c8)
![image](https://github.com/langchain-ai/langchain/assets/149214596/eaadad2c-0bc9-43ef-8b5e-d5a12626a3ea)

### Suggestion:
_No response_
Issue: Why did the agent provide the action and action input, but instead of executing it, it directly finished the chain
https://api.github.com/repos/langchain-ai/langchain/issues/12759/comments
2
2023-11-02T02:24:40Z
2024-02-08T16:10:45Z
https://github.com/langchain-ai/langchain/issues/12759
1,973,385,322
12,759
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.
I created an analytic chatbot using Langchain (with tools and agents) for the backend and Streamlit for the frontend. It works, but for some users' questions it takes too much time to output anything. Looking at the intermediate steps, I can see that the chatbot tries to print out all relevant rows. For example, below, the chatbot found 40 relevant comments and printed them out one by one in one of its intermediate steps (which takes up to one minute).

[![enter image description here][1]][1]

Code for the chatbot:

```python
def load_data(path):
    return pd.read_csv(path)

if st.sidebar.button('Use Data'):
    # If button is clicked, load the EDW.csv file
    st.session_state["df"] = load_data('./data/EDW.csv')

uploaded_file = st.sidebar.file_uploader("Choose a CSV file", type="csv")

if "df" in st.session_state:
    msgs = StreamlitChatMessageHistory()
    memory = ConversationBufferWindowMemory(chat_memory=msgs, return_messages=True, k=5, memory_key="chat_history", output_key="output")
    if len(msgs.messages) == 0 or st.sidebar.button("Reset chat history"):
        msgs.clear()
        msgs.add_ai_message("How can I help you?")
        st.session_state.steps = {}

    avatars = {"human": "user", "ai": "assistant"}

    # Display a chat input widget
    if prompt := st.chat_input(placeholder=""):
        st.chat_message("user").write(prompt)

        llm = AzureChatOpenAI(
            deployment_name="gpt-4",
            model_name="gpt-4",
            openai_api_key=os.environ["OPENAI_API_KEY"],
            openai_api_version=os.environ["OPENAI_API_VERSION"],
            openai_api_base=os.environ["OPENAI_API_BASE"],
            temperature=0,
            streaming=True
        )

        max_number_of_rows = 40
        agent_analytics_node = create_pandas_dataframe_agent(
            llm,
            st.session_state["df"],
            verbose=False,
            agent_type=AgentType.OPENAI_FUNCTIONS,
            reduce_k_below_max_tokens=True,  # to not exceed token limit
            max_execution_time=20,
            early_stopping_method="generate",  # will generate a final answer after the max_execution_time has been surpassed
            # max_iterations=2,  # to cap an agent at taking a certain number of steps
        )
        tool_analytics_node = Tool(
            return_intermediate_steps=False,
            name='Analytics Node',
            func=agent_analytics_node.run,
            description=f'''
            This tool is useful when you need to answer questions about data stored in a pandas dataframe, referred to as 'df'.
            'df' comprises the following columns: {st.session_state["df"].columns.to_list()}.
            Here is a sample of the data: {st.session_state["df"].head(5)}.
            When working with df, ensure not to output more than {max_number_of_rows} rows at once, either in intermediate steps or in the final answer. This is because df could contain too many rows, which could potentially overload memory, for example instead of `df[df['survey_comment'].str.contains('wet', na=False, case=False)]['survey_comment'].tolist()` use `df[df['survey_comment'].str.contains('wet', na=False, case=False)]['survey_comment'].head({max_number_of_rows}).tolist()`.
            '''
        )
        tools = [tool_analytics_node]
        chat_agent = ConversationalChatAgent.from_llm_and_tools(llm=llm, tools=tools, return_intermediate_steps=False)
        executor = AgentExecutor.from_agent_and_tools(
            agent=chat_agent,
            tools=tools,
            memory=memory,
            return_intermediate_steps=False,
            handle_parsing_errors=True,
            verbose=False,
        )
        with st.chat_message("assistant"):
            st_cb = StreamlitCallbackHandler(st.container(), expand_new_thoughts=False)
            response = executor(prompt, callbacks=[st_cb])
            st.write(response["output"])
```

[1]: https://i.stack.imgur.com/xyaVZl.png

### Suggestion:
My questions are:
1) Is there any way to speed up this process?
2) How can I disable the intermediate output of the chatbot? (I already set `return_intermediate_steps=False`, `verbose=False`, and `expand_new_thoughts=False`, but the chatbot still shows intermediate steps.)
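On question 2, a likely culprit is visible in the code itself: `StreamlitCallbackHandler` is precisely the component that renders the agent's thoughts in the UI, and `return_intermediate_steps` / `verbose` only control the returned data and console logging, not that widget. A minimal sketch of the change:

```python
with st.chat_message("assistant"):
    # No StreamlitCallbackHandler -> no step-by-step rendering in the UI.
    response = executor(prompt)
    st.write(response["output"])
```

This also helps question 1 indirectly: rendering forty rows of intermediate output in the browser is part of the perceived slowness, though the agent still spends the same time generating them internally.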
Issue: analytic chatbot that's based on Langchain (with agents and tools) and Streamlit is too slow
https://api.github.com/repos/langchain-ai/langchain/issues/12758/comments
2
2023-11-02T01:05:08Z
2024-04-08T16:07:51Z
https://github.com/langchain-ai/langchain/issues/12758
1,973,327,167
12,758
[ "hwchase17", "langchain" ]
### System Info
Running with `langchain==0.0.327` on Fedora Linux and Windows.
Finetuning with `gpt-3.5-turbo-16k-0613`.
Language: `Python 3.10.11`

### Who can help?
@hwchase17 @agola11

### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction
Hello, I have created a simple langchain agent with a Tool and an LLM. However, the LLM passed into the agent actually points to my **finetuned** GPT instance. We notice that the output responses contain an accurate response, but repeated and separated by a newline.

I have [fine-tuned on the sample prompts](https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset) from the official OpenAI documentation on making a sarcastic chatbot. Here is the JSONL file I used for fine tuning: https://pastebin.com/BDV0Tjyk.

Here is my agent code:

```python
llm = ChatOpenAI(temperature=0, model=<OUR_MODEL_ID>, streaming=False)

tools = [
    Tool.from_function(
        func=add_asdf,
        name="add_asdf",
        description="useful when someone asks to add asdf to a string.",
    ),
]

agent_kwargs = {
    "system_message": SystemMessage(
        content="Marv is a factual chatbot that is also sarcastic."
    ),
}

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    agent_kwargs=agent_kwargs,
    max_execution_time=90,
    early_stopping_method="generate",
    handle_parsing_errors=True,
    tags=[],
)

result = agent.run("Where's the sahara desert located?")
print(result)
```

Now this does seem to work correctly and use the fine-tuned instance (as we can see, the responses are definitely sarcastic), but we notice that in its response the agent seems to be combining **the same response twice, separated by a newline.**

Output:
```
In North Africa. But who's keeping track of giant deserts anyway?
In North Africa. But who's keeping track of giant deserts anyway?
```

Looking at the generations from the debug:
```json
"generations": [
  [
    {
      "text": "In North Africa. But who's keeping track of giant deserts anyway?\nIn North Africa. But who's keeping track of giant deserts anyway?",
      "generation_info": {
        "finish_reason": "stop"
      },
      "type": "ChatGeneration",
      "message": {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "messages",
          "AIMessage"
        ],
        "kwargs": {
          "content": "In North Africa. But who's keeping track of giant deserts anyway?\nIn North Africa. But who's keeping track of giant deserts anyway?",
          "additional_kwargs": {}
        }
      }
    }
  ]
]
```

Some important details:
- We notice that we only execute the LLM _one_ time (i.e. there are no duplicate runs of the LLM in the chain).
- Our fine-tuned instance works perfectly in the OpenAI playground and with the OpenAI python library. **Hence, this seems to be a langchain-specific issue and nothing to do with our training data or fine tuning.**
- The issue does not occur 100% of the time. Occasionally, only one response is returned.

I am happy to provide more information on debug logs or anything about the fine-tuned model for replication. We'd love any information or nuances about using a fine-tuned instance with a Langchain agent, or guidance on what could be going wrong. Thanks!

### Expected behavior
Only one response should be generated:
```
In North Africa. But who's keeping track of giant deserts anyway?
```

Instead, the response contains a copy of itself separated by a newline:
```
{
  "output": "In North Africa. But who's keeping track of giant deserts anyway?\nIn North Africa. But who's keeping track of giant deserts anyway?"
}
```
Langchain Agent Duplicates Response Separated by "\n" with Finetuned LLM
https://api.github.com/repos/langchain-ai/langchain/issues/12752/comments
3
2023-11-01T23:15:52Z
2024-03-27T12:43:12Z
https://github.com/langchain-ai/langchain/issues/12752
1,973,243,397
12,752
[ "hwchase17", "langchain" ]
### Feature request

Template that shows routing across N datastores.

### Motivation

Reasoning about [where to fetch information from](https://python.langchain.com/docs/expression_language/how_to/routing) given a natural language query is important for retrieval across data sources.

### Your contribution

Add template
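A minimal sketch of what such a template might contain, using LCEL's `RunnableBranch`. Everything here is an assumption for illustration: `search_docs` and `search_code` are stand-ins for real datastore queries (e.g., vector store retrievers), and the one-word classifier prompt is only one possible routing strategy.

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnableBranch


def search_docs(question: str) -> str:
    # Stand-in for querying a documentation datastore.
    return f"[docs store] results for: {question}"


def search_code(question: str) -> str:
    # Stand-in for querying a code datastore.
    return f"[code store] results for: {question}"


# Ask the LLM which datastore the question belongs to.
classifier = (
    ChatPromptTemplate.from_template(
        "Is this question about `docs` or `code`? Answer with one word.\n\n"
        "Question: {question}"
    )
    | ChatOpenAI(temperature=0)
    | StrOutputParser()
)

# Route to the matching datastore, defaulting to docs.
branch = RunnableBranch(
    (lambda x: "code" in x["topic"].lower(), lambda x: search_code(x["question"])),
    lambda x: search_docs(x["question"]),
)

chain = {"topic": classifier, "question": lambda x: x["question"]} | branch
print(chain.invoke({"question": "Where is RunnableBranch implemented?"}))
```

The same pattern extends to N datastores by adding one `(condition, runnable)` pair per store.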
[Extraction template] Query decomposition + routing
https://api.github.com/repos/langchain-ai/langchain/issues/12745/comments
2
2023-11-01T21:05:55Z
2024-02-07T16:09:43Z
https://github.com/langchain-ai/langchain/issues/12745
1,973,112,630
12,745
[ "hwchase17", "langchain" ]
### Feature request

Add a template to simplify LLM fine-tuning.

### Motivation

[LLM fine-tuning](https://blog.langchain.dev/using-langsmith-to-support-fine-tuning-of-open-source-llms/) can improve task-specific performance.

### Your contribution

Add template
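For reference, one plausible core for such a template, sketched with the pre-1.0 `openai` Python client. The file name `train.jsonl` and the base model are assumptions, and a real template would likely also export training examples from LangSmith runs first.

```python
import openai

# Upload a JSONL file of chat-formatted training examples (assumed to exist).
training_file = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

# Start a fine-tuning job on the base model; poll its status until it completes.
job = openai.FineTuningJob.create(training_file=training_file.id, model="gpt-3.5-turbo")
print(job.id, job.status)
```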
[Fine tuning template] Fine-tune LLM
https://api.github.com/repos/langchain-ai/langchain/issues/12744/comments
1
2023-11-01T21:00:11Z
2024-02-07T16:09:48Z
https://github.com/langchain-ai/langchain/issues/12744
1,973,104,857
12,744
[ "hwchase17", "langchain" ]
### Feature request

Set up a template to simplify fine-tuning of embedding models.

### Motivation

[Fine-tuning](https://blog.langchain.dev/using-langsmith-to-support-fine-tuning-of-open-source-llms/) of embedding models can improve RAG.

### Your contribution

Add template
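As a rough illustration of what the template could wrap, here is a minimal `sentence-transformers` fine-tuning loop. The base model, the (query, passage) pairs, and the hyperparameters are all placeholder assumptions.

```python
from sentence_transformers import InputExample, SentenceTransformer, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("all-MiniLM-L6-v2")

# (query, relevant passage) pairs; MultipleNegativesRankingLoss treats the
# other passages in each batch as negatives.
train_examples = [
    InputExample(texts=["What is LCEL?", "LCEL is the LangChain Expression Language..."]),
    InputExample(texts=["How do I add memory?", "Memory can be attached to a chain by..."]),
]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_loader, train_loss)], epochs=1, warmup_steps=10)
model.save("fine-tuned-embedder")
```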
[Fine tuning template] Fine-tune embedding model
https://api.github.com/repos/langchain-ai/langchain/issues/12743/comments
2
2023-11-01T20:57:07Z
2024-02-08T16:10:50Z
https://github.com/langchain-ai/langchain/issues/12743
1,973,100,842
12,743
[ "hwchase17", "langchain" ]
### Feature request

Given a runnable (e.g., a RAG pipeline), automatically run evaluation against a given (public) dataset.

### Motivation

Make evaluation simple by abstracting it behind a template that takes only a runnable. (The template can be called as a RemoteRunnable.) Benchmark your runnable on an existing public LangSmith dataset; details will differ depending on the use case / dataset of choice.

### Your contribution

Add template
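A sketch of the core call such a template might make, via `run_on_dataset` from `langchain.smith`. The dataset name, the `qa` evaluator, and the trivial chain inside `chain_factory` are placeholders; the dataset's example input keys would need to match the runnable's input (here, `question`).

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.smith import RunEvalConfig, run_on_dataset
from langsmith import Client


def chain_factory():
    # Placeholder runnable under test; a real template would accept any
    # runnable, e.g., a RAG pipeline, possibly called as a RemoteRunnable.
    return (
        ChatPromptTemplate.from_template("{question}")
        | ChatOpenAI(temperature=0)
        | StrOutputParser()
    )


run_on_dataset(
    client=Client(),
    dataset_name="some-public-dataset",  # placeholder name
    llm_or_chain_factory=chain_factory,
    evaluation=RunEvalConfig(evaluators=["qa"]),
)
```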
[Evaluation template] Runs evaluation against LangSmith (public) dataset
https://api.github.com/repos/langchain-ai/langchain/issues/12742/comments
1
2023-11-01T20:53:55Z
2024-02-07T16:09:58Z
https://github.com/langchain-ai/langchain/issues/12742
1,973,096,837
12,742
[ "hwchase17", "langchain" ]
### Feature request

Synthetic dataset generation and extraction (to JSON or similar).

### Motivation

Datasets (for eval or fine-tuning) are challenging to generate. Some existing prompts have used LLMs to [generate](https://smith.langchain.com/hub/homanp/question-answer-pair?ref=blog.langchain.dev&organizationId=1fa8b1f4-fcb9-4072-9aa9-983e35ad61b8) them.

### Your contribution

Add template
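LangChain's existing `QAGenerationChain` could anchor this template; a short sketch follows, where the input text is a stand-in for a loaded document.

```python
from langchain.chains import QAGenerationChain
from langchain.chat_models import ChatOpenAI

chain = QAGenerationChain.from_llm(ChatOpenAI(temperature=0))

# Returns a list of {"question": ..., "answer": ...} dicts parsed from the
# model's JSON output; feed it real document text in practice.
qa_pairs = chain.run(
    "LangChain is a framework for developing applications powered by language models. ..."
)
print(qa_pairs)
```

Extraction to JSONL for eval or fine-tuning would then be a simple serialization step on top.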
[Data generation template] Synthetic QA pairs from input document
https://api.github.com/repos/langchain-ai/langchain/issues/12741/comments
1
2023-11-01T20:40:08Z
2024-02-07T16:10:03Z
https://github.com/langchain-ai/langchain/issues/12741
1,973,076,333
12,741
[ "hwchase17", "langchain" ]
### Feature request

Summarization of audio content (podcasts, YouTube videos, lectures) would be useful.

### Motivation

Prior work by [AssemblyAI](https://github.com/AssemblyAI/youtube-tutorials/blob/main/podcast-summary-app/podcasts.py) has focused on this.

### Your contribution

Add template
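One way the template could work, sketched here as an assumption rather than a design decision: transcribe with Whisper (pre-1.0 `openai` client), then summarize the transcript with a map-reduce chain. The file name and chunk size are placeholders.

```python
import openai
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Transcribe the audio file (placeholder name) to text.
with open("episode.mp3", "rb") as f:
    transcript = openai.Audio.transcribe("whisper-1", f)["text"]

# Split long transcripts into chunks and summarize them with map-reduce.
docs = RecursiveCharacterTextSplitter(chunk_size=4000).create_documents([transcript])
chain = load_summarize_chain(ChatOpenAI(temperature=0), chain_type="map_reduce")
print(chain.run(docs))
```

An AssemblyAI-based variant, as in the linked prior work, would swap out the transcription step.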
[Summarization template] Audio
https://api.github.com/repos/langchain-ai/langchain/issues/12740/comments
5
2023-11-01T20:36:18Z
2024-06-18T09:12:29Z
https://github.com/langchain-ai/langchain/issues/12740
1,973,071,476
12,740