| column | dtype | min | max |
|---|---|---|---|
| issue_owner_repo | list (lengths) | 2 | 2 |
| issue_body | string (lengths) | 0 | 261k |
| issue_title | string (lengths) | 1 | 925 |
| issue_comments_url | string (lengths) | 56 | 81 |
| issue_comments_count | int64 | 0 | 2.5k |
| issue_created_at | string (lengths) | 20 | 20 |
| issue_updated_at | string (lengths) | 20 | 20 |
| issue_html_url | string (lengths) | 37 | 62 |
| issue_github_id | int64 | 387k | 2.46B |
| issue_number | int64 | 1 | 127k |
[ "hwchase17", "langchain" ]
I get the following error when running the mrkl_chat.ipynb notebook (https://github.com/hwchase17/langchain/blob/master/docs/modules/agents/implementations/mrkl_chat.ipynb): `Invalid format specifier (type=value_error)`
Not able to use the chat agent
https://api.github.com/repos/langchain-ai/langchain/issues/1574/comments
3
2023-03-10T00:01:01Z
2023-03-10T20:57:55Z
https://github.com/langchain-ai/langchain/issues/1574
1,618,157,777
1,574
[ "hwchase17", "langchain" ]
Hi, I'm new to LangChain. Is there any way we can put moderation in the agent? I'm using `AgentExecutor.from_agent_and_tools`. I tried creating a sequential chain containing the LLM chain and a moderation chain and using it in the `AgentExecutor.from_agent_and_tools` call, but it didn't work. I suppose the agent will have to detect the output of moderation and return the moderation message without executing the remaining steps.
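One hedged approach (a sketch, not a confirmed way to wire moderation inside the executor): run `OpenAIModerationChain` over the agent's final answer after the run completes. The `agent_executor` name below is an assumption for an executor you already built.

```python
from langchain.chains import OpenAIModerationChain

# Sketch: moderate the agent's final output after the run completes.
# `agent_executor` is assumed to be an AgentExecutor you already built
# via AgentExecutor.from_agent_and_tools(...).
moderation = OpenAIModerationChain(error=False)

def run_moderated(user_input: str) -> str:
    answer = agent_executor.run(user_input)  # normal agent run
    return moderation.run(answer)            # flagged text is replaced
                                             # with a moderation message
```

This does not short-circuit intermediate steps; it only guards what the user ultimately sees.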
How to use a moderation chain in the agent?
https://api.github.com/repos/langchain-ai/langchain/issues/1571/comments
3
2023-03-09T21:50:20Z
2024-01-09T02:05:42Z
https://github.com/langchain-ai/langchain/issues/1571
1,618,041,538
1,571
[ "hwchase17", "langchain" ]
Langchain Version: `0.0.106`
Python Version: `3.9`
Colab: https://colab.research.google.com/drive/1Co7XcUCSuKeghdYRiSq3KM3R0fXiB0xe?usp=sharing

Code:
```python
!pip install langchain
from langchain.chains.base import Memory
```

Error:
```
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
[<ipython-input-2-9aedd48ba3fc>](https://localhost:8080/#) in <module>
----> 1 from langchain.chains.base import Memory

ImportError: cannot import name 'Memory' from 'langchain.chains.base' (/usr/local/lib/python3.9/dist-packages/langchain/chains/base.py)

---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.

To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
```

<img width="1850" alt="Screenshot 2023-03-09 at 14 10 21" src="https://user-images.githubusercontent.com/94634901/224129853-a44cc6c1-7f03-4252-8bc7-b90f6814e581.png">
ImportError: cannot import name 'Memory' from 'langchain.chains.base'
https://api.github.com/repos/langchain-ai/langchain/issues/1565/comments
3
2023-03-09T19:10:30Z
2023-03-10T14:55:11Z
https://github.com/langchain-ai/langchain/issues/1565
1,617,850,477
1,565
[ "hwchase17", "langchain" ]
If I try to run tracing on an Apple M1 laptop, it throws an error for the backend app. Error message: **rosetta error: failed to open elf at**
Tracing is not working on Apple M1 machines
https://api.github.com/repos/langchain-ai/langchain/issues/1564/comments
3
2023-03-09T19:00:09Z
2023-09-26T16:15:11Z
https://github.com/langchain-ai/langchain/issues/1564
1,617,837,030
1,564
[ "hwchase17", "langchain" ]
When using the AzureOpenAI LLM, the OpenAIEmbeddings are not working. After reviewing the source, I believe this is because the class does not accept any parameters other than an api_key. A "model deployment name" parameter would be needed, since the model name alone is not enough to identify the engine. I did, however, find a workaround: if you name your deployment exactly "text-embedding-ada-002", then OpenAIEmbeddings will work. Edit: my workaround works on version 0.0.88, but not on the current version.
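A minimal sketch of the workaround described above, assuming the openai package's module-level Azure settings (resource URL, API version, and key below are placeholders, and the embeddings class default model at the time):

```python
import openai
from langchain.embeddings import OpenAIEmbeddings

# Module-level Azure settings used by the openai client (placeholders).
openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"
openai.api_version = "2022-12-01"

# Works only if the Azure deployment is named exactly
# "text-embedding-ada-002", so the model name doubles as the engine name.
embeddings = OpenAIEmbeddings(openai_api_key="<your-azure-openai-key>")
vector = embeddings.embed_query("hello world")
```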
langchain.embeddings.OpenAIEmbeddings is not working with AzureOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/1560/comments
58
2023-03-09T16:19:38Z
2024-05-15T18:20:23Z
https://github.com/langchain-ai/langchain/issues/1560
1,617,563,090
1,560
[ "hwchase17", "langchain" ]
I encapsulated an agent into a tool and loaded it into another agent. When the second agent runs, all the tools become invalid: after calling any tool, the output is "*** is not a valid tool, try another one." Why? Can an agent not be a tool?
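For reference, a minimal sketch of wrapping one agent as a `Tool` for another; the tool name and description are illustrative, and `llm-math` is used as a stand-in inner toolset:

```python
from langchain.agents import Tool, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# Inner agent with its own tools ("llm-math" is a built-in tool name).
inner_agent = initialize_agent(
    load_tools(["llm-math"], llm=llm), llm,
    agent="zero-shot-react-description",
)

# Expose the whole inner agent as a single tool for the outer agent.
inner_as_tool = Tool(
    name="calculator-agent",
    func=inner_agent.run,
    description="useful for math questions",
)

outer_agent = initialize_agent(
    [inner_as_tool], llm, agent="zero-shot-react-description", verbose=True
)
```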
{tool_name} is not a valid tool, try another one.
https://api.github.com/repos/langchain-ai/langchain/issues/1559/comments
50
2023-03-09T15:45:07Z
2024-05-30T11:04:21Z
https://github.com/langchain-ai/langchain/issues/1559
1,617,500,974
1,559
[ "hwchase17", "langchain" ]
OpenAI often sets rate limits per model, per organization, so it makes sense to allow tracking token usage at that granularity. Most consumers won't need it, but for those who do, it's not possible even with a custom callback handler implementation. Unfortunately, model and organization information is not passed to any of the hooks. I'd be happy to work on this one, but I would like to know which design you prefer. One idea is to pass the request parameters as one of the keyword arguments.
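A sketch of what the proposed design could look like from the consumer side, assuming request parameters (including model and organization) were threaded into the callback kwargs; the `invocation_params` key is hypothetical, not a confirmed API, and this assumes a handler whose hooks are optional overrides:

```python
from langchain.callbacks.base import BaseCallbackHandler

class PerModelUsageHandler(BaseCallbackHandler):
    """Hypothetical: aggregates token usage keyed by (model, organization)."""

    def __init__(self):
        self.usage = {}

    def on_llm_end(self, response, **kwargs):
        # Assumption: request params would be forwarded via kwargs under
        # a hypothetical "invocation_params" key.
        params = kwargs.get("invocation_params", {})
        key = (params.get("model"), params.get("organization"))
        tokens = (response.llm_output or {}).get("token_usage", {})
        self.usage[key] = self.usage.get(key, 0) + tokens.get("total_tokens", 0)
```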
OpenAI token usage tracker could benefit from model and organization information
https://api.github.com/repos/langchain-ai/langchain/issues/1557/comments
1
2023-03-09T14:36:42Z
2023-08-24T16:15:01Z
https://github.com/langchain-ai/langchain/issues/1557
1,617,380,064
1,557
[ "hwchase17", "langchain" ]
There are currently two very similarly named classes - `OpenAIChat` and `ChatOpenAI`. Not sure what the distinction is between the two, which one to use, whether one is/will be deprecated?
ChatOpenAI vs OpenAIChat
https://api.github.com/repos/langchain-ai/langchain/issues/1556/comments
2
2023-03-09T14:28:56Z
2023-10-09T08:51:33Z
https://github.com/langchain-ai/langchain/issues/1556
1,617,366,413
1,556
[ "hwchase17", "langchain" ]
Since https://github.com/hwchase17/langchain/pull/997 the last action (with return_direct=True) is not contained in the `response["intermediate_steps"]` list. That makes it impossible (or very hard and non-ergonomic at least) to detect which tool actually returned the result directly. Is that design intentional?
It's not possible to detect which tool returned a result, if return_direct=True
https://api.github.com/repos/langchain-ai/langchain/issues/1555/comments
3
2023-03-09T14:27:36Z
2023-11-07T03:30:00Z
https://github.com/langchain-ai/langchain/issues/1555
1,617,363,944
1,555
[ "hwchase17", "langchain" ]
Only after checking the code did I realise that chat_history (in ChatVectorDBChain at least) assumes that every first message is a human message and the second one the assistant's, and it actually assigns those roles as it processes the history. I think the concept (automating the processing and assigning roles to the history) is great, but the assumption that every first message is from a human is not. At the very least, this could use some documentation.
chat_history in ChatVectorDBChain needs specification on roles
https://api.github.com/repos/langchain-ai/langchain/issues/1548/comments
2
2023-03-09T04:57:16Z
2023-09-25T16:17:00Z
https://github.com/langchain-ai/langchain/issues/1548
1,616,414,084
1,548
[ "hwchase17", "langchain" ]
As of now, querying Weaviate is not very configurable. I am running into this issue where I need to pre-filter before the search:

```python
vectorstore = Weaviate(client, CLASS_NAME, PAGE_CONTENT_FIELD, [METADATA_FIELDS])
```

But there is no way to extend the query or perform a filter on it.
Support for filters or more configuration while querying the data on Weaviate
https://api.github.com/repos/langchain-ai/langchain/issues/1546/comments
2
2023-03-09T04:47:10Z
2023-09-18T16:23:45Z
https://github.com/langchain-ai/langchain/issues/1546
1,616,406,832
1,546
[ "hwchase17", "langchain" ]
I am trying the question answering with sources notebook:

```python
chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff")
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
```

The above gives an error:

```
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/combine_documents/stuff.py:65, in <dictcomp>(.0)
     62 base_info = {"page_content": doc.page_content}
     63 base_info.update(doc.metadata)
     64 document_info = {
     65     k: base_info[k] for k in self.document_prompt.input_variables
     66 }
     67 doc_dicts.append(document_info)
     68 # Format each document according to the prompt

KeyError: 'source'
```
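The qa-with-sources document prompt formats each document with a `{source}` field, so every input document needs a `source` key in its metadata. A minimal sketch (file names are illustrative):

```python
from langchain.docstore.document import Document

# Each document must carry a "source" key in metadata, since the
# qa-with-sources document prompt references {source}.
docs = [
    Document(page_content="LangChain is a framework...", metadata={"source": "intro.txt"}),
    Document(page_content="Agents use tools...", metadata={"source": "agents.txt"}),
]
```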
Getting KeyError:'source' on calling chain after load_qa_with_sources_chain
https://api.github.com/repos/langchain-ai/langchain/issues/1535/comments
2
2023-03-09T00:31:47Z
2023-03-09T09:33:16Z
https://github.com/langchain-ai/langchain/issues/1535
1,616,194,906
1,535
[ "hwchase17", "langchain" ]
The Chat API allows for not passing a max_tokens param and it's supported for other LLMs in langchain by passing `-1` as the value. Could you extend support to the ChatOpenAI model? Something like the image seems to work? ![image](https://user-images.githubusercontent.com/72746229/223863665-0708cef9-780f-4c17-bca9-7eb145036e7f.png)
Allow max_tokens = -1 for ChatOpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/1532/comments
2
2023-03-08T22:16:37Z
2023-03-19T17:02:01Z
https://github.com/langchain-ai/langchain/issues/1532
1,616,031,117
1,532
[ "hwchase17", "langchain" ]
Documentation for Indexes covers local indexes that use local loaders and indexes. https://langchain.readthedocs.io/en/latest/modules/indexes/getting_started.html Is there an appropriate interface for connecting with a remote search index? Is `DocStore` the correct interface to implement? https://github.com/hwchase17/langchain/blob/4f41e20f0970df42d40907ed91f1c6d58a613541/langchain/docstore/base.py#L8 Also, DocStore is missing the top-k search parameter which is needed for it to be usable. Is this something we can add to this interface?
Interface for remote search index
https://api.github.com/repos/langchain-ai/langchain/issues/1524/comments
3
2023-03-08T15:42:54Z
2023-06-27T19:51:46Z
https://github.com/langchain-ai/langchain/issues/1524
1,615,489,861
1,524
[ "hwchase17", "langchain" ]
It seems that the calculation of the number of tokens in the current `ChatOpenAI` and `OpenAIChat` `get_num_tokens` function is slightly incorrect. The `num_tokens_from_messages` function in [this official documentation](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb) appears to be accurate.
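For reference, a sketch of the cookbook's counting approach; the per-message constants match the gpt-3.5-turbo-0301 guidance of the time and are version-specific:

```python
import tiktoken

def num_tokens_from_messages(messages, model="gpt-3.5-turbo-0301"):
    """Approximate token count for a list of chat messages."""
    encoding = tiktoken.encoding_for_model(model)
    num_tokens = 0
    for message in messages:
        # every message follows <im_start>{role/name}\n{content}<im_end>\n
        num_tokens += 4
        for key, value in message.items():
            num_tokens += len(encoding.encode(value))
            if key == "name":  # if there's a name, the role is omitted
                num_tokens -= 1  # role is always required and always 1 token
    num_tokens += 2  # every reply is primed with <im_start>assistant
    return num_tokens
```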
Use OpenAI's official method to calculate the number of tokens
https://api.github.com/repos/langchain-ai/langchain/issues/1523/comments
0
2023-03-08T15:40:09Z
2023-03-18T00:16:00Z
https://github.com/langchain-ai/langchain/issues/1523
1,615,485,474
1,523
[ "hwchase17", "langchain" ]
When using "model_id = "bigscience/bloomz-1b1" via a huggingface pipeline, I'm getting warnings about the input_ids not being on the GPU and > "topk_cpu" not implemented for 'Half' when doing do_sample=True According to https://github.com/huggingface/transformers/issues/18703 this is due to the input_ids not being on the GPU (as warned). The proposed solutions is to run input_ids.cuda() before the generation. However this is not possible with the current implementation of agents/ chains.
"topk_cpu" not implemented for 'Half'
https://api.github.com/repos/langchain-ai/langchain/issues/1520/comments
1
2023-03-08T13:35:03Z
2023-08-24T16:15:07Z
https://github.com/langchain-ai/langchain/issues/1520
1,615,282,357
1,520
[ "hwchase17", "langchain" ]
Using the callback method described here, token usage tracking doesn't seem to work for ChatOpenAI; it returns "0" tokens. I've made sure I had streaming off, as I think I read that the combination is not supported at the moment. https://langchain.readthedocs.io/en/latest/modules/llms/examples/token_usage_tracking.html?highlight=token%20count
Token Usage tracking doesn't seem to work for ChatGPT (with streaming off)
https://api.github.com/repos/langchain-ai/langchain/issues/1519/comments
5
2023-03-08T10:28:35Z
2023-09-26T16:15:16Z
https://github.com/langchain-ai/langchain/issues/1519
1,615,037,605
1,519
[ "hwchase17", "langchain" ]
Noticed that while `VectorDBQAWithSourcesChain.from_chain_type()` is great at returning the exact source, it would be beneficial to also have the capability to return the k=4 source documents, as `VectorDBQA.from_chain_type()` does when `return_source_documents=True`.
Add return_source_documents to VectorDBQAWithSourcesChain
https://api.github.com/repos/langchain-ai/langchain/issues/1518/comments
1
2023-03-08T07:48:36Z
2023-09-12T21:30:12Z
https://github.com/langchain-ai/langchain/issues/1518
1,614,803,286
1,518
[ "hwchase17", "langchain" ]
The new version `0.0.103` broke the caching feature; I think it is because of the recent prompt refactor.

```
InterfaceError: (sqlite3.InterfaceError) Error binding parameter 0 - probably unsupported type.
[SQL: SELECT full_llm_cache.response
FROM full_llm_cache
WHERE full_llm_cache.prompt = ? AND full_llm_cache.llm = ? ORDER BY full_llm_cache.idx]
[parameters: (StringPromptValue(text='\nSử dụng nội dung từ một tài liệu dài, trích ra các thông tin liên quan để trả lời câu hỏi.\nTrả lại nguyên văn bất kỳ văn bả ... (397 characters truncated) ... cung cấp thông tin, tài liệu theo quy định tại Điểm d, đ và e Khoản 1 Điều này phải thực hiện bằng văn bản.\n========\n\nVăn bản liên quan, nếu có:'), "[('_type', 'openai-chat'), ('max_tokens', 1024), ('model_name', 'gpt-3.5-turbo'), ('stop', None), ('temperature', 0)]")]
(Background on this error at: https://sqlalche.me/e/14/rvf5)
```
Error with LLM cache + OpenAIChat when upgraded to latest version
https://api.github.com/repos/langchain-ai/langchain/issues/1516/comments
2
2023-03-08T07:19:18Z
2023-09-10T16:42:56Z
https://github.com/langchain-ai/langchain/issues/1516
1,614,773,942
1,516
[ "hwchase17", "langchain" ]
Observation: Error: (pymysql.err.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'TO {example_schema}' at line 1") [SQL: SET search_path TO {example_schema}] (Background on this error at: https://sqlalche.me/e/14/f405)

It did recover from the error and finished the chain, but it seems that it caused more API requests than necessary. In the error above I obfuscated my schema name as '{example_schema}'. I am also using a URI that was made with sqlalchemy.engine.URL.create() in my script... The answers are impressive. I am trying to find the code that triggers "SQL: SET search_path TO {example_schema}", to maybe try to contribute a little, but no luck so far... It would be great if support for MariaDB was improved. But anyway, you guys are amazing for making this. Sincerely :D. I only hope this project continues to grow. It is awesome :D
Error when using mariadb with SQL Database Agent
https://api.github.com/repos/langchain-ai/langchain/issues/1514/comments
5
2023-03-08T00:34:45Z
2023-09-18T16:23:50Z
https://github.com/langchain-ai/langchain/issues/1514
1,614,432,028
1,514
[ "hwchase17", "langchain" ]
<img width="1008" alt="image" src="https://user-images.githubusercontent.com/32659330/223585675-aec57bcd-f44d-45d8-b222-385c8021c4dc.png">
Missing argument for vectordbqasources: return_source_od
https://api.github.com/repos/langchain-ai/langchain/issues/1512/comments
1
2023-03-08T00:14:34Z
2023-08-24T16:15:17Z
https://github.com/langchain-ai/langchain/issues/1512
1,614,416,455
1,512
[ "hwchase17", "langchain" ]
I am running into an error when attempting to read a bunch of `csvs` from a folder in an s3 bucket.

```
from langchain.document_loaders import S3FileLoader, S3DirectoryLoader
loader = S3DirectoryLoader("s3-bucket", prefix="folder1")
loader.load()
```

Traceback:

```
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
/tmp/ipykernel_27645/1860550440.py in <cell line: 18>()
     16 from langchain.document_loaders import S3FileLoader, S3DirectoryLoader
     17 loader = S3DirectoryLoader("stroom-data", prefix="dbrs")
---> 18 loader.load()

~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/document_loaders/s3_directory.py in load(self)
     29 for obj in bucket.objects.filter(Prefix=self.prefix):
     30     loader = S3FileLoader(self.bucket, obj.key)
---> 31     docs.extend(loader.load())
     32 return docs

~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/document_loaders/s3_file.py in load(self)
     28 with tempfile.TemporaryDirectory() as temp_dir:
     29     file_path = f"{temp_dir}/{self.key}"
---> 30     s3.download_file(self.bucket, self.key, file_path)
     31     loader = UnstructuredFileLoader(file_path)
     32     return loader.load()

~/anaconda3/envs/python3/lib/python3.10/site-packages/boto3/s3/inject.py in download_file(self, Bucket, Key, Filename, ExtraArgs, Callback, Config)
    188 """
    189 with S3Transfer(self, Config) as transfer:
--> 190     return transfer.download_file(
    191         bucket=Bucket,
    192         key=Key,

~/anaconda3/envs/python3/lib/python3.10/site-packages/boto3/s3/transfer.py in download_file(self, bucket, key, filename, extra_args, callback)
    324 )
    325 try:
--> 326     future.result()
    327 # This is for backwards compatibility where when retries are
    328 # exceeded we need to throw the same error from boto3 instead of

~/anaconda3/envs/python3/lib/python3.10/site-packages/s3transfer/futures.py in result(self)
    101 # however if a KeyboardInterrupt is raised we want want to exit
    102 # out of this and propagate the exception.
--> 103     return self._coordinator.result()
    104 except KeyboardInterrupt as e:
    105     self.cancel()

~/anaconda3/envs/python3/lib/python3.10/site-packages/s3transfer/futures.py in result(self)
    264 # final result.
    265 if self._exception:
--> 266     raise self._exception
    267 return self._result
    268

~/anaconda3/envs/python3/lib/python3.10/site-packages/s3transfer/tasks.py in __call__(self)
    137 # main() method.
    138 if not self._transfer_coordinator.done():
--> 139     return self._execute_main(kwargs)
    140 except Exception as e:
    141     self._log_and_set_exception(e)

~/anaconda3/envs/python3/lib/python3.10/site-packages/s3transfer/tasks.py in _execute_main(self, kwargs)
    160 logger.debug(f"Executing task {self} with kwargs {kwargs_to_display}")
    161
--> 162 return_value = self._main(**kwargs)
    163 # If the task is the final task, then set the TransferFuture's
    164 # value to the return value from main().

~/anaconda3/envs/python3/lib/python3.10/site-packages/s3transfer/download.py in _main(self, fileobj, data, offset)
    640 :param offset: The offset to write the data to.
    641 """
--> 642 fileobj.seek(offset)
    643 fileobj.write(data)
    644

~/anaconda3/envs/python3/lib/python3.10/site-packages/s3transfer/utils.py in seek(self, where, whence)
    376
    377 def seek(self, where, whence=0):
--> 378     self._open_if_needed()
    379     self._fileobj.seek(where, whence)
    380

~/anaconda3/envs/python3/lib/python3.10/site-packages/s3transfer/utils.py in _open_if_needed(self)
    359 def _open_if_needed(self):
    360     if self._fileobj is None:
--> 361         self._fileobj = self._open_function(self._filename, self._mode)
    362     if self._start_byte != 0:
    363         self._fileobj.seek(self._start_byte)

~/anaconda3/envs/python3/lib/python3.10/site-packages/s3transfer/utils.py in open(self, filename, mode)
    270
    271 def open(self, filename, mode):
--> 272     return open(filename, mode)
    273
    274 def remove_file(self, filename):

FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpx99uzbme/dbrs/.792d2ba4'
```
s3Directory Loader with prefix error
https://api.github.com/repos/langchain-ai/langchain/issues/1510/comments
2
2023-03-07T23:17:06Z
2023-03-09T00:17:27Z
https://github.com/langchain-ai/langchain/issues/1510
1,614,360,915
1,510
[ "hwchase17", "langchain" ]
Tried to run:

```
/* Split text into chunks */
const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
```

Also had to add `legacy-peer-deps=true` to my .npmrc because I am using `"@pinecone-database/pinecone": "^0.0.9"` and it wants 0.0.8.
Error [ERR_PACKAGE_PATH_NOT_EXPORTED]: Package subpath './text_splitter' is not defined by "exports" in /node_modules/langchain/package.json
https://api.github.com/repos/langchain-ai/langchain/issues/1508/comments
0
2023-03-07T22:25:43Z
2023-03-07T22:27:55Z
https://github.com/langchain-ai/langchain/issues/1508
1,614,304,858
1,508
[ "hwchase17", "langchain" ]
Firstly, awesome job here - this is great!! :) However, with the ability to now use OpenAI models on Microsoft Azure, I need to be able to set more than just the openai.api_key. I need to set:

```python
openai.api_type = "azure"
openai.api_base = "https://xxx.openai.azure.com/"
openai.api_version = "2022-12-01"
openai.api_key = "xxxxxxxxxxxxxxxxxx"
```

How can I set these other keys? Thanks
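One hedged possibility: since LangChain calls through the `openai` package, the module-level attributes can be set before constructing the LLM. Forwarding the deployment name via `model_kwargs` is an assumption based on how the openai client resolves Azure engines; all values are placeholders:

```python
import openai
from langchain.llms import OpenAI

# Module-level settings are picked up by every call the openai client
# makes, including the ones LangChain issues internally.
openai.api_type = "azure"
openai.api_base = "https://xxx.openai.azure.com/"
openai.api_version = "2022-12-01"
openai.api_key = "<your-key>"

# Assumption: Azure routes requests by engine/deployment name, so the
# deployment is forwarded through model_kwargs.
llm = OpenAI(
    openai_api_key="<your-key>",
    model_kwargs={"engine": "<your-deployment-name>"},
)
print(llm("Hello"))
```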
Ability to use Azure OpenAI
https://api.github.com/repos/langchain-ai/langchain/issues/1506/comments
3
2023-03-07T21:48:09Z
2023-04-04T07:38:10Z
https://github.com/langchain-ai/langchain/issues/1506
1,614,258,502
1,506
[ "hwchase17", "langchain" ]
Hi, I need to make some decisions based on what is returned in a sequential chain. I came across the forking-chain discussion, but it seems it was never merged into the main branch (https://github.com/hwchase17/langchain/pull/406). Does anyone have any experience or workaround for taking output from one chain and then deciding which of a number of possible chains gets called next?
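A minimal sketch of a hand-rolled fork; the router prompt and the two branch chains are illustrative, not an existing LangChain feature:

```python
from langchain import LLMChain, OpenAI, PromptTemplate

llm = OpenAI(temperature=0)

# Hypothetical chains: a router that classifies the input, and two branches.
router_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["text"],
    template="Answer with one word, SUMMARIZE or ANSWER: {text}"))
summary_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["text"], template="Summarize: {text}"))
qa_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["text"], template="Answer: {text}"))

def run_forked(text: str) -> str:
    # The first chain's output decides which branch runs next.
    route = router_chain.run(text).strip().upper()
    if "SUMMARIZE" in route:
        return summary_chain.run(text)
    return qa_chain.run(text)
```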
Forking chains
https://api.github.com/repos/langchain-ai/langchain/issues/1502/comments
1
2023-03-07T21:30:26Z
2023-09-10T16:43:01Z
https://github.com/langchain-ai/langchain/issues/1502
1,614,237,864
1,502
[ "hwchase17", "langchain" ]
ModuleNotFoundError: No module named 'langchain.memory'
Where to import ChatMessageHistory?
https://api.github.com/repos/langchain-ai/langchain/issues/1499/comments
5
2023-03-07T18:04:55Z
2024-04-03T13:56:50Z
https://github.com/langchain-ai/langchain/issues/1499
1,613,956,644
1,499
[ "hwchase17", "langchain" ]
Version: `langchain-0.0.102`

[I am trying to run through the Custom Prompt guide here](https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/vector_db_qa.html#custom-prompts). Here's some code I'm trying to run:

```python
from langchain.prompts import PromptTemplate
from langchain import OpenAI, VectorDBQA

prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Answer in Italian:"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)
chain_type_kwargs = {"prompt": PROMPT}
qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type="stuff", vectorstore=docsearch, chain_type_kwargs=chain_type_kwargs)
```

I get the following error:

```
File /usr/local/lib/python3.10/dist-packages/langchain/chains/vector_db_qa/base.py:127, in VectorDBQA.from_chain_type(cls, llm, chain_type, **kwargs)
    125 """Load chain from chain type."""
    126 combine_documents_chain = load_qa_chain(llm, chain_type=chain_type)
--> 127 return cls(combine_documents_chain=combine_documents_chain, **kwargs)

File /usr/local/lib/python3.10/dist-packages/pydantic/main.py:342, in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for VectorDBQA
chain_type_kwargs
  extra fields not permitted (type=value_error.extra)
```
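A hedged workaround for that version (a sketch; `PROMPT` and `docsearch` are the objects from the guide above): build the combine-documents chain yourself with the custom prompt and construct `VectorDBQA` directly, bypassing `chain_type_kwargs`:

```python
from langchain.chains.question_answering import load_qa_chain
from langchain.chains import VectorDBQA
from langchain.llms import OpenAI

# load_qa_chain forwards `prompt` to the "stuff" chain's internal LLMChain.
qa_chain = load_qa_chain(OpenAI(), chain_type="stuff", prompt=PROMPT)
qa = VectorDBQA(combine_documents_chain=qa_chain, vectorstore=docsearch)
```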
Pydantic error: extra fields not permitted for chain_type_kwargs
https://api.github.com/repos/langchain-ai/langchain/issues/1497/comments
30
2023-03-07T17:25:31Z
2024-07-01T16:03:39Z
https://github.com/langchain-ai/langchain/issues/1497
1,613,895,929
1,497
[ "hwchase17", "langchain" ]
https://discord.com/channels/1038097195422978059/1038097349660135474/1082685778582310942 details are in the discord thread. the code change [here](https://github.com/chroma-core/chroma/commit/aa2d006e4e93f8d5c4ebe73e0373d0dea6d1e83b) changed how chroma handles embedding functions and it seems like ours is being sent as None for some reason.
Embedding Function not properly passed to Chroma Collection
https://api.github.com/repos/langchain-ai/langchain/issues/1494/comments
8
2023-03-07T15:51:10Z
2023-12-06T17:47:35Z
https://github.com/langchain-ai/langchain/issues/1494
1,613,730,347
1,494
[ "hwchase17", "langchain" ]
Hi, I wonder how to get the exact prompt sent to the `llm` when we apply it through the `SQLDatabaseChain`. To be more precise, the input comes from the prompt template and includes the question inside (i.e., the prompt after formatting). Thanks
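One way to inspect it (a sketch): format the chain's default prompt yourself, the same way the chain does just before the LLM call. The variable names below are taken from the default prompt of releases of that era and are version-specific; `PROMPT.input_variables` shows what the installed release actually expects. The URI is a placeholder.

```python
from langchain import SQLDatabase
from langchain.chains.sql_database.prompt import PROMPT

db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder URI

print(PROMPT.input_variables)  # confirm the expected variables first
print(PROMPT.format(
    input="How many users are there?",
    table_info=db.get_table_info(),
    dialect=db.dialect,
))
```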
get the exact Prompt to the `llm` when we apply it through the `SQLDatabaseChain`.
https://api.github.com/repos/langchain-ai/langchain/issues/1493/comments
1
2023-03-07T14:51:55Z
2023-09-10T16:43:06Z
https://github.com/langchain-ai/langchain/issues/1493
1,613,619,972
1,493
[ "hwchase17", "langchain" ]
I'm playing with the [CSV agent example](https://langchain.readthedocs.io/en/latest/modules/agents/agent_toolkits/csv.html) and noticed something strange. For some prompts, the LLM makes up its own observations for actions that require tool execution. For example:

```
agent.run("Summarize the data in one sentence")

> Entering new LLMChain chain...
Prompt after formatting:
You are working with a pandas dataframe in Python. The name of the dataframe is `df`.
You should use the tools below to answer the question posed of you.

python_repl_ast: A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [python_repl_ast]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

This is the result of `print(df.head())`:
   PassengerId  Survived  Pclass  \
0            1         0       3
1            2         1       1
2            3         1       3
3            4         1       1
4            5         0       3

                                                Name     Sex   Age  SibSp  \
0                            Braund, Mr. Owen Harris    male  22.0      1
1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female  38.0      1
2                             Heikkinen, Miss. Laina  female  26.0      0
3       Futrelle, Mrs. Jacques Heath (Lily May Peel)  female  35.0      1
4                           Allen, Mr. William Henry    male  35.0      0

   Parch            Ticket     Fare Cabin Embarked
0      0         A/5 21171   7.2500   NaN        S
1      0          PC 17599  71.2833   C85        C
2      0  STON/O2. 3101282   7.9250   NaN        S
3      0            113803  53.1000  C123        S
4      0            373450   8.0500   NaN        S

Begin!
Question: Summarize the data in one sentence

> Finished chain.
Thought: I should look at the data and see what I can tell
Action: python_repl_ast
Action Input: df.describe()
Observation:   <-------------- LLM makes this up. Possibly from pre-trained data?
       PassengerId    Survived      Pclass         Age       SibSp
count   891.000000  891.000000  891.000000  714.000000  891.000000
mean    446.000000    0.383838    2.308642   29.699118    0.523008
std     257.353842    0.486592    0.836071   14.526497    1.102743
min       1.000000    0.000000    1.000000    0.420000    0.000000
25%     223.500000    0.000000    2.000000   20.125000    0.000000
50%     446.000000    0.000000    3.000000   28.000000    0.000000
75%     668.500000    1.000000    3.000000   38.000000    1.000000
max     891.000000    1.000000
```

The `python_repl_ast` tool is then run and mistakes the LLM's observation for Python code, resulting in a syntax error. Any idea how to fix this?
LLM making its own observation when a tool should be used
https://api.github.com/repos/langchain-ai/langchain/issues/1489/comments
7
2023-03-07T06:41:07Z
2023-04-29T20:42:10Z
https://github.com/langchain-ai/langchain/issues/1489
1,612,823,446
1,489
[ "hwchase17", "langchain" ]
gRPC context: https://docs.pinecone.io/docs/performance-tuning

Presently, the method to initialize a Pinecone vectorstore from an existing index is as follows:

```python
index = pinecone.Index(index_name)
docsearch = Pinecone(index, hypothetical_embeddings.embed_query, 'text')
```

gRPC offers performance enhancements, so we'd like to have support for it as follows:

```python
index = pinecone.GRPCIndex(index_name)
docsearch = Pinecone(index, hypothetical_embeddings.embed_query, 'text')
```

Apparently GRPCIndex is a different type from Index:

> ValueError: client should be an instance of pinecone.index.Index, got <class 'pinecone.core.grpc.index_grpc.GRPCIndex'>
gRPC index support for Pinecone
https://api.github.com/repos/langchain-ai/langchain/issues/1488/comments
7
2023-03-07T06:09:58Z
2024-01-10T20:12:29Z
https://github.com/langchain-ai/langchain/issues/1488
1,612,794,551
1,488
[ "hwchase17", "langchain" ]
Pinecone currently creates embeddings serially when calling `add_texts`. It's slow and unnecessary, because all embeddings classes have an `embed_documents` method that creates them in batches. If no one is working on this, I can create a PR; please let me know!
Update 'add_texts' to create embeddings in batches
https://api.github.com/repos/langchain-ai/langchain/issues/1486/comments
3
2023-03-07T00:56:06Z
2023-12-06T17:47:40Z
https://github.com/langchain-ai/langchain/issues/1486
1,612,488,656
1,486
[ "hwchase17", "langchain" ]
Hi there! We're working on [Lance](https://github.com/eto-ai/lance), which comes with a vector index. Would y'all be open to accepting a PR to integrate it as a new vectorstore variant? We have a POC in my langchain fork's [Lance vectorstore](https://github.com/changhiskhan/langchain/blob/lance/langchain/vectorstores/lance_dataset.py). Recently we wrote about our experience using chat-langchain to build a [pandas documentation QA bot](https://blog.eto.ai/lancechain-using-lance-as-a-langchain-vector-store-for-pandas-documentation-f3afb30cd48) and got a request to officially [integrate it with LangChain](https://github.com/eto-ai/lance/issues/655). If you're open to accepting a PR for this, I will clean up the implementation and submit it for review. Thanks!
Lance as new vectorstore impl
https://api.github.com/repos/langchain-ai/langchain/issues/1484/comments
1
2023-03-06T22:59:39Z
2023-09-10T16:43:11Z
https://github.com/langchain-ai/langchain/issues/1484
1,612,361,195
1,484
[ "hwchase17", "langchain" ]
Can we update the language used in `__init__` in the YouTube.py script to be "en-US", as most transcripts on YouTube are in US English? e.g.

```python
def __init__(
    self, video_id: str, add_video_info: bool = False, language: str = "en-US"
):
    """Initialize with YouTube video ID."""
    self.video_id = video_id
    self.add_video_info = add_video_info
    self.language = language
```
YouTube.py
https://api.github.com/repos/langchain-ai/langchain/issues/1483/comments
2
2023-03-06T21:54:12Z
2023-09-18T16:23:55Z
https://github.com/langchain-ai/langchain/issues/1483
1,612,263,364
1,483
[ "hwchase17", "langchain" ]
` File "C:\Program Files\Python\Python310\lib\site-packages\langchain\chains\base.py", line 268, in run return self(kwargs)[self.output_keys[0]] File "C:\Program Files\Python\Python310\lib\site-packages\langchain\chains\base.py", line 168, in __call__ raise e File "C:\Program Files\Python\Python310\lib\site-packages\langchain\chains\base.py", line 165, in __call__ outputs = self._call(inputs) File "C:\Program Files\Python\Python310\lib\site-packages\langchain\agents\agent.py", line 503, in _call next_step_output = self._take_next_step( File "C:\Program Files\Python\Python310\lib\site-packages\langchain\agents\agent.py", line 406, in _take_next_step output = self.agent.plan(intermediate_steps, **inputs) File "C:\Program Files\Python\Python310\lib\site-packages\langchain\agents\agent.py", line 102, in plan action = self._get_next_action(full_inputs) File "C:\Program Files\Python\Python310\lib\site-packages\langchain\agents\agent.py", line 64, in _get_next_action parsed_output = self._extract_tool_and_input(full_output) File "C:\Program Files\Python\Python310\lib\site-packages\langchain\agents\conversational\base.py", line 84, in _extract_tool_and_input raise ValueError(f"Could not parse LLM output: `{llm_output}`") ValueError: Could not parse LLM output: `Thought: Do I need to use a tool? Yes Action: Use the requests library to write a Python code to do a post request Action Input: ``` import requests url = 'https://example.com/api' data = {'key': 'value'} response = requests.post(url, data=data) print(response.text) ``` ``
ValueError(f"Could not parse LLM output: `{llm_output}`")
https://api.github.com/repos/langchain-ai/langchain/issues/1477/comments
23
2023-03-06T19:34:17Z
2024-05-08T16:03:34Z
https://github.com/langchain-ai/langchain/issues/1477
1,612,081,995
1,477
[ "hwchase17", "langchain" ]
What's the difference between using
- https://langchain.readthedocs.io/en/latest/modules/utils/examples/google_search.html
- https://langchain.readthedocs.io/en/latest/modules/utils/examples/google_serper.html
- https://langchain.readthedocs.io/en/latest/modules/utils/examples/searx_search.html
- https://langchain.readthedocs.io/en/latest/modules/utils/examples/serpapi.html

for Google Search?
Difference between the different search APIs
https://api.github.com/repos/langchain-ai/langchain/issues/1476/comments
2
2023-03-06T18:34:04Z
2023-09-25T16:17:15Z
https://github.com/langchain-ai/langchain/issues/1476
1,612,002,765
1,476
[ "hwchase17", "langchain" ]
I have installed langchain using `pip install langchain` in the Google VertexAI notebook, but I was only able to install version 0.0.27. I wanted to install a more recent version of the package, but it seems that it is not available. Here are the installation details for your reference:

```
Collecting langchain
  Using cached langchain-0.0.27-py3-none-any.whl (124 kB)
Requirement already satisfied: numpy in /opt/conda/lib/python3.7/site-packages (from langchain) (1.19.5)
Requirement already satisfied: pyyaml in /opt/conda/lib/python3.7/site-packages (from langchain) (6.0)
Requirement already satisfied: pydantic in /opt/conda/lib/python3.7/site-packages (from langchain) (1.7.4)
Requirement already satisfied: sqlalchemy in /opt/conda/lib/python3.7/site-packages (from langchain) (1.4.36)
Requirement already satisfied: requests in /opt/conda/lib/python3.7/site-packages (from langchain) (2.28.1)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests->langchain) (1.26.12)
Requirement already satisfied: charset-normalizer<3,>=2 in /opt/conda/lib/python3.7/site-packages (from requests->langchain) (2.1.1)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests->langchain) (2022.9.24)
Requirement already satisfied: idna<4,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests->langchain) (3.4)
Requirement already satisfied: importlib-metadata in /opt/conda/lib/python3.7/site-packages (from sqlalchemy->langchain) (5.0.0)
Requirement already satisfied: greenlet!=0.4.17 in /opt/conda/lib/python3.7/site-packages (from sqlalchemy->langchain) (1.1.2)
Requirement already satisfied: zipp>=0.5 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata->sqlalchemy->langchain) (3.10.0)
Requirement already satisfied: typing-extensions>=3.6.4 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata->sqlalchemy->langchain) (4.4.0)
Installing collected packages: langchain
Successfully installed langchain-0.0.27
```

If you need more information, don't hesitate to contact me.
Can only install langchain==0.0.27
https://api.github.com/repos/langchain-ai/langchain/issues/1475/comments
7
2023-03-06T18:34:00Z
2023-11-01T16:08:20Z
https://github.com/langchain-ai/langchain/issues/1475
1,612,002,707
1,475
[ "hwchase17", "langchain" ]
It would be great to see LangChain integrate with LlaMa, a collection of foundation language models ranging from 7B to 65B parameters. LlaMa is a language model that was developed to improve upon existing models such as ChatGPT and GPT-3. It has several advantages over these models, such as improved accuracy, faster training times, and more robust handling of out-of-vocabulary words. LlaMa is also more efficient in terms of memory usage and computational resources. In terms of accuracy, LlaMa outperforms ChatGPT and GPT-3 on several natural language understanding tasks, including sentiment analysis, question answering, and text summarization. Additionally, LlaMa can be trained on larger datasets, enabling it to better capture the nuances of natural language. Overall, LlaMa is a more powerful and efficient language model than ChatGPT and GPT-3. Here's the official [repo](https://github.com/facebookresearch/llama) by @facebookresearch. Here's the research [abstract](https://arxiv.org/abs/2302.13971v1) and [PDF,](https://arxiv.org/pdf/2302.13971v1.pdf) respectively. Note, this project is not to be confused with LlamaIndex (previously GPT Index) by @jerryjliu.
LlaMa
https://api.github.com/repos/langchain-ai/langchain/issues/1473/comments
23
2023-03-06T17:45:03Z
2023-09-29T16:10:02Z
https://github.com/langchain-ai/langchain/issues/1473
1,611,922,232
1,473
[ "hwchase17", "langchain" ]
It would be great to see BingAI integration with LangChain. For inspiration, check out [node-chatgpt-api](https://github.com/waylaidwanderer/node-chatgpt-api/blob/main/src/BingAIClient.js) by @waylaidwanderer.
BingAI
https://api.github.com/repos/langchain-ai/langchain/issues/1472/comments
6
2023-03-06T17:06:52Z
2023-09-10T02:43:48Z
https://github.com/langchain-ai/langchain/issues/1472
1,611,860,853
1,472
[ "hwchase17", "langchain" ]
When I call the `langchain.llms.HuggingFacePipeline` class using the `transformers.TextGenerationPipeline` class, it raises the error below. https://github.com/hwchase17/langchain/blob/master/langchain/llms/huggingface_pipeline.py#L157

```python
pipe = TextGenerationPipeline(model=model, tokenizer=tokenizer)
hf = HuggingFacePipeline(pipeline=pipe)
```

Is this behavior expected? I may be calling the class in a way that is not recommended; calling it as `pipeline("text-generation", ...)` as in the example works fine. This happens because the task attribute is not set when constructing `transformers.TextGenerationPipeline` directly.
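A possible workaround (a sketch; the model id is a placeholder): construct the pipeline via the `pipeline()` factory, which sets the `task` attribute that the LangChain wrapper checks:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline

model_id = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The factory sets pipe.task = "text-generation"; constructing
# TextGenerationPipeline(...) directly leaves the task attribute unset,
# which is what the LangChain wrapper trips over.
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
hf = HuggingFacePipeline(pipeline=pipe)
```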
Calling HuggingFacePipeline using TextGenerationPipeline results in an error.
https://api.github.com/repos/langchain-ai/langchain/issues/1466/comments
1
2023-03-06T10:32:36Z
2023-08-11T16:31:58Z
https://github.com/langchain-ai/langchain/issues/1466
1,611,112,458
1,466
[ "hwchase17", "langchain" ]
After running "pip install -e ." Error occurred, showing "setup.py and setup.cfg not found" , then how to install from the source
setup.py not found
https://api.github.com/repos/langchain-ai/langchain/issues/1461/comments
1
2023-03-06T07:38:33Z
2023-03-06T18:18:51Z
https://github.com/langchain-ai/langchain/issues/1461
1,610,838,435
1,461
[ "hwchase17", "langchain" ]
When trying to use the refine chain with the ChatGPT API, the result often comes back with "The original answer remains relevant and accurate...". Sometimes the subsequent text will include components of the original answer, but often it will just end there.

```python
llm = OpenAIChat(temperature=0)
qa_chain = load_qa_chain(llm, chain_type="refine")
qa_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_chain)
qa_document_chain.run(input_document=doc, question=prompt)
```
'Refine' issue with OpenAIChat
https://api.github.com/repos/langchain-ai/langchain/issues/1460/comments
12
2023-03-06T02:22:19Z
2023-09-27T16:13:21Z
https://github.com/langchain-ai/langchain/issues/1460
1,610,532,450
1,460
[ "hwchase17", "langchain" ]
While testing the new classes for ChatAgent and ChatOpenAI, I found a subtle error in langchain/agents/chat/base.py. The function ChatAgent.from_chat_model_and_tools should be modified as follows; otherwise the parameters prefix, suffix, and format_instructions are ignored when invoking something similar to:

```python
agent = ChatAgent.from_chat_model_and_tools(llm, tools, format_instructions=MY_NEW_FORMAT_INSTRUCTIONS)
```

The result is that the prompt always uses the default PREFIX, SUFFIX and FORMAT_INSTRUCTIONS. Here is the modification to be made in the langchain/agents/chat/base.py file, in the class method from_chat_model_and_tools, when invoking create_prompt:

```python
@classmethod
def from_chat_model_and_tools(
    cls,
    model: BaseChatModel,
    tools: Sequence[BaseTool],
    callback_manager: Optional[BaseCallbackManager] = None,
    prefix: str = PREFIX,
    suffix: str = SUFFIX,
    format_instructions: str = FORMAT_INSTRUCTIONS,
    input_variables: Optional[List[str]] = None,
    **kwargs: Any,
) -> Agent:
    """Construct an agent from an LLM and tools."""
    cls._validate_tools(tools)
    # ---------------------------------- ADDED ----------------------------
    prompt = cls.create_prompt(
        tools,
        prefix=prefix,
        suffix=suffix,
        format_instructions=format_instructions,
        input_variables=input_variables,
    )
    # ----------------------------------------------------------------------
    llm_chain = ChatModelChain(
        llm=model,
        prompt=prompt,  # MODIFIED
        callback_manager=callback_manager,
    )
    tool_names = [tool.name for tool in tools]
    return cls(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)
```
ChatAgent.from_chat_model_and_tools ignores prefix, suffix and format_instructions params. Important!
https://api.github.com/repos/langchain-ai/langchain/issues/1459/comments
2
2023-03-06T00:12:53Z
2023-03-06T10:51:02Z
https://github.com/langchain-ai/langchain/issues/1459
1,610,453,572
1,459
[ "hwchase17", "langchain" ]
Is there advice on how to deal with prompts that contain {something}, like LaTeX $\frac{3}{2}$? LangChain treats {something} like a template variable, and \{something\} doesn't seem to help.
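Since PromptTemplate uses Python's str.format-style templating by default, doubled braces render as literal braces; a minimal sketch:

```python
from langchain import PromptTemplate

# {{ and }} escape to literal { and } under str.format-style templating,
# so the LaTeX fraction survives while {question} stays a variable.
template = "Simplify $\\frac{{3}}{{2}} + \\frac{{1}}{{2}}$ and then answer: {question}"
prompt = PromptTemplate(input_variables=["question"], template=template)
print(prompt.format(question="is the result an integer?"))
```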
Latex prompt problem
https://api.github.com/repos/langchain-ai/langchain/issues/1458/comments
2
2023-03-05T23:59:44Z
2023-03-06T01:43:38Z
https://github.com/langchain-ai/langchain/issues/1458
1,610,447,095
1,458
[ "hwchase17", "langchain" ]
I am using the native OpenAI APIs with the GPT-3.5 model. It cannot carry context well on its own, so I want to bring some state into the conversation at every round. The current solution is to include all of the history in the request, which is time consuming. How can I use this tool to summarize the history? I want to reduce the size of the history without affecting the response too much.
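One hedged option: LangChain's summary memory keeps a running LLM-generated summary instead of the full transcript. A sketch; the import path matches releases of that era and is version-specific:

```python
from langchain import OpenAI
from langchain.chains import ConversationChain
from langchain.chains.conversation.memory import ConversationSummaryMemory

llm = OpenAI(temperature=0)

# The memory summarizes past turns with the LLM, so each request carries
# a short rolling summary rather than the entire history.
conversation = ConversationChain(
    llm=llm,
    memory=ConversationSummaryMemory(llm=llm),
    verbose=True,
)
conversation.predict(input="Hi, I'm planning a trip to Japan.")
conversation.predict(input="What did I say I was planning?")
```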
How to summarize conversation history when using the native OpenAI APIs?
https://api.github.com/repos/langchain-ai/langchain/issues/1454/comments
1
2023-03-05T15:17:25Z
2023-03-06T18:20:02Z
https://github.com/langchain-ai/langchain/issues/1454
1,610,240,216
1,454
[ "hwchase17", "langchain" ]
<img width="585" alt="image" src="https://user-images.githubusercontent.com/12690488/222965929-082ab6e5-80fa-41f1-9e9f-217bdb59529a.png"> python version 3.8 unsupported operand type(s) for "|"
pymupdf TypeError: unsupported operand type(s) for |: 'dict' and 'dict'
https://api.github.com/repos/langchain-ai/langchain/issues/1452/comments
1
2023-03-05T14:19:37Z
2023-08-24T16:15:32Z
https://github.com/langchain-ai/langchain/issues/1452
1,610,219,134
1,452
[ "hwchase17", "langchain" ]
It would be nice to have the similarity search by vector in Chroma. I will try to make (my first) PR for this.
Implement similarity_search_by_vector in Chroma
https://api.github.com/repos/langchain-ai/langchain/issues/1450/comments
2
2023-03-05T11:38:06Z
2023-03-15T18:57:51Z
https://github.com/langchain-ai/langchain/issues/1450
1,610,164,858
1,450
[ "hwchase17", "langchain" ]
Would it make sense to add a memory that works similar to the QA chain with sources? As the conversation develops, we generate embeddings and store them in a vector DB. During inference we determine which memories are the most relevant and retrieve them. This could be more efficient as we don't need to generate explicit intermediate summaries.
Embedding Memory
https://api.github.com/repos/langchain-ai/langchain/issues/1448/comments
3
2023-03-05T10:20:09Z
2023-05-16T09:43:05Z
https://github.com/langchain-ai/langchain/issues/1448
1,610,137,259
1,448
[ "hwchase17", "langchain" ]
I am generating an index using the code below:

```python
index = FAISS.from_documents(docs, embeddings)
```

The indexes are from large books, and in the future I aim to convert more books to embeddings too. Since there is a cost associated with converting docs to embeddings, I want to be able to add the knowledge of a new index to an already existing index so that I can have a consolidated DB. Is there a sophisticated way to do this?
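Later LangChain releases added a `merge_from` method on the FAISS wrapper; a sketch assuming a version where it is available (texts are placeholders):

```python
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

# Two independently built indexes (e.g., one per book).
db_main = FAISS.from_texts(["text from book one"], embeddings)
db_new = FAISS.from_texts(["text from book two"], embeddings)

db_main.merge_from(db_new)      # db_main now contains both books' vectors
db_main.save_local("faiss_db")  # persist the consolidated index
```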
merge two FAISS indexes
https://api.github.com/repos/langchain-ai/langchain/issues/1447/comments
27
2023-03-05T07:59:19Z
2024-05-08T13:59:12Z
https://github.com/langchain-ai/langchain/issues/1447
1,610,096,718
1,447
[ "hwchase17", "langchain" ]
I am building a recommendation engine for an e-commerce site with over 15k products, and I am wondering how to create an embedding for each individual product and then combine them into a larger embedding model. This approach will allow me to easily remove a product's embedding when it is removed from the site, and then merge the remaining embeddings without the need to construct a new model. My goal is to have a system that can handle the addition and removal of products from the e-commerce site without having to create a new embedding model each time. I am wondering if there's a way to implement this merging and removal of embeddings in langchain?
Embedding individual text
https://api.github.com/repos/langchain-ai/langchain/issues/1442/comments
3
2023-03-05T02:30:09Z
2023-05-29T15:41:16Z
https://github.com/langchain-ai/langchain/issues/1442
1,610,027,262
1,442
[ "hwchase17", "langchain" ]
I liked the SQLDatabase integration. Could you let me know what changes I need to make for Elasticsearch?
Is there any way we can have Elasticsearch integrated with it?
https://api.github.com/repos/langchain-ai/langchain/issues/1439/comments
7
2023-03-04T17:04:17Z
2023-10-23T16:09:52Z
https://github.com/langchain-ai/langchain/issues/1439
1,609,859,935
1,439
[ "hwchase17", "langchain" ]
Hi, love LangChain, as it's really boosted getting my LLM project up to speed. I have one question though:

**tl;dr: Is there a way of enabling an agent to ask a user for input as an intermediate step?**

Like including in the list of tools one "useful for asking for missing information", but with the important difference that the user should act as the oracle, not an LLM. Then the agent could use the user's input to decide next steps.

For context: I am trying to use LangChain to design an application that supports users in solving a complex problem. To do so, I need to ask the user a number of simple questions relatively early in the process, as the UX flow, the prompts, etc. will change depending on the input. Now, I can of course hardcode these questions, but from my understanding I'd ideally write an agent tasked with identifying the relevant pieces of information and then have the agent make the right choices based on the input.
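A minimal sketch of a human-input tool, assuming a console app where blocking on `input()` is acceptable; the tool name and description are illustrative:

```python
from langchain.agents import Tool, initialize_agent
from langchain.llms import OpenAI

def ask_human(question: str) -> str:
    """Blocks and returns whatever the user types, acting as the oracle."""
    return input(f"{question}\n> ")

human_tool = Tool(
    name="ask-user",
    func=ask_human,
    description="useful for asking the user for missing information",
)

agent = initialize_agent(
    [human_tool], OpenAI(temperature=0),
    agent="zero-shot-react-description", verbose=True,
)
```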
Asking for user input as tool for agents
https://api.github.com/repos/langchain-ai/langchain/issues/1438/comments
7
2023-03-04T17:03:07Z
2024-01-24T22:27:40Z
https://github.com/langchain-ai/langchain/issues/1438
1,609,859,581
1,438
[ "hwchase17", "langchain" ]
https://chat.langchain.dev/ After the first reply, every reply is "Oops! There seems to be an error. Please try again.".
LangChain Chat answers only once
https://api.github.com/repos/langchain-ai/langchain/issues/1437/comments
1
2023-03-04T17:00:19Z
2023-08-24T16:15:37Z
https://github.com/langchain-ai/langchain/issues/1437
1,609,858,661
1,437
[ "hwchase17", "langchain" ]
I just started using LangChain and realised you have to input the same data twice. Consider the template:

```
"What is a good name for a company that makes {product}?"
```

Rather than make users also input the variables like `input_variables=["product"]`, wouldn't it be more useful to detect and automatically figure out what the variables are in the `__post_init__`?

```python
pattern = r"\{(\w+)\}"
input_variables = re.findall(pattern, template)
```

If this sounds interesting, I can make a contribution.
Should Prompt Templates auto-detect variables?
https://api.github.com/repos/langchain-ai/langchain/issues/1432/comments
3
2023-03-04T13:36:15Z
2023-03-07T01:18:37Z
https://github.com/langchain-ai/langchain/issues/1432
1,609,779,086
1,432
[ "hwchase17", "langchain" ]
I know this has to do with the ESM. I'm building a React/Typescript + Express app. If you could please point me to a resource to figure out how to fix the issue, I would really appreciate it.
[ERR_PACKAGE_PATH_NOT_EXPORTED]: Package subpath './dist/llms' is not defined by "exports"
https://api.github.com/repos/langchain-ai/langchain/issues/1431/comments
7
2023-03-04T06:57:44Z
2024-06-21T18:08:32Z
https://github.com/langchain-ai/langchain/issues/1431
1,609,646,065
1,431
[ "hwchase17", "langchain" ]
When I try to get token usage from **get_openai_callback**, I always get 0. Replacing **OpenAIChat** with **OpenAI** works. Reproducible example (langchain-0.0.100):

```python
from langchain.llms import OpenAIChat
from langchain.callbacks import get_openai_callback

llm = OpenAIChat(temperature=0)
with get_openai_callback() as cb:
    result = llm("Tell me a joke")
    print(cb.total_tokens)
```
OpenAIChat token usage callback returns 0
https://api.github.com/repos/langchain-ai/langchain/issues/1429/comments
4
2023-03-04T05:47:00Z
2023-03-23T12:12:13Z
https://github.com/langchain-ai/langchain/issues/1429
1,609,623,686
1,429
[ "hwchase17", "langchain" ]
Is proxy setting allowed for the LLM, like openai.proxy = os.getenv(HTTP_PROXY)? Please add it if possible. The openai-python API has an attribute openai.proxy, which is very convenient for HTTP/WS proxy settings:

```python
import openai
import os

openai.proxy = os.getenv("HTTP_PROXY")  # HTTP_PROXY="your proxy server:port" is put in a local .env file
```

How do I set the attribute in a LangChain LLM instance? Give me a hint, thanks.
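A hedged interim approach: since LangChain's OpenAI wrappers call through the openai module, setting the module-level attribute before any chain runs should apply to LangChain's requests as well (a sketch, not a confirmed LangChain feature):

```python
import os
import openai
from langchain.llms import OpenAI

# Module-level proxy applies to every request the openai client makes,
# including those issued internally by LangChain.
openai.proxy = os.getenv("HTTP_PROXY")

llm = OpenAI(temperature=0)
print(llm("ping"))
```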
Is proxy setting allowed for a langchain.llms LLM, like openai.proxy = os.getenv(HTTP_PROXY)? Please add if possible.
https://api.github.com/repos/langchain-ai/langchain/issues/1423/comments
8
2023-03-03T21:11:08Z
2023-10-25T08:27:43Z
https://github.com/langchain-ai/langchain/issues/1423
1,609,253,903
1,423
[ "hwchase17", "langchain" ]
`OpenAIChat` currently returns only one result even if `n > 1`:

```python
full_response = completion_with_retry(self, messages=messages, **params)
return LLMResult(
    generations=[
        [Generation(text=full_response["choices"][0]["message"]["content"])]
    ],
    llm_output={"token_usage": full_response["usage"]},
)
```

Multiple choices in `full_response["choices"]` should be used to create multiple `Generation`s.
`OpenAIChat` returns only one result
https://api.github.com/repos/langchain-ai/langchain/issues/1422/comments
3
2023-03-03T21:01:22Z
2023-12-18T23:51:02Z
https://github.com/langchain-ai/langchain/issues/1422
1,609,244,075
1,422
[ "hwchase17", "langchain" ]
Code like this:

```python
faiss_index = FAISS.from_texts(textList, OpenAIEmbeddings())
```

How do I set the timeout of OpenAIEmbeddings? Thx
How to set openai client timeout when using OpenAIEmbeddings?
https://api.github.com/repos/langchain-ai/langchain/issues/1416/comments
2
2023-03-03T14:00:00Z
2023-09-12T21:30:11Z
https://github.com/langchain-ai/langchain/issues/1416
1,608,699,971
1,416
[ "hwchase17", "langchain" ]
It would be nice to have it load all files in one's drive, not just those generated by Google Docs.
GDrive Loader loads only google documents (word, sheets, etc...)
https://api.github.com/repos/langchain-ai/langchain/issues/1413/comments
1
2023-03-03T13:05:03Z
2023-08-24T16:15:43Z
https://github.com/langchain-ai/langchain/issues/1413
1,608,596,779
1,413
[ "hwchase17", "langchain" ]
LangChain tries to download the GPT2FastTokenizer when I run a chain. In a Lambda function this doesn't work because the Lambda filesystem is read-only. Has anyone run into this, or know how to fix it?
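A possible workaround (a sketch): point the Hugging Face cache at /tmp, the only writable path inside a Lambda sandbox, before anything triggers the tokenizer download:

```python
import os

# /tmp is the only writable location inside an AWS Lambda sandbox, so
# redirect the Hugging Face cache there before langchain is imported.
os.environ["TRANSFORMERS_CACHE"] = "/tmp/hf_cache"

from langchain.llms import OpenAI  # imported after the env var is set

llm = OpenAI(temperature=0)
```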
AWS Lambda - Read Only
https://api.github.com/repos/langchain-ai/langchain/issues/1412/comments
13
2023-03-03T10:54:16Z
2023-03-07T21:19:26Z
https://github.com/langchain-ai/langchain/issues/1412
1,608,416,947
1,412
[ "hwchase17", "langchain" ]
Hello!! I'm wondering if it's known in what kind of situations the JSON agent might be able to provide helpful answers? It would be nice to expand this page with more documentation if capabilities / limitations are known: https://langchain.readthedocs.io/en/harrison-memory-chat/modules/agents/agent_toolkits/json.html?highlight=json

Two potential bugs:

1) It seems that `JsonSpec` only supports a top level dict object intentionally. Is this limitation necessary? Could a list or a string literal be handled at the top level?
2) If any of the dictionaries inside the JSON use keys that are string representations of numbers, then the agent seems to get stuck. One might expect a JSON with that format if it's representing mappings from IDs to corresponding information (e.g., customer id to customer info)

----

I'm attaching a few experiments where it was unclear from the get-go whether the agent should be able to do something useful with the json.

## Incorrect answers

```python
data = {
    "animals": [
        {"type": "dog", "name": "bob", "age": 5},
        {"type": "cat", "name": "alice", "age": 1},
        {"type": "pig", "name": "charles", "age": 2}]}

json_spec = JsonSpec(dict_=data, max_value_length=4000)
json_toolkit = JsonToolkit(spec=json_spec)

json_agent_executor = create_json_agent(
    llm=OpenAI(temperature=0), toolkit=json_toolkit, verbose=True
)

json_agent_executor.run("What type of animal is alice?")
```

Evaluated to `dog` instead of `cat` (based on the first element)

```python
data = {
    "animals": {
        "stallion": ["bob"],
        "donkey": ["alice"]
    }
}

json_agent_executor.run("If bob and alice had a son what type of animal would it be?")
```

Evaluated to `stallion`, rather than `mule`.

## Agent hits max iterations

There's likely an issue with interpreting keys that are string representations of digits

```python
data = {
    "pages": {
        '0': ['alice', 'bob'],
        '1': ['charles', 'doug']
    }
}
...
json_agent_executor.run("What type of animal is alice?")
```

![image](https://user-images.githubusercontent.com/3205522/222621303-d1097a5c-c4cf-481b-89d9-5fc603444889.png)

## Not accepted as an input (top level is a list)

```python
data = [
    {"type": "dog", "name": "bob", "age": 5},
    {"type": "cat", "name": "alice", "age": 1},
    {"type": "pig", "name": "mommy", "age": 2}]
```
Document the capabilities / limitations of a JSON agent?
https://api.github.com/repos/langchain-ai/langchain/issues/1409/comments
3
2023-03-03T03:05:24Z
2024-01-05T05:00:22Z
https://github.com/langchain-ai/langchain/issues/1409
1,607,865,786
1,409
[ "hwchase17", "langchain" ]
This should work:

```python
OpenAIChat(temperature='0')('test')
```

But because pydantic type bindings have not been set for `temperature`, it instead results in:

```
openai.error.InvalidRequestError: '0' is not of type 'number' - 'temperature'
```

Yes, the easy workaround is:

```python
OpenAIChat(temperature=0)('test')
```

but if we're using pydantic, we should get the value of using pydantic, and the fix is simple.
OpenAIChat needs pydantic bindings for parameter types
https://api.github.com/repos/langchain-ai/langchain/issues/1407/comments
1
2023-03-02T22:09:00Z
2023-08-24T16:15:54Z
https://github.com/langchain-ai/langchain/issues/1407
1,607,585,883
1,407
[ "hwchase17", "langchain" ]
Currently, due to the constraint that the size of `prompts` has to be `1`, the `gpt-3.5-turbo` model cannot be used for summary.

```py
def _get_chat_params(
    self, prompts: List[str], stop: Optional[List[str]] = None
) -> Tuple:
    if len(prompts) > 1:
        raise ValueError(
            f"OpenAIChat currently only supports single prompt, got {prompts}"
        )
    messages = self.prefix_messages + [{"role": "user", "content": prompts[0]}]
    params: Dict[str, Any] = {**{"model": self.model_name}, **self._default_params}
    if stop is not None:
        if "stop" in params:
            raise ValueError("`stop` found in both the input and default params.")
        params["stop"] = stop
    return messages, params
```

Suggested change:

```py
async def _agenerate(
    self, prompts: List[str], stop: Optional[List[str]] = None
) -> LLMResult:
    if self.streaming:
        messages, params = self._get_chat_params(prompts, stop)
        response = ""
        params["stream"] = True
        async for stream_resp in await acompletion_with_retry(
            self, messages=messages, **params
        ):
            token = stream_resp["choices"][0]["delta"].get("content", "")
            response += token
            if self.callback_manager.is_async:
                await self.callback_manager.on_llm_new_token(
                    token,
                    verbose=self.verbose,
                )
            else:
                self.callback_manager.on_llm_new_token(
                    token,
                    verbose=self.verbose,
                )
        return LLMResult(
            generations=[[Generation(text=response)]],
        )
    generations = []
    token_usage = {}
    for prompt in prompts:
        messages, params = self._get_chat_params([prompt], stop)
        full_response = await acompletion_with_retry(
            self, messages=messages, **params
        )
        generations.append(
            [Generation(text=full_response["choices"][0]["message"]["content"])]
        )
        # Update token usage
    return LLMResult(
        generations=generations,
        llm_output={"token_usage": token_usage},
    )
```
`gpt-3.5-turbo` model cannot be used for summary with `map_reduce`
https://api.github.com/repos/langchain-ai/langchain/issues/1402/comments
5
2023-03-02T19:48:26Z
2023-09-28T16:11:48Z
https://github.com/langchain-ai/langchain/issues/1402
1,607,393,942
1,402
[ "hwchase17", "langchain" ]
This is a feature request for documentation to add a copy button to the notebook cells. After all, learning how to copy code quickly is essential to being a better engineer ;) https://sphinx-copybutton.readthedocs.io/en/latest/
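For reference, a minimal sketch of the wiring, assuming the docs build uses a standard Sphinx `conf.py` (the package would also need to land in the docs requirements, e.g. `pip install sphinx-copybutton`):

```python
# docs/conf.py (sketch; the exact extension list depends on the current config)
extensions = [
    # ... existing extensions ...
    "sphinx_copybutton",
]

# Optional: strip interpreter prompts so copied snippets run as-is.
copybutton_prompt_text = ">>> "
```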
Add copybutton to documentation
https://api.github.com/repos/langchain-ai/langchain/issues/1401/comments
1
2023-03-02T19:47:33Z
2023-03-02T19:48:31Z
https://github.com/langchain-ai/langchain/issues/1401
1,607,393,045
1,401
[ "hwchase17", "langchain" ]
Partial variables cannot be present in the prompt more than once. This differs from the behavior of input variables. For example, this is perfectly fine:
```python
from langchain import LLMChain, PromptTemplate

phrase = "And a good time was had by all"

prompt_template = """\
The following is a conversation between a human and an AI that always ends its sentences with the phrase: {phrase}
For example:
Human: Hello, how are you?
AI: I'm good! I was at a party last night. {phrase}
Begin!
Human: {human_input}
AI:"""

prompt_template = PromptTemplate(
    input_variables=["human_input", "phrase"],
    template=prompt_template,
)

assert prompt_template.format(human_input="foo", phrase=phrase) == \
"""The following is a conversation between a human and an AI that always ends its sentences with the phrase: And a good time was had by all
For example:
Human: Hello, how are you?
AI: I'm good! I was at a party last night. And a good time was had by all
Begin!
Human: foo
AI:"""
```
Whereas:
```python
prompt_template = PromptTemplate(
    input_variables=["human_input"],
    template=prompt_template,
    partial_varibales={"phrase": phrase},
)
```
Results in an error:
```python
ValidationError: 2 validation errors for PromptTemplate
partial_varibales
  extra fields not permitted (type=value_error.extra)
__root__
  Invalid prompt schema; check for mismatched or missing input parameters. 'phrase' (type=value_error)
```
Partial variables cannot be present in the prompt more than once
https://api.github.com/repos/langchain-ai/langchain/issues/1398/comments
4
2023-03-02T19:33:32Z
2023-10-06T16:30:00Z
https://github.com/langchain-ai/langchain/issues/1398
1,607,377,598
1,398
[ "hwchase17", "langchain" ]
I noticed installing `langchain` using `pip install langchain` adds many more packages recently. Here is the dependency map shown by `johnnydep`: ``` name summary ----------------------------------------------- ------------------------------------------------------------------------------------------------------- langchain Building applications with LLMs through composability ├── PyYAML<7,>=6 YAML parser and emitter for Python ├── SQLAlchemy<2,>=1 Database Abstraction Library │ └── greenlet!=0.4.17 Lightweight in-process concurrent programming ├── aiohttp<4.0.0,>=3.8.3 Async http client/server framework (asyncio) │ ├── aiosignal>=1.1.2 aiosignal: a list of registered asynchronous callbacks │ │ └── frozenlist>=1.1.0 A list-like structure which implements collections.abc.MutableSequence │ ├── async-timeout<5.0,>=4.0.0a3 Timeout context manager for asyncio programs │ ├── attrs>=17.3.0 Classes Without Boilerplate │ ├── charset-normalizer<4.0,>=2.0 The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet. │ ├── frozenlist>=1.1.1 A list-like structure which implements collections.abc.MutableSequence │ ├── multidict<7.0,>=4.5 multidict implementation │ └── yarl<2.0,>=1.0 Yet another URL library │ ├── idna>=2.0 Internationalized Domain Names in Applications (IDNA) │ └── multidict>=4.0 multidict implementation ├── aleph-alpha-client<3.0.0,>=2.15.0 python client to interact with Aleph Alpha api endpoints │ ├── aiodns>=3.0.0 Simple DNS resolver for asyncio │ │ └── pycares>=4.0.0 Python interface for c-ares │ │ └── cffi>=1.5.0 Foreign Function Interface for Python calling C code. │ │ └── pycparser C parser in Python │ ├── aiohttp-retry>=2.8.3 Simple retry client for aiohttp │ │ └── aiohttp Async http client/server framework (asyncio) │ │ ├── aiosignal>=1.1.2 aiosignal: a list of registered asynchronous callbacks │ │ │ └── frozenlist>=1.1.0 A list-like structure which implements collections.abc.MutableSequence │ │ ├── async-timeout<5.0,>=4.0.0a3 Timeout context manager for asyncio programs │ │ ├── attrs>=17.3.0 Classes Without Boilerplate │ │ ├── charset-normalizer<4.0,>=2.0 The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet. │ │ ├── frozenlist>=1.1.1 A list-like structure which implements collections.abc.MutableSequence │ │ ├── multidict<7.0,>=4.5 multidict implementation │ │ └── yarl<2.0,>=1.0 Yet another URL library │ │ ├── idna>=2.0 Internationalized Domain Names in Applications (IDNA) │ │ └── multidict>=4.0 multidict implementation │ ├── aiohttp>=3.8.3 Async http client/server framework (asyncio) │ │ ├── aiosignal>=1.1.2 aiosignal: a list of registered asynchronous callbacks │ │ │ └── frozenlist>=1.1.0 A list-like structure which implements collections.abc.MutableSequence │ │ ├── async-timeout<5.0,>=4.0.0a3 Timeout context manager for asyncio programs │ │ ├── attrs>=17.3.0 Classes Without Boilerplate │ │ ├── charset-normalizer<4.0,>=2.0 The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet. │ │ ├── frozenlist>=1.1.1 A list-like structure which implements collections.abc.MutableSequence │ │ ├── multidict<7.0,>=4.5 multidict implementation │ │ └── yarl<2.0,>=1.0 Yet another URL library │ │ ├── idna>=2.0 Internationalized Domain Names in Applications (IDNA) │ │ └── multidict>=4.0 multidict implementation │ ├── requests>=2.28 Python HTTP for Humans. │ │ ├── certifi>=2017.4.17 Python package for providing Mozilla's CA Bundle. 
│ │ ├── charset-normalizer<4,>=2 The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet. │ │ ├── idna<4,>=2.5 Internationalized Domain Names in Applications (IDNA) │ │ └── urllib3<1.27,>=1.21.1 HTTP library with thread-safe connection pooling, file post, and more. │ ├── tokenizers>=0.13.2 Fast and Customizable Tokenizers │ └── urllib3>=1.26 HTTP library with thread-safe connection pooling, file post, and more. ├── dataclasses-json<0.6.0,>=0.5.7 Easily serialize dataclasses to and from JSON │ ├── marshmallow-enum<2.0.0,>=1.5.1 Enum field for Marshmallow │ │ └── marshmallow>=2.0.0 A lightweight library for converting complex datatypes to and from native Python datatypes. │ │ └── packaging>=17.0 Core utilities for Python packages │ ├── marshmallow<4.0.0,>=3.3.0 A lightweight library for converting complex datatypes to and from native Python datatypes. │ │ └── packaging>=17.0 Core utilities for Python packages │ └── typing-inspect>=0.4.0 Runtime inspection utilities for typing module. │ ├── mypy-extensions>=0.3.0 Type system extensions for programs checked with the mypy type checker. │ └── typing-extensions>=3.7.4 Backported and Experimental Type Hints for Python 3.7+ ├── deeplake<4.0.0,>=3.2.9 Activeloop Deep Lake │ ├── boto3 The AWS SDK for Python │ │ ├── botocore<1.30.0,>=1.29.82 Low-level, data-driven core of boto 3. │ │ │ ├── jmespath<2.0.0,>=0.7.1 JSON Matching Expressions │ │ │ ├── python-dateutil<3.0.0,>=2.1 Extensions to the standard Python datetime module │ │ │ │ └── six>=1.5 Python 2 and 3 compatibility utilities │ │ │ └── urllib3<1.27,>=1.25.4 HTTP library with thread-safe connection pooling, file post, and more. │ │ ├── jmespath<2.0.0,>=0.7.1 JSON Matching Expressions │ │ └── s3transfer<0.7.0,>=0.6.0 An Amazon S3 Transfer Manager │ │ └── botocore<2.0a.0,>=1.12.36 Low-level, data-driven core of boto 3. │ │ ├── jmespath<2.0.0,>=0.7.1 JSON Matching Expressions │ │ ├── python-dateutil<3.0.0,>=2.1 Extensions to the standard Python datetime module │ │ │ └── six>=1.5 Python 2 and 3 compatibility utilities │ │ └── urllib3<1.27,>=1.25.4 HTTP library with thread-safe connection pooling, file post, and more. │ ├── click Composable command line interface toolkit │ ├── hub>=2.8.7 Activeloop Deep Lake │ │ └── deeplake Activeloop Deep Lake │ │ └── ... ... <circular dependency marker for deeplake -> hub -> deeplake> │ ├── humbug>=0.2.6 Humbug: Do you build developer tools? Humbug helps you know your users. │ │ └── requests Python HTTP for Humans. │ │ ├── certifi>=2017.4.17 Python package for providing Mozilla's CA Bundle. │ │ ├── charset-normalizer<4,>=2 The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet. │ │ ├── idna<4,>=2.5 Internationalized Domain Names in Applications (IDNA) │ │ └── urllib3<1.27,>=1.21.1 HTTP library with thread-safe connection pooling, file post, and more. │ ├── numcodecs A Python package providing buffer compression and transformation codecs for use │ │ ├── entrypoints Discover and load entry points from installed packages. 
│ │ └── numpy>=1.7 Fundamental package for array computing in Python │ ├── numpy Fundamental package for array computing in Python │ ├── pathos parallel graph management and execution in heterogeneous computing │ │ ├── dill>=0.3.6 serialize all of python │ │ ├── multiprocess>=0.70.14 better multiprocessing and multithreading in python │ │ │ └── dill>=0.3.6 serialize all of python │ │ ├── pox>=0.3.2 utilities for filesystem exploration and automated builds │ │ └── ppft>=1.7.6.6 distributed and parallel python │ ├── pillow Python Imaging Library (Fork) │ ├── pyjwt JSON Web Token implementation in Python │ └── tqdm Fast, Extensible Progress Meter ├── numpy<2,>=1 Fundamental package for array computing in Python ├── pydantic<2,>=1 Data validation and settings management using python type hints │ └── typing-extensions>=4.2.0 Backported and Experimental Type Hints for Python 3.7+ ├── requests<3,>=2 Python HTTP for Humans. │ ├── certifi>=2017.4.17 Python package for providing Mozilla's CA Bundle. │ ├── charset-normalizer<4,>=2 The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet. │ ├── idna<4,>=2.5 Internationalized Domain Names in Applications (IDNA) │ └── urllib3<1.27,>=1.21.1 HTTP library with thread-safe connection pooling, file post, and more. └── tenacity<9.0.0,>=8.1.0 Retry code until it succeeds ``` `deeplake` brings in many packages, although it's marked as `optional` in `pyproject.toml`.
`deeplake` adds significantly more dependencies in default installation
https://api.github.com/repos/langchain-ai/langchain/issues/1396/comments
9
2023-03-02T18:09:09Z
2023-10-05T16:11:14Z
https://github.com/langchain-ai/langchain/issues/1396
1,607,266,902
1,396
[ "hwchase17", "langchain" ]
https://github.com/hwchase17/langchain/blob/499e76b1996787f714a020917a58a4be0d2896ac/langchain/chains/conversation/prompt.py#L36-L59

On every `openai` call, the summarization request is injected into the prompt as an English header, which leads the LLM to produce the summary in that same language. If the conversation is held in another language, the next time you ask it to do something it will make a mistake.
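A possible workaround sketch: `ConversationSummaryMemory` does accept a custom `prompt`, so one can pass a summarization prompt that mirrors the conversation's language. The multilingual instruction below is my own suggestion, not an existing default:

```python
from langchain.llms import OpenAI
from langchain.prompts.prompt import PromptTemplate
from langchain.chains.conversation.memory import ConversationSummaryMemory

_TEMPLATE = """Progressively summarize the lines of conversation provided,
adding onto the previous summary and returning a new summary.
Write the summary in the same language the conversation is held in.

Current summary:
{summary}

New lines of conversation:
{new_lines}

New summary:"""

SUMMARY_PROMPT = PromptTemplate(
    input_variables=["summary", "new_lines"], template=_TEMPLATE
)

memory = ConversationSummaryMemory(llm=OpenAI(temperature=0), prompt=SUMMARY_PROMPT)
```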
`ConversationSummaryMemory` does not support for multi-languages.
https://api.github.com/repos/langchain-ai/langchain/issues/1395/comments
2
2023-03-02T17:29:58Z
2023-09-18T16:24:00Z
https://github.com/langchain-ai/langchain/issues/1395
1,607,213,544
1,395
[ "hwchase17", "langchain" ]
I'm hitting an issue where adding memory to an agent causes the LLM to misbehave, starting from the second interaction onwards. The first interaction works fine, and the same sequence of interactions without memory also works fine. Let me demonstrate with an example: Agent Code: ``` prompt = ConversationalAgent.create_prompt( tools, prefix=prefix, suffix=suffix, input_variables=["input", "agent_scratchpad", "chat_history"], format_instructions=format_instructions ) llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt, verbose=True) memory = ConversationBufferMemory(memory_key="chat_history") agent = ConversationalAgent( llm_chain=llm_chain, allowed_tools=tool_names, verbose=True ) agent_executor = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True, memory=memory ) ``` Interaction 1: ``` agent_executor.run('Show me the ingredients for making greek salad') ``` Output (Correct): ``` > Entering new AgentExecutor chain... > Entering new LLMChain chain... Prompt after formatting: You are a cooking assistant. You have access to the following tools. > Google Search: A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query. > lookup_ingredients: lookup_ingredients(query: str) -> str - Useful for when you want to look up ingredients for a specific recipe. Use the following format: Human: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [Google Search, lookup_ingredients] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer AI: the final answer to the original input question in JSON format DO NOT make up an answer. Only answer based on the information you have. Return response in JSON format with the fields: intent, entities(optional), message. Human: Show me the ingredients for making greek salad > Finished chain. Thought: I need to look up the ingredients for greek salad Action: lookup_ingredients Action Input: greek salad Observation: To lookup the ingredients for a recipe, use a tool called "Google Search" and pass input to it in the form of "recipe for making X". From the results of the tool, identify the ingredients necessary and create a JSON response with the following fields: intent, entities, message. Set the intent to "lookup_ingredients", set the entity "ingredients" to the list of ingredients returned from the tool. DO NOT include the names of ingredients in the message field Thought: > Entering new LLMChain chain... Prompt after formatting: You are a cooking assistant. You have access to the following tools. > Google Search: A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query. > lookup_ingredients: lookup_ingredients(query: str) -> str - Useful for when you want to look up ingredients for a specific recipe. Use the following format: Human: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [Google Search, lookup_ingredients] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer AI: the final answer to the original input question in JSON format DO NOT make up an answer. 
Only answer based on the information you have. Return response in JSON format with the fields: intent, entities(optional), message. Human: Show me the ingredients for making greek salad Thought: I need to look up the ingredients for greek salad Action: lookup_ingredients Action Input: greek salad Observation: To lookup the ingredients for a recipe, use a tool called "Google Search" and pass input to it in the form of "recipe for making X". From the results of the tool, identify the ingredients necessary and create a JSON response with the following fields: intent, entities, message. Set the intent to "lookup_ingredients", set the entity "ingredients" to the list of ingredients returned from the tool. DO NOT include the names of ingredients in the message field Thought: > Finished chain. I now know the final answer AI: { "intent": "lookup_ingredients", "entities": { "ingredients": [ "olive oil", "red onion", "cucumber", "tomatoes", "feta cheese", "oregano", "salt", "black pepper" ] }, "message": "I have looked up the ingredients for greek salad" } ``` Interaction 2: ``` agent_executor.run('Show me the ingredients for making veggie pot pie') ``` Output (Incorrect): ``` > Entering new AgentExecutor chain... > Entering new LLMChain chain... Prompt after formatting: You are a cooking assistant. You have access to the following tools. > Google Search: A wrapper around Google Search. Useful for when you need to answer questions about current events. Input should be a search query. > lookup_ingredients: lookup_ingredients(query: str) -> str - Useful for when you want to look up ingredients for a specific recipe. Use the following format: Human: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [Google Search, lookup_ingredients] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer AI: the final answer to the original input question in JSON format DO NOT make up an answer. Only answer based on the information you have. Return response in JSON format with the fields: intent, entities(optional), message. Human: Show me the ingredients for making greek salad AI: { "intent": "lookup_ingredients", "entities": { "ingredients": [ "olive oil", "red onion", "cucumber", "tomatoes", "feta cheese", "oregano", "salt", "black pepper" ] }, "message": "I have looked up the ingredients for greek salad" } Human: show me the ingredients for making veggie pot pie > Finished chain. AI: { "intent": "lookup_ingredients", "entities": { "ingredients": [ "olive oil", "onion", "frozen mixed vegetables", ] }, "message": "I have looked up the ingredients for veggie pot pie" } > Finished chain. ``` For interaction 2, it does not appear that the tool (lookup_ingredients) actually got invoked and the results I get are usually truncated or incorrect. If I add a `memory.clear()` before the second interaction, it works perfect and if I remove the memory altogether, that works fine too. What am I missing - any thoughts?
Agent + Memory causing issues with tool interaction
https://api.github.com/repos/langchain-ai/langchain/issues/1392/comments
3
2023-03-02T16:20:20Z
2023-08-11T12:03:42Z
https://github.com/langchain-ai/langchain/issues/1392
1,607,107,323
1,392
[ "hwchase17", "langchain" ]
Trying to follow the guides: - https://langchain.readthedocs.io/en/latest/modules/indexes/getting_started.html - https://langchain.readthedocs.io/en/latest/modules/indexes/vectorstore_examples/chroma.html But when I run: ``` db = Chroma.from_documents(texts, embeddings) ``` I get: ``` ModuleNotFoundError Traceback (most recent call last) File ~/Library/Caches/pypoetry/virtualenvs/docs-bot-z8dlOdVH-py3.10/lib/python3.10/site-packages/langchain/vectorstores/chroma.py:40, in Chroma.__init__(self, collection_name, embedding_function, persist_directory) 39 try: ---> 40 import chromadb 41 import chromadb.config ModuleNotFoundError: No module named 'chromadb' ``` In my Poetry dependencies, I only had `langchain` and `openai`. I'd suggest adding a line somewhere in the docs that this is an optional dependency and should be installed? Once I do: ``` poetry add chromadb ``` It works.
"No module named 'chromadb'" — add chromadb as dependency?
https://api.github.com/repos/langchain-ai/langchain/issues/1387/comments
19
2023-03-02T14:51:07Z
2024-07-27T16:02:50Z
https://github.com/langchain-ai/langchain/issues/1387
1,606,949,963
1,387
[ "hwchase17", "langchain" ]
python version 3.9.12, langchain version 0.0.98 Using this code ``` db = SQLDatabase.from_uri(DATABSE_URI, include_tables=['tbl_abc']) toolkit = SQLDatabaseToolkit(db=db) agent_executor = create_sql_agent( llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True ) agent_executor.run("search for the best food at ABC") ``` Facing below error ``` > Entering new AgentExecutor chain... Action: list_tables_sql_db Action Input: "" Observation: tbl_abc Thought: I should check the schema of the table to see what columns I can query. Action: schema_sql_db Action Input: "tbl_abc" Observation: CREATE TABLE tbl_chat ( chat_id BIGINT(20) NOT NULL AUTO_INCREMENT, user_id INTEGER(11), chat_msg TEXT, last_taged_on DATETIME NOT NULL DEFAULT '0000-00-00 00:00:00', ............ ........... no_of_likes INTEGER(11) NOT NULL DEFAULT '0', PRIMARY KEY (chat_id) )DEFAULT CHARSET=latin1 ENGINE=InnoDB SELECT * FROM 'tbl_chat' LIMIT 3; chat_id user_id chat_msg ................ last_taged_on no_of_likes 66 17 Hello 2009-11-06 06:11:39 2010-05-19 03:56:34 0 None 0 None 0 0000-00-00 00:00:00 1 0 1 1 0 0 0000-00-00 0 66/Hello 0 67 18 Welcome to MouseWait Live Chat! 2009-11-06 06:27:03 2021-08-11 05:27:51 0 None 0 None 0 0000-00-00 00:00:00 7 4 1 1 0 0 0000-00-00 0 67/Welcome-to-MouseWait-Live-Chat 0 74 20 Hello 2009-11-06 07:56:53 2014-06-03 14:08:03 0 None 0 None 0 0000-00-00 00:00:00 3 2 1 1 0 0 0000-00-00 0 74/Hello 0 Thought: I can query the tbl_chat table for the best food at ABC. Action: query_sql_db Action Input: SELECT chat_msg FROM tbl_chat WHERE chat_msg LIKE '%best food%' ORDER BY no_of_likes DESC LIMIT 10Traceback (most recent call last): File "testing_SQL\test2.py", line 28, in <module> agent_executor.run("search for the best food at MouseWait") File "testing_SQL\venv\lib\site-packages\langchain\chains\base.py", line 239, in run return self(args[0])[self.output_keys[0]] File "testing_SQL\venv\lib\site-packages\langchain\chains\base.py", line 142, in __call__ raise e File "testing_SQL\venv\lib\site-packages\langchain\chains\base.py", line 139, in __call__ outputs = self._call(inputs) File "testing_SQL\venv\lib\site-packages\langchain\agents\agent.py", line 503, in _call next_step_output = self._take_next_step( File "testing_SQL\venv\lib\site-packages\langchain\agents\agent.py", line 420, in _take_next_step observation = tool.run( File "testing_SQL\venv\lib\site-packages\langchain\tools\base.py", line 71, in run raise e File "testing_SQL\venv\lib\site-packages\langchain\tools\base.py", line 68, in run observation = self._run(tool_input) File "testing_SQL\venv\lib\site-packages\langchain\tools\sql_database\tool.py", line 39, in _run return self.db.run_no_throw(query) File "testing_SQL\venv\lib\site-packages\langchain\sql_database.py", line 216, in run_no_throw return self.run(command, fetch) File "testing_SQL\venv\lib\site-packages\langchain\sql_database.py", line 180, in run cursor = connection.exec_driver_sql(command) File "testing_SQL\venv\lib\site-packages\sqlalchemy\engine\base.py", line 1760, in exec_driver_sql return self._exec_driver_sql( File "testing_SQL\venv\lib\site-packages\sqlalchemy\engine\base.py", line 1669, in _exec_driver_sql ret = self._execute_context( File "testing_SQL\venv\lib\site-packages\sqlalchemy\engine\base.py", line 1943, in _execute_context self._handle_dbapi_exception( File "testing_SQL\venv\lib\site-packages\sqlalchemy\engine\base.py", line 2128, in _handle_dbapi_exception util.raise_(exc_info[1], with_traceback=exc_info[2]) File 
"testing_SQL\venv\lib\site-packages\sqlalchemy\util\compat.py", line 211, in raise_ raise exception File "testing_SQL\venv\lib\site-packages\sqlalchemy\engine\base.py", line 1900, in _execute_context self.dialect.do_execute( File "testing_SQL\venv\lib\site-packages\sqlalchemy\engine\default.py", line 736, in do_execute cursor.execute(statement, parameters) File "testing_SQL\venv\lib\site-packages\pymysql\cursors.py", line 146, in execute query = self.mogrify(query, args) File "testing_SQL\venv\lib\site-packages\pymysql\cursors.py", line 125, in mogrify query = query % self._escape_args(args, conn) ValueError: unsupported format character 'b' (0x62) at index 52 Process finished with exit code 1```
ValueError: unsupported format character 'b' (0x62) at index 52
https://api.github.com/repos/langchain-ai/langchain/issues/1383/comments
0
2023-03-02T07:22:39Z
2023-03-03T00:03:18Z
https://github.com/langchain-ai/langchain/issues/1383
1,606,241,662
1,383
[ "hwchase17", "langchain" ]
[ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) is the underlying format consumed by ChatGPT models. In the future, you will be able to interact with this format. > Traditionally, GPT models consumed unstructured text. ChatGPT models instead expect a structured format, called Chat Markup Language (ChatML for short). ChatML documents consists of a sequence of messages. Each message contains a header (which today consists of who said it, but in the future will contain other metadata) and contents (which today is a text payload, but in the future will contain other datatypes). We are still evolving ChatML, but the current version (ChatML v0) can be represented with our upcoming "list of dicts" JSON format. --OpenAI team Here's an example of [ChatML v0](https://github.com/openai/openai-python/blob/main/chatml.md): ``` [ {"token": "<|im_start|>"}, "system\nYou are ChatGPT, a large language model trained by OpenAI. Answer as concisely as possible.\nKnowledge cutoff: 2021-09-01\nCurrent date: 2023-03-01", {"token": "<|im_end|>"}, "\n", {"token": "<|im_start|>"}, "user\nHow are you", {"token": "<|im_end|>"}, "\n", {"token": "<|im_start|>"}, "assistant\nI am doing well!", {"token": "<|im_end|>"}, "\n", {"token": "<|im_start|>"}, "user\nHow are you now?", {"token": "<|im_end|>"}, "\n" ] ``` Here are five ways to incorporate ChatML into LangChain: 1. **ChatML Pipeline Integration**: Integrate ChatML with LangChain for structured input/output processing in multi-turn conversations. 2. **ChatML-Assisted Model Chaining**: Combine ChatML and LangChain to enable seamless transitions between different LLMs in a conversation. 3. **ChatML-LangChain Grounded Agents**: Develop grounded agents using ChatML for structured interactions with users and APIs within LangChain. 4. **ChatML-Assisted Multi-Model Conversations**: Utilize ChatML and LangChain for building chat experiences that leverage multiple LLMs. 5. **ChatML-LangChain Development Tools**: Create tools and libraries to simplify building chatbot applications using both ChatML and LangChain.
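To make the layout concrete, here is a tiny helper of my own (not an official OpenAI or LangChain API) that renders `(role, content)` pairs into the ChatML v0 token layout shown above:

```python
def to_chatml(messages):
    """Render (role, content) pairs into the ChatML v0 wire format."""
    parts = []
    for role, content in messages:
        parts.append("<|im_start|>")
        parts.append(f"{role}\n{content}")
        parts.append("<|im_end|>\n")
    return "".join(parts)

print(to_chatml([
    ("system", "You are a helpful assistant."),
    ("user", "How are you?"),
]))
```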
ChatML
https://api.github.com/repos/langchain-ai/langchain/issues/1374/comments
5
2023-03-02T01:17:40Z
2023-09-28T16:11:55Z
https://github.com/langchain-ai/langchain/issues/1374
1,605,951,842
1,374
[ "hwchase17", "langchain" ]
Hi, thanks for developing this great library! I was using `chain.apredict` inside an async method and just updated to use `OpenAIChat` in `v0.0.98`. Now it's giving me this error:
```
NotImplementedError: Async generation not implemented for this LLM.
```
The error was generated from this line:
https://github.com/hwchase17/langchain/blob/4b5e850361bd4b47ac739748232d696deac79eaf/langchain/llms/base.py#L337
OpenAIChat error with async calls
https://api.github.com/repos/langchain-ai/langchain/issues/1372/comments
5
2023-03-01T23:53:04Z
2023-08-26T18:19:36Z
https://github.com/langchain-ai/langchain/issues/1372
1,605,872,338
1,372
[ "hwchase17", "langchain" ]
I'm on v0.0.98 and python3.10, and I'm getting this error when trying to use a `zero-shot-react-description` agent with OpenAIChat: ``` llm = OpenAIChat(temperature=0) search = GoogleSearchAPIWrapper() llm_math_chain = LLMMathChain(llm=llm, verbose=True) tools = [ Tool( name = "Search", func=search.run, description="useful for when you need to answer questions about current events" ), Tool( name="Calculator", func=llm_math_chain.run, description="useful for when you need to answer questions about math" ) ] agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True) result = agent.run("What is OpenAI Foundry?") ``` ``` > Entering new AgentExecutor chain... I'm not sure what OpenAI Foundry is, so I should search for it. Action: Search Action Input: "OpenAI Foundry" Observation: Feb 21, 2023 ... OpenAI has privately announced a new developer product called Foundry, which enables customers to run OpenAI model inference at scale w/ ... 2 days ago ... Conceptually, an industrial foundry is where businesses make the tools that make their physical products. OpenAI's Foundry will be where ... 1 day ago ... LLMs will soon replace lawyers and law enforcement. OpenAI DV could debate a supreme court level lawyer. reply ... Feb 21, 2023 ... OpenAI is reportedly launching a new developer platform, Foundry, that'll let customers run its models on dedicated compute infrastructure. 8 days ago ... Hey everyone, I've heard about this OpenAI's Foundry will let customers buy dedicated compute to run its AI models text-davinci-003 said I ... 2 days ago ... Conceptually, an industrial foundry is where businesses make the tools that make their physical products. OpenAI's Foundry will be where ... 8 days ago ... What Are OpenAI-Foundry Capabilities? · A static allocation of capacity dedicated to the user and providing a predictable environment that can be ... 6 days ago ... Well, well, well, what do we have here? OpenAI has gone and let slip some exciting details about their upcoming service Foundry, ... 6 days ago ... OpenAI is launching a developer platform known as Foundry to allow customers to run large artificial intelligence workloads based on its ... 8 days ago ... If the news posted on Twitter is trustworthy, then OpenAI's Foundry is made for “cutting-edge” customers running larger workloads. Users will be ... 
Thought: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /var/folders/hz/pv3vmlvd52769n7ks7h17j1m0000gq/T/ipykernel_36393/1735989655.py in <cell line: 4>() 2 chat_start=time.time() 3 # with get_openai_callback() as cb: ----> 4 result = agent.run("What is OpenAI Foundry?") 5 # tokens = cb.total_tokens 6 end_time=time.time() /opt/homebrew/lib/python3.10/site-packages/langchain/chains/base.py in run(self, *args, **kwargs) 237 if len(args) != 1: 238 raise ValueError("`run` supports only one positional argument.") --> 239 return self(args[0])[self.output_keys[0]] 240 241 if kwargs and not args: /opt/homebrew/lib/python3.10/site-packages/langchain/chains/base.py in __call__(self, inputs, return_only_outputs) 140 except (KeyboardInterrupt, Exception) as e: 141 self.callback_manager.on_chain_error(e, verbose=self.verbose) --> 142 raise e 143 self.callback_manager.on_chain_end(outputs, verbose=self.verbose) 144 return self.prep_outputs(inputs, outputs, return_only_outputs) /opt/homebrew/lib/python3.10/site-packages/langchain/chains/base.py in __call__(self, inputs, return_only_outputs) 137 ) 138 try: --> 139 outputs = self._call(inputs) 140 except (KeyboardInterrupt, Exception) as e: 141 self.callback_manager.on_chain_error(e, verbose=self.verbose) /opt/homebrew/lib/python3.10/site-packages/langchain/agents/agent.py in _call(self, inputs) 501 # We now enter the agent loop (until it returns something). 502 while self._should_continue(iterations): --> 503 next_step_output = self._take_next_step( 504 name_to_tool_map, color_mapping, inputs, intermediate_steps 505 ) /opt/homebrew/lib/python3.10/site-packages/langchain/agents/agent.py in _take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps) 404 """ 405 # Call the LLM to see what to do. --> 406 output = self.agent.plan(intermediate_steps, **inputs) 407 # If the tool chosen is the finishing tool, then we end and return. 408 if isinstance(output, AgentFinish): /opt/homebrew/lib/python3.10/site-packages/langchain/agents/agent.py in plan(self, intermediate_steps, **kwargs) 100 """ 101 full_inputs = self.get_full_inputs(intermediate_steps, **kwargs) --> 102 action = self._get_next_action(full_inputs) 103 if action.tool == self.finish_tool_name: 104 return AgentFinish({"output": action.tool_input}, action.log) /opt/homebrew/lib/python3.10/site-packages/langchain/agents/agent.py in _get_next_action(self, full_inputs) 62 def _get_next_action(self, full_inputs: Dict[str, str]) -> AgentAction: 63 full_output = self.llm_chain.predict(**full_inputs) ---> 64 parsed_output = self._extract_tool_and_input(full_output) 65 while parsed_output is None: 66 full_output = self._fix_text(full_output) /opt/homebrew/lib/python3.10/site-packages/langchain/agents/mrkl/base.py in _extract_tool_and_input(self, text) 136 137 def _extract_tool_and_input(self, text: str) -> Optional[Tuple[str, str]]: --> 138 return get_action_and_input(text) 139 140 /opt/homebrew/lib/python3.10/site-packages/langchain/agents/mrkl/base.py in get_action_and_input(llm_output) 44 match = re.search(regex, llm_output, re.DOTALL) 45 if not match: ---> 46 raise ValueError(f"Could not parse LLM output: `{llm_output}`") 47 action = match.group(1).strip() 48 action_input = match.group(2) ValueError: Could not parse LLM output: `Based on the search results, OpenAI Foundry is a new developer platform that allows customers to run OpenAI model inference at scale on dedicated compute infrastructure. Action: None` ```
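For context, the exception comes from the MRKL output parser, which requires an `Action:` / `Action Input:` pair; the chat model's reply above ends with `Action: None` and no `Action Input:` line, so the regex never matches. A sketch (pattern paraphrased from `langchain/agents/mrkl/base.py`, so treat the exact regex as approximate):

```python
import re

llm_output = (
    "Based on the search results, OpenAI Foundry is a new developer platform, "
    "which enables customers to run OpenAI model inference at scale.\n"
    "Action: None"
)

regex = r"Action: (.*?)\nAction Input: (.*)"  # paraphrased from mrkl/base.py
print(re.search(regex, llm_output, re.DOTALL))  # -> None, hence the ValueError
```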
Agents can't parse OpenAIChat output
https://api.github.com/repos/langchain-ai/langchain/issues/1371/comments
7
2023-03-01T23:04:05Z
2023-08-04T02:06:31Z
https://github.com/langchain-ai/langchain/issues/1371
1,605,813,256
1,371
[ "hwchase17", "langchain" ]
We probably need to rethink how to define the `Memory` class so that it works for agents built on the new ChatGPT API.
Memory class for chatgpt api
https://api.github.com/repos/langchain-ai/langchain/issues/1369/comments
2
2023-03-01T20:51:32Z
2023-09-25T16:17:51Z
https://github.com/langchain-ai/langchain/issues/1369
1,605,645,066
1,369
[ "hwchase17", "langchain" ]
Trying to use the new gpt-3.5-turbo model causes an error: ``` llm = OpenAI(model_name="gpt-3.5-turbo") # ... llm.run(...) InvalidRequestError: Invalid URL (POST /v1/completions) ``` I also tried specifying a client but got the same error. ``` import openai llm = OpenAI(model_name="gpt-3.5-turbo", client=openai.ChatCompletion) ``` Am I missing the trick or does langchain need to have support added specifically for ChatCompletion?
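A sketch of the workaround as of v0.0.98: `gpt-3.5-turbo` talks to the chat-completions endpoint, which has its own wrapper, so `OpenAIChat` is used instead of `OpenAI` (assuming you are on a release that includes it):

```python
from langchain.llms import OpenAIChat

llm = OpenAIChat(model_name="gpt-3.5-turbo", temperature=0)
print(llm("Say hello in one short sentence."))
```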
Trouble using OpenAI ChatCompletion (gpt-3.5-turbo model)
https://api.github.com/repos/langchain-ai/langchain/issues/1368/comments
13
2023-03-01T19:20:09Z
2023-11-16T16:08:58Z
https://github.com/langchain-ai/langchain/issues/1368
1,605,531,358
1,368
[ "hwchase17", "langchain" ]
I'm sure this has been discussed internally, but I wanted to get some feedback before I started diving into an implementation. I was working on a custom agent, and noticed there's this pseudo-memory structure that also exists on `agents`, with the `agent_scratchpad` var needing to exist on the LLMChain and within the prompt. This is a perfect use for memory, as #1293 adds this validation to the base chain. It should also allow us to DRY up some of the validation in the `Agent` class, and leverage different memory types in the future for agents.

My question is - how do people feel about moving `Memory` out of `chains` and into its own dir, `langchain.memory`? Additionally, moving all the classes in `langchain.chains.conversation.memory` up? Are there other parts of the package that use a construct similar to memory that we could extract?
[Discussion] Abstracting memory from `chain`, using `Memory` with agents
https://api.github.com/repos/langchain-ai/langchain/issues/1366/comments
0
2023-03-01T18:43:11Z
2023-03-07T18:33:55Z
https://github.com/langchain-ai/langchain/issues/1366
1,605,481,412
1,366
[ "hwchase17", "langchain" ]
Has anyone deployed langchain scripts on AWS - Lambda in particular? There is some issue with the way langchain imports numpy that causes import failures. I have tried it with different versions and with a docker image as well, but I still get numpy import errors. Locally it works fine.

Thanks,
Ali
Langchain with AWS Lambda
https://api.github.com/repos/langchain-ai/langchain/issues/1364/comments
47
2023-03-01T15:04:23Z
2024-06-19T11:42:28Z
https://github.com/langchain-ai/langchain/issues/1364
1,605,144,078
1,364
[ "hwchase17", "langchain" ]
Starting a new issue to track [this idea](https://twitter.com/mathisob/status/1630620380707667968?s=20) to provide functionality to translate prompts based on the input language.
Prompt language translations
https://api.github.com/repos/langchain-ai/langchain/issues/1363/comments
6
2023-03-01T14:02:28Z
2023-09-29T18:55:09Z
https://github.com/langchain-ai/langchain/issues/1363
1,605,035,184
1,363
[ "hwchase17", "langchain" ]
Hi, I am reaching out due to several requests from others in the field. I am the author of TableQA: https://github.com/abhijithneilabraham/tableQA, a tool to query tabular data in natural language. The tool uses a BERT-based Hugging Face model for QA on free text, and uses heuristics to combine it with tabular data to build SQL queries from natural language. Some users have asked for LangChain support. I was wondering whether it would make sense to merge a feature into LangChain for tabular data support, with the methodology provided in TableQA as an additional approach? Regards.
Integration of tabular data querying with LangChain
https://api.github.com/repos/langchain-ai/langchain/issues/1361/comments
2
2023-03-01T10:54:09Z
2023-09-10T16:43:27Z
https://github.com/langchain-ai/langchain/issues/1361
1,604,731,278
1,361
[ "hwchase17", "langchain" ]
## Description

### Motivation
Using configuration files is great for fast prototyping, but given the current state of the framework, it is limited to pre-defined agents, tools and other components.

Currently the `load_tools()` function is implemented to load pre-defined tools via tool name.
https://github.com/hwchase17/langchain/blob/fe7dbecfe6f5c4f9ad3237148fb25c91cf5be9f6/langchain/agents/load_tools.py#L179
For custom implementations, I don't suppose we can use the above function to load them.

### Proposed solution
I suggest we create a decorator function for all components which can be initialised via config file. We can take inspiration from the ClassyVision project: https://github.com/facebookresearch/ClassyVision/blob/74e83dd97afbc9371b336304af09548f5080fa9c/classy_vision/models/__init__.py#L27
Even PyTorch Lightning Flash has a similar concept: https://lightning-flash.readthedocs.io/en/latest/general/registry.html?highlight=registry
A sketch of the idea is included at the end of this issue.

### Benefits
Enabling fast prototyping for users in a deep learning research/experimentation fashion can greatly speed up development of applications using LangChain.
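Minimal sketch of the proposed registry pattern (all names here are illustrative, not an existing LangChain API):

```python
_TOOL_REGISTRY = {}

def register_tool(name):
    """Class decorator that records a custom tool under a config-friendly name."""
    def wrapper(cls):
        _TOOL_REGISTRY[name] = cls
        return cls
    return wrapper

@register_tool("my_search")
class MySearchTool:
    def run(self, query: str) -> str:
        return f"results for {query!r}"

def load_registered_tool(name, **kwargs):
    """What a config-driven load_tools() could call for custom tools."""
    return _TOOL_REGISTRY[name](**kwargs)

tool = load_registered_tool("my_search")
print(tool.run("langchain"))
```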
Feature Request: Add decorator function to register custom components (agents, tools, etc)
https://api.github.com/repos/langchain-ai/langchain/issues/1360/comments
1
2023-03-01T10:48:12Z
2023-08-24T16:16:07Z
https://github.com/langchain-ai/langchain/issues/1360
1,604,722,715
1,360
[ "hwchase17", "langchain" ]
In addition to completions, the `edit` and `insert` modes of LLMs are also very useful in certain use cases, especially for combating prompt injection attacks and ensuring certain rules are followed by the LLM while generating output. Right now, LangChain is closely tied to the completion paradigm. Is there a way we can support these new modes?
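For reference, a sketch of what edit mode looks like via the raw `openai` client (this is the provider API directly, not a LangChain wrapper, since none exists yet):

```python
import openai

resp = openai.Edit.create(
    model="text-davinci-edit-001",
    input="The quick brown fx jumps over the lazy dg.",
    instruction="Fix the spelling mistakes.",
)
print(resp["choices"][0]["text"])
```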
Feature request: support Edit and Insert modes for openai/others
https://api.github.com/repos/langchain-ai/langchain/issues/1359/comments
1
2023-03-01T10:30:43Z
2023-08-24T16:16:13Z
https://github.com/langchain-ai/langchain/issues/1359
1,604,697,665
1,359
[ "hwchase17", "langchain" ]
```python
agent_chain = initialize_agent(
    tools=tools,
    llm=HuggingFaceHub(repo_id="google/flan-t5-xl"),
    agent="conversational-react-description",
    memory=memory,
    verbose=False)
agent_chain.run("Hi")
```
**throws an error. This happens with Bloom as well. Only the agent with OpenAI works well.**
```
_(self, inputs, return_only_outputs)
    140         except (KeyboardInterrupt, Exception) as e:
    141             self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 142             raise e
    143         self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
...
---> 83             raise ValueError(f"Could not parse LLM output: `{llm_output}`")
     84         action = match.group(1)
     85         action_input = match.group(2)

ValueError: Could not parse LLM output: Assistant, how can I help you today?
```
ValueError: Could not parse LLM output:
https://api.github.com/repos/langchain-ai/langchain/issues/1358/comments
82
2023-03-01T08:50:18Z
2024-06-30T16:02:40Z
https://github.com/langchain-ai/langchain/issues/1358
1,604,533,069
1,358
[ "hwchase17", "langchain" ]
I am encountering the following error when trying to import VectorstoreIndexCreator from langchain.indexes: `ModuleNotFoundError: No module named 'langchain.indexes'`
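`langchain.indexes` (with `VectorstoreIndexCreator`) only ships in fairly recent releases, so the usual cause is a stale install. A quick sanity check, then upgrade (standard tooling, nothing langchain-specific):

```python
import langchain

print(langchain.__version__)
# If this is older than the release that introduced langchain.indexes,
# upgrade with:  pip install --upgrade langchain
```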
ModuleNotFoundError: No module named 'langchain.indexes'
https://api.github.com/repos/langchain-ai/langchain/issues/1352/comments
11
2023-03-01T05:37:12Z
2023-09-29T16:10:12Z
https://github.com/langchain-ai/langchain/issues/1352
1,604,280,635
1,352
[ "hwchase17", "langchain" ]
I have been very often running into openai.error.InvalidRequestError for going over the 4097-token maximum context length. Is there a module/best practice for managing the context length?

Quick example: I am adding the map-reduce summary chain to the URL data loader and it's throwing that error:

```python
from langchain.document_loaders import UnstructuredURLLoader

urls = [
    "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023",
    "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-9-2023"
]

loader = UnstructuredURLLoader(urls=urls)
data = loader.load()

from langchain.chains.summarize import load_summarize_chain
from langchain import OpenAI
import os
os.environ["OPENAI_API_KEY"] = "*"

llm = OpenAI(temperature=0)
chain = load_summarize_chain(llm, chain_type="map_reduce")
print(chain.run(data))
```

Error:
`openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 8356 tokens (8100 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.`
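One common mitigation, as a sketch: split the loaded documents into chunks below the model's context window before running the map-reduce chain. `RecursiveCharacterTextSplitter` is an existing LangChain splitter; the chunk sizes are illustrative, and `chain`/`data` are the objects defined above:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
split_docs = text_splitter.split_documents(data)
print(chain.run(split_docs))
```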
model's maximum context length
https://api.github.com/repos/langchain-ai/langchain/issues/1349/comments
31
2023-03-01T03:22:21Z
2023-11-15T16:10:23Z
https://github.com/langchain-ai/langchain/issues/1349
1,604,163,597
1,349
[ "hwchase17", "langchain" ]
The function `get_summarize_call` is not returning the `output_text`. I am getting the responses for the intermediate steps, but the final `output_text` is not returned. With `verbose=True` I can see that the request to OpenAI for the final `output_text` is being made, but the field appears empty in the returned object.

```python
def get_summarize_call(words):
    llm = OpenAI(temperature=0, openai_api_key=settings.openai_api_key)
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=0)
    metadata = {"source": "internet", "date": "Friday"}
    listText = []
    listText.append(Document(page_content=words, metadata=metadata))
    texts = text_splitter.split_documents(listText)
    prompt_template = """Summarize this conversation in 10 bullet points {text}. Do NOT write sentences that has no sense."""
    prompt_template2 = """Choose the more relevant phrases of the following text and summarize them in 10 bullet points: {text}"""
    PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
    PROMPT2 = PromptTemplate(template=prompt_template2, input_variables=["text"])
    overall_chain = load_summarize_chain(
        llm,
        chain_type="map_reduce",
        map_prompt=PROMPT,
        combine_prompt=PROMPT2,
        return_intermediate_steps=True,
        verbose=True,
    )
    values = overall_chain({"input_documents": texts}, return_only_outputs=True)
    return values
```
load_summarize_chain with return_intermediate_steps=True Does not return output_text.
https://api.github.com/repos/langchain-ai/langchain/issues/1343/comments
3
2023-02-28T18:15:46Z
2023-08-25T15:17:58Z
https://github.com/langchain-ai/langchain/issues/1343
1,603,583,732
1,343
[ "hwchase17", "langchain" ]
While using langchain for Pinecone upserts I was frequently running into an error discussed [here](https://community.pinecone.io/t/metadata-size-error/604/11). I found that langchain includes the chunk text in the metadata, alongside whatever the user sends as metadata, [here](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/pinecone.py#:~:text=for%20j%2C,%5D%20%3D%20line). What is the purpose of doing this? In many cases it will easily generate metadata payloads that exceed the permissible [limit set by Pinecone](https://docs.pinecone.io/docs/limits#metadata:~:text=Max%20metadata%20size%20per%20vector%20is%2010%20KB.).
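The relevant pattern, paraphrased as a sketch from `langchain/vectorstores/pinecone.py` at the time (the helper and its names are mine, for illustration only): the full chunk text is merged into each metadata dict before upserting, which is what inflates the payload past the 10 KB cap:

```python
def build_pinecone_payload(texts, embeddings, metadatas, ids, text_key="text"):
    """Paraphrase of the add_texts loop: note the chunk text joins the metadata."""
    docs = []
    for i, text in enumerate(texts):
        metadata = dict(metadatas[i]) if metadatas else {}
        metadata[text_key] = text  # full chunk text stored as vector metadata
        docs.append((ids[i], embeddings[i], metadata))
    return docs

payload = build_pinecone_payload(
    texts=["a very long chunk..."],
    embeddings=[[0.1, 0.2]],
    metadatas=[{"source": "doc-1"}],
    ids=["id-1"],
)
print(payload)
```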
Including text metadata in pinecone upsert.
https://api.github.com/repos/langchain-ai/langchain/issues/1341/comments
5
2023-02-28T15:14:19Z
2023-09-27T16:13:52Z
https://github.com/langchain-ai/langchain/issues/1341
1,603,265,730
1,341
[ "hwchase17", "langchain" ]
I forked & cloned the project to my dev env on MacOS, then ran 'make test', the test case 'test_incorrect_command_return_err_output' from test_bash.py failed with the following output: <img width="1139" alt="image" src="https://user-images.githubusercontent.com/64731944/221828313-4c3f6284-9fd4-4bb5-b489-8d7e911ada03.png"> I then tried the test in my Linux dev env, the test case passed successfully. this line of code in the test case: `output = session.run(["invalid_command"])` its output on MacOS is: `/bin/sh: invalid_command: command not found\n` and on Linux it is `/bin/sh: 1: invalid_command: not found\n` The difference is from the underlying "subprocess" library, and as lots of developers use MacOS as their dev env, I think it makes sense to make the test case support both MacOS and Linux, so I would suggest using a regex to do the assertion: `assert re.match(r'^/bin/sh:.*invalid_command.*not found.*$', output)`
UT test_bash.py broken on MacOS dev environment
https://api.github.com/repos/langchain-ai/langchain/issues/1339/comments
0
2023-02-28T10:51:39Z
2023-03-21T16:06:54Z
https://github.com/langchain-ai/langchain/issues/1339
1,602,806,273
1,339
[ "hwchase17", "langchain" ]
The LLM discards a lot of information while returning the `LLMResult` and only returns a few keys. Especially in the case of the `OpenAI` LLM, only the total token usage is returned while things like API type, organization ID, response time etc. are all discarded. These extra keys should be added to the `llm_output` property of the `LLMResult` as an easy fix. https://github.com/hwchase17/langchain/blob/72ef69d1ba33f052bf3948c1e1d7d6441b14af0a/langchain/schema.py#L46
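A sketch of the shape being proposed (`LLMResult` and `Generation` are the real classes from `langchain.schema`; every key besides `token_usage` is illustrative of what currently gets discarded):

```python
from langchain.schema import Generation, LLMResult

result = LLMResult(
    generations=[[Generation(text="...")]],
    llm_output={
        "token_usage": {"total_tokens": 42},   # the only thing kept today
        "model_name": "text-davinci-003",      # proposed extras below:
        "organization": "org-...",
        "response_ms": 512,
    },
)
print(result.llm_output)
```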
Provide more information in the LLMResult
https://api.github.com/repos/langchain-ai/langchain/issues/1337/comments
1
2023-02-28T08:01:30Z
2023-09-12T21:30:10Z
https://github.com/langchain-ai/langchain/issues/1337
1,602,547,100
1,337
[ "hwchase17", "langchain" ]
The OpenAI [embeddings](https://platform.openai.com/docs/api-reference/embeddings/create) API can take a list of string as input. However, in the various stores in `langchain`, we use the following pattern: `[self.embedding_function(text) for text in texts]` to embed a list of documents, e.g. [here](https://github.com/hwchase17/langchain/blob/1aa41b574120fffcb2eee4ddebf023a9807337fb/langchain/vectorstores/qdrant.py#L73)
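A sketch of the batched alternative using `OpenAIEmbeddings`, whose `embed_documents` already sends the whole list in one request (the texts are placeholders):

```python
from langchain.embeddings import OpenAIEmbeddings

embedding = OpenAIEmbeddings()
texts = ["first chunk", "second chunk", "third chunk"]

# one API request per text (the pattern linked above):
per_text = [embedding.embed_query(t) for t in texts]

# a single bulk request instead:
bulk = embedding.embed_documents(texts)
```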
Leverage Bulk Embeddings APIs where possible
https://api.github.com/repos/langchain-ai/langchain/issues/1335/comments
2
2023-02-28T03:15:46Z
2023-03-31T16:22:21Z
https://github.com/langchain-ai/langchain/issues/1335
1,602,298,983
1,335
[ "hwchase17", "langchain" ]
Langchain stopped working in an AWS Lambda function using version 0.0.95; it works fine in 0.0.94. Specifically, it fails to build my AWS SAM application: it gets stuck at `[INFO]: Running PythonPipBuilder:ResolveDependencies`. Switching to 0.0.94 is the solution for now...
Langchain Stopped Working in AWS Lambda Function using version 0.0.95, it works fine in 0.0.94
https://api.github.com/repos/langchain-ai/langchain/issues/1329/comments
3
2023-02-27T20:58:31Z
2023-09-10T16:43:31Z
https://github.com/langchain-ai/langchain/issues/1329
1,601,937,801
1,329
[ "hwchase17", "langchain" ]
Following the example here https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/vector_db_qa_with_sources.html, I'm connecting a weaviate vector store to VectorDBQAWithSourcesChain with chain_type="stuff" and it gives really garbled results, possibly including some defaults that haven't been overwritten. For example, calling `qa = VectorDBQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", vectorstore=vectorstore_w)` (personal vector store) gives: `VectorDBQAWithSourcesChain(memory=None, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x12c221880>, verbose=False, combine_documents_chain=StuffDocumentsChain(memory=None, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x12c221880>, verbose=False, input_key='input_documents', output_key='output_text', llm_chain=LLMChain(memory=None, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x12c221880>, verbose=False, prompt=PromptTemplate(input_variables=['summaries', 'question'], output_parser=None, template='Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES"). \nIf you don\'t know the answer, just say that you don\'t know. Don\'t try to make up an answer.\nALWAYS return a "SOURCES" part in your answer.\n\nQUESTION: Which state/country\'s law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. 
\n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we won’t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet’s use this moment to reset. Let’s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet’s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\nSource: 24-pl\nContent: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as I’ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what’s happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\nSource: 5-pl\nContent: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt’s based on DARPA—the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose—to drive breakthroughs in cancer, Alzheimer’s, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. 
\n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. \n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:', template_format='f-string', validate_template=True), llm=OpenAI(cache=None, verbose=False, callback_manager=<langchain.callbacks.shared.SharedCallbackManager object at 0x12c221880>, client=<class 'openai.api_resources.completion.Completion'>, model_name='text-davinci-003', temperature=0.0, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs={}, openai_api_key=None, batch_size=20, request_timeout=None, logit_bias={}, max_retries=6, streaming=False), output_key='text'), document_prompt=PromptTemplate(input_variables=['page_content', 'source'], output_parser=None, template='Content: {page_content}\nSource: {source}', template_format='f-string', validate_template=True), document_variable_name='summaries'), question_key='question', input_docs_key='docs', answer_key='answer', sources_answer_key='sources', vectorstore=<langchain.vectorstores.weaviate.Weaviate object at 0x12e927c10>, k=4, reduce_k_below_max_tokens=False, max_tokens_limit=3375, search_kwargs={})`
Issue with VectorDBQAWithSourcesChain and chain_type="stuff"
https://api.github.com/repos/langchain-ai/langchain/issues/1326/comments
6
2023-02-27T19:23:08Z
2023-09-28T16:12:03Z
https://github.com/langchain-ai/langchain/issues/1326
1,601,806,915
1,326
[ "hwchase17", "langchain" ]
Hi, in [this Wolfram Alpha demo colab notebook](https://colab.research.google.com/drive/1AAyEdTz-Z6ShKvewbt1ZHUICqak0MiwR?usp=sharing#scrollTo=M4S2HDizS6L2), if you enter "How many ping pong balls fit into a jumbo jet?", Wolfram Alpha returns "31 million", but the conversational agent instead answers "that's a lot of ping pong balls". I'm curious how the agent decides which tool to use, and how to improve this so Wolfram Alpha is selected in this case.
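For what it's worth, tool choice in the conversational agent is driven by the tool `description` strings rendered into the prompt, so the usual lever is a sharper description. A sketch (assuming an existing `WolframAlphaAPIWrapper` instance named `wolfram`; the description text is my suggestion):

```python
from langchain.agents import Tool

wolfram_tool = Tool(
    name="Wolfram Alpha",
    func=wolfram.run,
    description=(
        "Useful for math, physics, unit conversions, and quantitative "
        "estimation questions such as 'how many X fit in Y'. "
        "Input should be a fully formed question."
    ),
)
```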
How do I coax the "conversational-react-description" agent to use Wolfram Alpha
https://api.github.com/repos/langchain-ai/langchain/issues/1322/comments
11
2023-02-27T11:09:18Z
2023-11-08T16:10:25Z
https://github.com/langchain-ai/langchain/issues/1322
1,600,967,643
1,322
[ "hwchase17", "langchain" ]
Hi @hwchase17, congrats on this fantastic project! This is Dani, co-founder of [Argilla](https://github.com/argilla-io/argilla). As I mentioned a few weeks ago over Twitter, we and other members of our community are starting to use `langchain` in combination with Argilla. Besides zero/few-shot chains for data labeling and augmentation, I see a lot of potential in using Argilla as a store for prompts and generations, to monitor them and eventually use these data points for fine-tuning LLMs (ranking generations, shifting towards smaller, open-source LLMs, etc.). As Argilla is designed around the idea of dynamic datasets where you can log and retrieve records continuously, there's already a good fit as a cache layer, but I would like to get your initial thoughts on this and see where and how we could build an integration layer with `langchain`.
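To make the idea concrete, here is a rough sketch of the store layer I have in mind; the dataset name and wrapper function are illustrative assumptions, and the record type follows Argilla's `Text2TextRecord` client API:

```python
# Rough sketch (not a proposed final API): log every prompt/generation pair
# into an Argilla dataset so generations can later be ranked or exported for
# fine-tuning. Dataset name and wrapper function are illustrative assumptions.
import argilla as rg
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

def generate_and_log(prompt: str, dataset: str = "langchain-generations") -> str:
    completion = llm(prompt)
    rg.log(
        rg.Text2TextRecord(text=prompt, prediction=[completion]),
        name=dataset,
    )
    return completion
```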
Using Argilla as prompt and generations store
https://api.github.com/repos/langchain-ai/langchain/issues/1321/comments
1
2023-02-27T10:26:43Z
2023-08-24T16:16:23Z
https://github.com/langchain-ai/langchain/issues/1321
1,600,898,132
1,321
[ "hwchase17", "langchain" ]
I'm using this for a Q&A bot, utilizing OpenAI and its ada model. Do you have any tips on generating your own Q&A document?

I initially tried ingesting a document that had "Q: how to do X \n A: Do this by doing Y". I then saw it sometimes didn't answer properly, so I changed the answer to "To do X, you must do Y". I then noticed that if I slightly reworded a question but kept the same answer (in case people ask it differently), it would give that answer for completely unrelated questions. So I had to remove all the duplicate answers. I then noticed it would occasionally include the "A: " prefix in its replies, so I got rid of all the questions. But now it won't answer some of the questions, and it makes up completely false data, including snippets stitched together from various answers. So there are big hallucinations here too, and I'm not sure how to prevent them.
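In case a concrete pattern helps: rather than engineering the Q/A document itself, the usual mitigation is to retrieve relevant chunks at query time and pin the model to them with an explicit refusal instruction. A hedged sketch, where the prompt wording and the populated `docsearch` store are assumptions:

```python
# Sketch: ground answers in retrieved context with an explicit "I don't know"
# escape hatch. Assumption: `docsearch` is a vector store already built from
# the ingested document.
from langchain.chains import VectorDBQA
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

template = """Answer the question using ONLY the context below.
If the answer is not in the context, say "I don't know." Do not make anything up.

Context: {context}

Question: {question}
Answer:"""
prompt = PromptTemplate(template=template, input_variables=["context", "question"])

qa = VectorDBQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    vectorstore=docsearch,
    chain_type_kwargs={"prompt": prompt},
)
print(qa.run("How do I do X?"))
```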
Using this as a Q&A bot?
https://api.github.com/repos/langchain-ai/langchain/issues/1320/comments
1
2023-02-27T08:25:31Z
2023-08-24T16:16:28Z
https://github.com/langchain-ai/langchain/issues/1320
1,600,709,496
1,320
[ "hwchase17", "langchain" ]
### Description
The unit test "test_incorrect_command_return_err_output" in test_bash.py expects:

    "/bin/sh: 1: invalid_command: not found\n"

while the actual output of the BashProcess.run under test is:

    "/bin/sh: invalid_command: command not found\n"

Since BashProcess delegates the call to subprocess.run, I think the unit test should adapt to the subprocess output.

### Environment
macOS 11.4, Python 3.11.2

### Steps to reproduce
1. Fork and clone the project to the dev machine
2. Enter the project directory
3. Run 'make test'; you will see the assertion error

### Suggestion
Though utilizing a regex would make the test robust (see the sketch below), most tests in test_bash.py follow this exactly-matching style, so I would suggest changing the assertion's expectation to match the output of subprocess.run.
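For illustration, here is a regex-based variant of the assertion that would pass under both shells' phrasings; this is a sketch only, and the `return_err_output` usage mirrors the existing test:

```python
# Sketch: make the expectation shell-agnostic instead of hard-coding one
# shell's error phrasing.
import re

from langchain.utilities.bash import BashProcess

def test_incorrect_command_return_err_output() -> None:
    session = BashProcess(return_err_output=True)
    output = session.run(["invalid_command"])
    # dash prints "/bin/sh: 1: invalid_command: not found"; bash prints
    # "/bin/sh: invalid_command: command not found" -- accept either.
    assert re.search(r"invalid_command: (command )?not found", output)
```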
UT "test_incorrect_command_return_err_output" in test_bash.py broken
https://api.github.com/repos/langchain-ai/langchain/issues/1319/comments
1
2023-02-27T08:03:45Z
2023-02-28T07:38:56Z
https://github.com/langchain-ai/langchain/issues/1319
1,600,676,512
1,319
[ "hwchase17", "langchain" ]
<img width="1440" alt="Screenshot 2023-02-26 at 8 17 04 PM" src="https://user-images.githubusercontent.com/52064142/221456499-8f60c89d-0e72-45e9-8a20-1483b2a0b62b.png"> When I click in awesome paper it redirects me here https://memprompt.com/ which doesn't open !
Documentation Link Broken
https://api.github.com/repos/langchain-ai/langchain/issues/1312/comments
3
2023-02-27T02:17:31Z
2023-09-18T16:24:05Z
https://github.com/langchain-ai/langchain/issues/1312
1,600,324,925
1,312
[ "hwchase17", "langchain" ]
It would be great to follow OpenAI's [rate limits guide](https://platform.openai.com/docs/guides/rate-limits/overview) to ensure that this LLM works as expected no matter what chunk size or chunk overlap is used. See related issue: https://github.com/hwchase17/langchain/issues/634
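For reference, a minimal sketch of the jittered exponential backoff that guide recommends, applied around a langchain call; where exactly to apply the wrapper is a design choice, not something the library prescribes:

```python
# Sketch: retry with jittered exponential backoff, per OpenAI's rate-limits
# guide. Wrapping the whole LLM call here is an illustrative choice.
from tenacity import retry, stop_after_attempt, wait_random_exponential
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

@retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6))
def complete_with_backoff(prompt: str) -> str:
    # Retried automatically when a RateLimitError (or any error) is raised
    return llm(prompt)

print(complete_with_backoff("Say hello"))
```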
OpenAI Rate Limits
https://api.github.com/repos/langchain-ai/langchain/issues/1310/comments
11
2023-02-26T20:10:32Z
2023-12-12T17:03:45Z
https://github.com/langchain-ai/langchain/issues/1310
1,600,185,079
1,310
[ "hwchase17", "langchain" ]
<img width="942" alt="image" src="https://user-images.githubusercontent.com/13427145/221388828-8bbe4c56-dcaf-4b78-b32e-3dfa2775b7e7.png"> I used the above to query the index in gpt-index through the langchain agent, but now the result only returns the result str. I hope it can return this structure <img width="570" alt="image" src="https://user-images.githubusercontent.com/13427145/221388772-b90e720f-7da7-454d-b75a-8a438c763594.png"> Anyone know how to deal with , thanks
How do I get the source of index results
https://api.github.com/repos/langchain-ai/langchain/issues/1301/comments
8
2023-02-26T02:21:23Z
2024-05-22T16:07:10Z
https://github.com/langchain-ai/langchain/issues/1301
1,599,905,275
1,301
[ "hwchase17", "langchain" ]
`ConversationBufferMemory` and the other associated memories all add an extra space in front of AI responses when moving them into the history buffer. This can be seen in the examples on [this page](https://langchain.readthedocs.io/en/latest/modules/memory/getting_started.html), for instance: ``` > Entering new ConversationChain chain... Prompt after formatting: The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know. Current conversation: Human: Hi there! AI: Hi there! It's nice to meet you. How can I help you today? Human: I'm doing well! Just having a conversation with an AI. AI: That's great! It's always nice to have a conversation with someone new. What would you like to talk about? Human: Tell me about yourself. AI: > Finished ConversationChain chain. ``` In my brief experiments, this sometimes also causes the LLM to copy the double space prefix in future outputs. The `ConversationBufferMemory` then adds a third space, which the LLM can _also_ copy sometimes, leading to four spaces, etc.
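A workaround sketch, under the assumption that the doubled space comes from the completion itself starting with a space, which the memory then stores after its own "AI: " prefix; stripping before saving keeps the buffer clean:

```python
# Sketch: strip whitespace from inputs/outputs before they reach the buffer.
# Assumption: the extra space originates in the raw completion text.
from langchain.chains import ConversationChain
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.llms import OpenAI

class StrippedBufferMemory(ConversationBufferMemory):
    """Buffer memory that strips string inputs/outputs before saving them."""

    def save_context(self, inputs: dict, outputs: dict) -> None:
        inputs = {k: v.strip() if isinstance(v, str) else v for k, v in inputs.items()}
        outputs = {k: v.strip() if isinstance(v, str) else v for k, v in outputs.items()}
        super().save_context(inputs, outputs)

conversation = ConversationChain(llm=OpenAI(temperature=0), memory=StrippedBufferMemory())
```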
ConversationBufferMemory adds an extra space in front of AI responses
https://api.github.com/repos/langchain-ai/langchain/issues/1298/comments
1
2023-02-26T00:56:26Z
2023-08-24T16:16:33Z
https://github.com/langchain-ai/langchain/issues/1298
1,599,889,031
1,298