Dataset columns:
- issue_owner_repo: list of strings (length 2)
- issue_body: string (0 to 261k characters, nullable)
- issue_title: string (1 to 925 characters)
- issue_comments_url: string (56 to 81 characters)
- issue_comments_count: int64 (0 to 2.5k)
- issue_created_at: string (20 characters)
- issue_updated_at: string (20 characters)
- issue_html_url: string (37 to 62 characters)
- issue_github_id: int64 (387k to 2.46B)
- issue_number: int64 (1 to 127k)
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. Looking to use `AsyncChromiumLoader` to scrape URLs but I keep running into the following error: ```python loader = AsyncChromiumLoader(["https://www.espn.com/nfl/story/_/id/38652877/lions-rb-jahmyr-gibbs-wr-amon-ra-st-brown-return-vs-bucs"]) loader.load() --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[17], line 2 1 loader = AsyncChromiumLoader(["https://www.espn.com/nfl/story/_/id/38652877/lions-rb-jahmyr-gibbs-wr-amon-ra-st-brown-return-vs-bucs"]) ----> 2 loader.load() File /opt/conda/lib/python3.10/site-packages/langchain/document_loaders/chromium.py:90, in AsyncChromiumLoader.load(self) 81 def load(self) -> List[Document]: 82 """ 83 Load and return all Documents from the provided URLs. 84 (...) 88 89 """ ---> 90 return list(self.lazy_load()) File /opt/conda/lib/python3.10/site-packages/langchain/document_loaders/chromium.py:77, in AsyncChromiumLoader.lazy_load(self) 66 """ 67 Lazily load text content from the provided URLs. 68 (...) 74 75 """ 76 for url in self.urls: ---> 77 html_content = asyncio.run(self.ascrape_playwright(url)) 78 metadata = {"source": url} 79 yield Document(page_content=html_content, metadata=metadata) File /opt/conda/lib/python3.10/asyncio/runners.py:33, in run(main, debug) 9 """Execute the coroutine and return the result. 10 11 This function runs the passed coroutine, taking care of (...) 30 asyncio.run(main()) 31 """ 32 if events._get_running_loop() is not None: ---> 33 raise RuntimeError( 34 "asyncio.run() cannot be called from a running event loop") 36 if not coroutines.iscoroutine(main): 37 raise ValueError("a coroutine was expected, got {!r}".format(main)) RuntimeError: asyncio.run() cannot be called from a running event loop ``` I am running langchain 0.0.314. I looked through [the documentation](https://python.langchain.com/docs/integrations/document_loaders/async_html) which wasn't helpful. I found [this stackoverflow thread](https://stackoverflow.com/questions/55409641/asyncio-run-cannot-be-called-from-a-running-event-loop-when-using-jupyter-no) which suggests adding an `await` but that seems useful if we are directly using the asychio class. It's worth noting that I run into a similar error when using the `PlaywrightURLLoader` and I run into the following error: ```python loader = PlaywrightURLLoader(["https://www.espn.com/nfl/story/_/id/38652877/lions-rb-jahmyr-gibbs-wr-amon-ra-st-brown-return-vs-bucs"]) loader.load() --------------------------------------------------------------------------- Error Traceback (most recent call last) Cell In[186], line 2 1 loader = PlaywrightURLLoader(["https://www.espn.com/nfl/story/_/id/38652877/lions-rb-jahmyr-gibbs-wr-amon-ra-st-brown-return-vs-bucs"]) ----> 2 loader.load() File /opt/conda/lib/python3.10/site-packages/langchain/document_loaders/url_playwright.py:154, in PlaywrightURLLoader.load(self) 150 from playwright.sync_api import sync_playwright 152 docs: List[Document] = list() --> 154 with sync_playwright() as p: 155 browser = p.chromium.launch(headless=self.headless) 156 for url in self.urls: File /opt/conda/lib/python3.10/site-packages/playwright/sync_api/_context_manager.py:47, in PlaywrightContextManager.__enter__(self) 45 self._own_loop = True 46 if self._loop.is_running(): ---> 47 raise Error( 48 """It looks like you are using Playwright Sync API inside the asyncio loop. 49 Please use the Async API instead.""" 50 ) 52 # Create a new fiber for the protocol dispatcher. 
It will be pumping events 53 # until the end of times. We will pass control to that fiber every time we 54 # block while waiting for a response. 55 def greenlet_main() -> None: Error: It looks like you are using Playwright Sync API inside the asyncio loop. Please use the Async API instead. ``` ### Suggestion: _No response_
Issue: `AsyncChromiumLoader` RuntimeError: asyncio.run() cannot be called from a running event loop
https://api.github.com/repos/langchain-ai/langchain/issues/11810/comments
2
2023-10-14T16:12:24Z
2024-02-08T16:18:46Z
https://github.com/langchain-ai/langchain/issues/11810
1,943,375,169
11,810
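For the `AsyncChromiumLoader` error above, a likely cause is that Jupyter/IPython already runs an event loop, so the `asyncio.run()` call inside `loader.load()` cannot start another one. A minimal workaround sketch, assuming `nest_asyncio` is installed alongside `playwright` (neither is confirmed by the report):

```python
# Sketch only: patch asyncio so loader.load() can nest asyncio.run() inside the
# already-running notebook event loop.
import nest_asyncio
nest_asyncio.apply()

from langchain.document_loaders import AsyncChromiumLoader

loader = AsyncChromiumLoader(["https://www.espn.com"])
docs = loader.load()  # should no longer raise RuntimeError in a notebook
```

Outside a notebook the same code works without the patch, because no loop is running when `load()` is called.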
[ "hwchase17", "langchain" ]
### System Info LangChain: langchain-0.0.314 Python: Anaconda Python 3.9.18 X86 RTX3080 Laptop (16G) CUDA 11.8 cuDNN 8.9.5 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1.git clone https://github.com/ymcui/Chinese-LLaMA-Alpaca-2.git 2.cd Chinese-LLaMA-Alpaca-2/scripts/langchain 3.python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine (langchain) zhanghui@zhanghui-OMEN-by-HP-Laptop-17-ck0xxx:~/Chinese-LLaMA-Alpaca-2/scripts/langchain$ python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine /home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/__init__.py:39: UserWarning: Importing HuggingFacePipeline from langchain root module is no longer supported. warnings.warn( loading LLM... Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:03<00:00, 1.86s/it] /home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:362: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed. warnings.warn( /home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:367: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed. warnings.warn( Traceback (most recent call last): File "/home/zhanghui/Chinese-LLaMA-Alpaca-2/scripts/langchain/langchain_sum.py", line 50, in <module> model = HuggingFacePipeline.from_model_id(model_id=model_path, File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/llms/huggingface_pipeline.py", line 112, in from_model_id model.is_quantized File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__ raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized' ### Expected behavior ![image](https://github.com/langchain-ai/langchain/assets/63148804/b7dc429d-e19b-45fc-ba70-b977b72af9b8)
AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized'
https://api.github.com/repos/langchain-ai/langchain/issues/11809/comments
4
2023-10-14T13:46:33Z
2023-10-16T23:54:22Z
https://github.com/langchain-ai/langchain/issues/11809
1,943,280,693
11,809
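The `is_quantized` error above (and in the Falcon and Bloom reports further down) is raised inside `HuggingFacePipeline.from_model_id`, which reads `model.is_quantized`, an attribute that not every transformers version or model class exposes. One hedged workaround is to skip `from_model_id` and wrap a manually built transformers pipeline instead; the model path below is the local checkpoint from the report, and `max_new_tokens` is an arbitrary choice:

```python
# Sketch: build the transformers pipeline yourself, then hand it to langchain,
# so the is_quantized check inside from_model_id is never reached.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline

model_path = "chinese-alpaca-2-7b-hf"  # local checkpoint path from the report
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=256)

llm = HuggingFacePipeline(pipeline=pipe)
```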
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I'm encountering an error while trying to use a Falcon model to chat with a bunch of text files. I've downloaded the model locally on my Google Drive, and everything seems fine up to this point. However, when I try to define the LLM model in Langchain using the following code: ``` from langchain.llms import HuggingFacePipeline llm = HuggingFacePipeline.from_model_id( model_id='/content/gdrive/MyDrive/Colab Notebooks/falcon model/898df1396f35e447d5fe44e0a3ccaaaa69f30d36', task="text-generation", model_kwargs={"temperature": 0, "max_length": 64}, ) ``` I encounter the following error: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-31-902f2455d555> in <cell line: 3>() 1 from langchain.llms import HuggingFacePipeline 2 ----> 3 llm = HuggingFacePipeline.from_model_id( 4 model_id='/content/gdrive/MyDrive/Colab Notebooks/falcon model/898df1396f35e447d5fe44e0a3ccaaaa69f30d36', 5 task="text-generation", AttributeError: 'FalconForCausalLM' object has no attribute 'is_quantized' ``` the whole code ``` device = torch.device('cuda') checkpoint = "tiiuae/falcon-7b" print(f"Checkpoint path: {checkpoint}") # Add this line for debugging tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path="/content/gdrive/MyDrive/Colab Notebooks/falcon model/898df1396f35e447d5fe44e0a3ccaaaa69f30d36") base_model = AutoModel.from_pretrained( pretrained_model_name_or_path='/content/gdrive/MyDrive/Colab Notebooks/falcon model/898df1396f35e447d5fe44e0a3ccaaaa69f30d36', torch_dtype=torch.float32 ) loader = DirectoryLoader('/content/gdrive/MyDrive/Colab Notebooks/chatbot 3.0/', glob="./*.txt", loader_cls=TextLoader) documents = loader.load() text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=500) texts = text_splitter.split_documents(documents) embeddings = HuggingFaceEmbeddings(cache_folder='/content/gdrive/MyDrive/Colab Notebooks/falcon model/898df1396f35e447d5fe44e0a3ccaaaa69f30d36',) from langchain.llms import HuggingFacePipeline llm = HuggingFacePipeline.from_model_id( model_id='/content/gdrive/MyDrive/Colab Notebooks/falcon model/898df1396f35e447d5fe44e0a3ccaaaa69f30d36', task="text-generation", model_kwargs={"temperature": 0, "max_length": 64}, ) ``` ### Suggestion: _No response_
Issue: HuggingFacePipeline error while trying to use a Falcon model to chat with text files
https://api.github.com/repos/langchain-ai/langchain/issues/11807/comments
2
2023-10-14T12:59:34Z
2024-02-07T16:18:58Z
https://github.com/langchain-ai/langchain/issues/11807
1,943,251,646
11,807
[ "hwchase17", "langchain" ]
### Issue with current documentation: [Link](https://python.langchain.com/docs/expression_language/cookbook/retrieval#with-memory-and-returning-source-documents) Hi, I've been exploring the Cookbook RAG (link provided above) and noticed that `expanded_memory` is not defined anywhere in the code snippet(**With Memory and returning source documents**). I was wondering if a portion of the code is missing or if it's just a typo. Thank you in advance for looking over my issue! ### Idea or request for content: _No response_
DOC: LangChain Cookbook RAG typo
https://api.github.com/repos/langchain-ai/langchain/issues/11805/comments
3
2023-10-14T09:54:28Z
2023-10-15T14:39:12Z
https://github.com/langchain-ai/langchain/issues/11805
1,943,154,020
11,805
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. Sorry to raise this as an issue, but I have a quick question. I am preparing some code for `langchain` and noticed that most of the code I wish to build on lacks unit tests. I was considering first submitting a PR of purely unit tests for the existing code I wish to use in order to keep subsequent PRs more digestible. _Is submitting a PR of just unit tests acceptable?_ ### Suggestion: _No response_
Issue: unit tests, a quick question.
https://api.github.com/repos/langchain-ai/langchain/issues/11803/comments
2
2023-10-14T09:11:36Z
2024-02-07T16:19:03Z
https://github.com/langchain-ai/langchain/issues/11803
1,943,107,712
11,803
[ "hwchase17", "langchain" ]
### Feature request - If a user wants to get content only from documents whose confidence is HIGH or HIGHEST, then with the help of this feature Kendra will return only the more relevant records. ### Motivation - Sometimes Kendra returns content with a LOW `ScoreConfidence`, which may lead to inaccurate results. ### Your contribution - I've checked the `_get_relevant_documents` method of the `AmazonKendraRetriever` class, but it only returns the top_k docs and does not actually filter based on `ScoreConfidence`. - With the help of `boto3` we can achieve that. - I would like to do more research and open a PR for it.
Add support for AWS Kendra `ScoreConfidence` filter.
https://api.github.com/repos/langchain-ai/langchain/issues/11801/comments
2
2023-10-14T07:10:08Z
2024-03-08T01:28:10Z
https://github.com/langchain-ai/langchain/issues/11801
1,943,036,454
11,801
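Until such a filter exists in `AmazonKendraRetriever`, the same effect can be had client-side with `boto3`, as the issue suggests: query Kendra directly and keep only the results whose `ScoreConfidence` is high. Illustrative sketch only; the region and index id are placeholders, and Kendra's confidence labels are `VERY_HIGH`, `HIGH`, `MEDIUM`, `LOW`, `NOT_AVAILABLE`:

```python
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

def high_confidence_results(index_id: str, query_text: str,
                            allowed=("VERY_HIGH", "HIGH")):
    """Return only Kendra results whose ScoreConfidence is in `allowed`."""
    response = kendra.query(IndexId=index_id, QueryText=query_text)
    return [
        item for item in response.get("ResultItems", [])
        if item.get("ScoreAttributes", {}).get("ScoreConfidence") in allowed
    ]
```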
[ "hwchase17", "langchain" ]
### Feature request The feature proposal is to integrate a new Langchain tool that would provide support for queries on [NASA APIs](https://api.nasa.gov/). Here is the list of NASA APIs that will be integrated into the new tool: - APOD - Asteroids NeoWs - Earth - EONET - EPIC - GeneLab - Insight - Mars Rover Photos - NASA Image and Video Library - TechTransfer - TLE API - Vesta/Moon/Mars Trek WMTS ### Motivation Currently, Langchain does not provide access to a large data pool of reliable information related to astronomy. The integration of a NASA API tool would allow users to retrieve updated and accurate data related to satellite imagery, mission data and other space-related information. In cases where accuracy of data is critical, the NASA API tool may also be preferred over its GPT-generated counterpart. ### Your contribution I am part of a group of 3 other students that will be able to submit a PR for this issue sometime in November.
feat: add NASA API tool
https://api.github.com/repos/langchain-ai/langchain/issues/11787/comments
8
2023-10-13T23:29:42Z
2024-03-13T19:56:56Z
https://github.com/langchain-ai/langchain/issues/11787
1,942,774,990
11,787
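To make the proposal concrete, below is a rough illustration (not an existing langchain tool) of how one of the listed endpoints, APOD, could be wrapped with langchain's generic `Tool`; the helper name is made up and `DEMO_KEY` is NASA's public demo key:

```python
import requests
from langchain.agents import Tool

def apod_lookup(_query: str) -> str:
    # APOD ignores free-text input; the tool simply fetches today's entry.
    resp = requests.get(
        "https://api.nasa.gov/planetary/apod",
        params={"api_key": "DEMO_KEY"},
        timeout=30,
    )
    data = resp.json()
    return f"{data.get('title', '')}: {data.get('explanation', '')}"

nasa_apod_tool = Tool(
    name="nasa_apod",
    description="Returns the title and explanation of NASA's Astronomy Picture of the Day.",
    func=apod_lookup,
)
```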
[ "hwchase17", "langchain" ]
### System Info Python Version: 3.11.0 LangChain Version: 0.0.264 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ### define a custom chain: ``` class BlastAPIChain(Chain): prompt: BasePromptTemplate llm: BaseLanguageModel @property def input_keys(self) -> List[str]: return ["prompt"] @property def output_keys(self) -> List[str]: return ["response"] @property def _chain_type(self) -> str: return "custom_chain" def _call( self, inputs: Dict[str, Any], run_manager: Optional[CallbackManagerForChainRun] = None ) -> Dict[str, str]: ''' The primary method for this chain. :param inputs: a dictionary containing the input keys and values. :param run_manager: an optional callback manager for the chain run. ''' # some custom logic goes here, but is omitted now response = 'some custom logic would go here' return {"response": response,} ``` ### setup an instance of your custom chain ``` blast_api_chain_description = { "name": "blast_api_chain", "description": "lalalala, ", "template": """ lalalala """, } blast_api_chain_prompt = PromptTemplate( template=blast_api_chain_description["template"], input_variables=[] ) blast_api_chain = BlastAPIChain(llm=llm, verbose=True, memory=memory, prompt=blast_api_chain_prompt) ``` ### add the destination chains to destination_chains ``` DESTINATION_CHAINS['blast_api_chain'] = blast_api_chain ``` ### create a multi-prompt-chain from the destination chain ``` master_chain = MultiPromptChain( router_chain=router_chain, destination_chains=DESTINATION_CHAINS, default_chain=default_chain, verbose=True, ) ``` ### observe the issue: the blast_router_chain destination has been reassigned ``` print (master_chain.__dict__['destination_chains']['blast_api_chain']) print (DESTINATION_CHAINS['blast_api_chain']) ``` ### Expected behavior I expect that after defining `master_chain` with the code below, the `destination_chains` attribute should hold all of the same chains that are defined in `DESTINATION_CHAINS`. ``` master_chain = MultiPromptChain( router_chain=router_chain, destination_chains=DESTINATION_CHAINS, default_chain=default_chain, verbose=True, ) ``` However, this is not the case. Instead, the chains in DESTINATION_CHAINS have been over-written with different objects. This can be seen here; ![image](https://github.com/langchain-ai/langchain/assets/34389563/3af6b117-ada8-4cdf-993f-486fa060a1b7) You can see that my custom BlastAPIChain instance has been replaced with an LLMChain instance. How am I supposed to route traffic to my custom chain?
MultiPromptChain does not use its assigned destination chains
https://api.github.com/repos/langchain-ai/langchain/issues/11786/comments
4
2023-10-13T23:23:40Z
2024-02-09T16:15:58Z
https://github.com/langchain-ai/langchain/issues/11786
1,942,772,500
11,786
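One possible explanation for the behaviour above is that `MultiPromptChain` types `destination_chains` as `Mapping[str, LLMChain]`, so pydantic appears to coerce the custom `BlastAPIChain` into an `LLMChain` during validation. A hedged workaround is to build on the more general `MultiRouteChain`, whose destinations are typed as plain `Chain`; `router_chain`, `DESTINATION_CHAINS`, and `default_chain` are the objects from the report:

```python
# Sketch: route to arbitrary Chain instances instead of forcing LLMChain.
from typing import List
from langchain.chains.router.base import MultiRouteChain

class CustomMultiRouteChain(MultiRouteChain):
    """Router whose destination chains may be any Chain, not only LLMChain."""

    @property
    def output_keys(self) -> List[str]:
        # All destination chains must emit this same key; BlastAPIChain in the
        # report returns {"response": ...}.
        return ["response"]

master_chain = CustomMultiRouteChain(
    router_chain=router_chain,              # objects from the report above
    destination_chains=DESTINATION_CHAINS,
    default_chain=default_chain,
    verbose=True,
)
```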
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. my Azure search index has 4 fields id, user_id, content, content_vector. I am trying to add a conversation to the azure search vector store for a user_id ="msunkara" The code is given below: memory_vector_store: AzureSearch = AzureSearch( azure_search_endpoint=vector_store_address, azure_search_key=vector_store_password, index_name=memory_index_name, embedding_function=embeddings.embed_query) history = [] user_id = "msunkara" session_id = "1" query = Hi res.content = Hi, How can I assist you today? history.append("User: "+ query + " Assistant: " + res.content) memory_id = session_id + user_id metadata = [{"user_id": user_id}] memory_vector_store.add_texts(history,keys = [memory_id],metadatas=metadata) The above code is throwing the below error: Result: Failure Exception: HttpResponseError: () The request is invalid. Details: The property 'metadata' does not exist on type 'search.documentFields'. Make sure to only use property names that are defined by the type. Help me solve this problem ### Suggestion: _No response_
Issue: add_texts method in AzureSearch class throwing a metadata error
https://api.github.com/repos/langchain-ai/langchain/issues/11780/comments
10
2023-10-13T19:51:52Z
2024-02-14T16:09:28Z
https://github.com/langchain-ai/langchain/issues/11780
1,942,553,296
11,780
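langchain's `AzureSearch` store writes a `metadata` field alongside every document, so an index that only defines `id`, `user_id`, `content`, and `content_vector` rejects the upload. A hedged sketch, assuming it is acceptable to let langchain create a fresh index with its default schema (which includes `metadata`); the endpoint, key, embeddings, and history variables are the ones from the report:

```python
# Sketch: point AzureSearch at a new index name; if the index does not exist it is
# created with the default fields, including 'metadata', so add_texts succeeds.
from langchain.vectorstores.azuresearch import AzureSearch

memory_vector_store = AzureSearch(
    azure_search_endpoint=vector_store_address,
    azure_search_key=vector_store_password,
    index_name="memory-index-with-metadata",
    embedding_function=embeddings.embed_query,
)
memory_vector_store.add_texts(history, keys=[memory_id], metadatas=[{"user_id": user_id}])
```

With this approach the `user_id` lands inside the `metadata` field rather than in a top-level column; keeping the existing index would instead require adding a `metadata` field to its schema.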
[ "hwchase17", "langchain" ]
### Feature request As it stands, `LLMRouterChain` basically only supports a single input. Here is the template: https://github.com/langchain-ai/langchain/blob/v0.0.314/libs/langchain/langchain/chains/router/multi_prompt_prompt.py While it is currently supported to define multiple input variables on a `PromptTemplate`, the `LLMRouterChain` basically doesn't support it because of the way the output from the LLM call is handled. ``` {{{{ "destination": string \\ name of the prompt to use or "DEFAULT" "next_inputs": string \\ a potentially modified version of the original input }}}} ``` With a lot of massaging, you can get to a place where an LLM causes `next_inputs` to be a comma-separated dict-like object but, down the chain (pun intended), you end up with an explosion because the way inputs are passed to chains doesn't handle this. Overall, this implementation is odd. It expects the LLM to massage the original input(s) and then wants to use that output directly in future things. Wouldn't it make more sense to simply ask the LLM to reason out which destination to send to (alone) and then take the original inputs and pass them along? This current implementation is combining two things into one: * a routing decision * paraphrasing user input The whole purpose of Langchain is to chain things, so why wouldn't you have "paraphrase/rephrase user input" as a chain step, instead of smushing it into the router implementation? ### Motivation I have a use case where various inputs are collected separately and need to be maintained separately for a custom-tuned LLM later on. This caused me to attempt to use `LLMRouterChain` with multiple inputs, whereby I hit the roadblock described above. It doesn't seem that wild to want to be able to make a routing decision based on input A, input B, or multiple inputs, and subsequently send all of those inputs to different templates later. ### Your contribution I don't really have sufficient coding skills to help. I could maybe make some diagrams that help illustrate how this could work.
LLMRouterChain should easily support multiple inputs
https://api.github.com/repos/langchain-ai/langchain/issues/11778/comments
4
2023-10-13T19:41:14Z
2024-02-11T16:11:17Z
https://github.com/langchain-ai/langchain/issues/11778
1,942,541,303
11,778
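The separation argued for above (decide the route first, then pass the original inputs through untouched) can be approximated without `LLMRouterChain` at all. A rough sketch, assuming `destination_chains` and `default_chain` already exist and accept the same input keys; the prompt and key names are illustrative:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

route_prompt = PromptTemplate.from_template(
    "Given input A: {input_a} and input B: {input_b}, reply with exactly one "
    "destination name from: physics, math, DEFAULT."
)
router = LLMChain(llm=ChatOpenAI(temperature=0), prompt=route_prompt)

def route_and_run(inputs: dict) -> str:
    destination = router.run(inputs).strip()
    chain = destination_chains.get(destination, default_chain)
    return chain.run(inputs)  # original inputs, never paraphrased by the router
```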
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I tried to follow the contributing Quick Start [here](https://github.com/langchain-ai/langchain/blob/master/.github/CONTRIBUTING.md#-quick-start), but I'm running into issues when I run the unit tests using `make test` Here are the failing tests: ``` FAILED tests/unit_tests/agents/test_agent_iterator.py::test_agent_iterator_stopped_early - AssertionError: assert 'curses foiled again' == 'Agent stopped due to iteration limit or time limit.' FAILED tests/unit_tests/agents/test_agent_iterator.py::test_agent_iterator_reset - assert 0.0 > 0.0 FAILED tests/unit_tests/callbacks/test_callback_manager.py::test_callback_manager_with_async - pytest_socket.SocketBlockedError: A test tried to use socket.socket. FAILED tests/unit_tests/storage/test_filesystem.py::test_yield_keys - AssertionError: assert ['key1', 'subdir\\key2'] == ['key1', 'subdir/key2'] FAILED tests/unit_tests/tools/file_management/test_utils.py::test_get_validated_relative_path_errs_on_absolute - re.error: incomplete escape \U at position 53 FAILED tests/unit_tests/tools/file_management/test_utils.py::test_get_validated_relative_path_errs_on_parent_dir - re.error: incomplete escape \U at position 69 FAILED tests/unit_tests/tools/file_management/test_utils.py::test_get_validated_relative_path_errs_for_symlink_outside_root - re.error: incomplete escape \U at position 64 FAILED tests/unit_tests/tools/file_management/test_utils.py::test_get_validated_relative_path_for_symlink_inside_root - langchain.tools.file_management.utils.FileValidationError: Path symlink_inside_root is outside of the allowed direc... FAILED tests/unit_tests/tools/requests/test_tool.py::test_requests_get_tool - pytest_socket.SocketBlockedError: A test tried to use socket.socket. FAILED tests/unit_tests/tools/requests/test_tool.py::test_requests_post_tool - pytest_socket.SocketBlockedError: A test tried to use socket.socket. FAILED tests/unit_tests/tools/requests/test_tool.py::test_requests_patch_tool - pytest_socket.SocketBlockedError: A test tried to use socket.socket. FAILED tests/unit_tests/tools/requests/test_tool.py::test_requests_put_tool - pytest_socket.SocketBlockedError: A test tried to use socket.socket. FAILED tests/unit_tests/tools/requests/test_tool.py::test_requests_delete_tool - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/agents/test_agent_iterator.py::test_agent_async_iterator_stopped_early - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/agents/test_agent_iterator.py::test_agent_async_iterator_with_callbacks - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/agents/test_agent_iterator.py::test_agent_async_iterator_output_structure - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/callbacks/test_callback_manager.py::test_callback_manager_with_async_with_running_loop - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/callbacks/test_callback_manager.py::test_async_callback_manager - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/callbacks/test_callback_manager.py::test_async_callback_manager_sync_handler - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/chains/test_sequential.py::test_simple_sequential_functionality_with_callbacks[False] - pytest_socket.SocketBlockedError: A test tried to use socket.socket. 
ERROR tests/unit_tests/chains/test_sequential.py::test_simple_sequential_functionality_with_callbacks[True] - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/indexes/test_indexing.py::test_abatch - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/output_parsers/test_json.py::test_partial_text_json_output_parser_async - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/output_parsers/test_json.py::test_partial_functions_json_output_parser_async - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/output_parsers/test_json.py::test_partial_text_json_output_parser_diff_async - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/output_parsers/test_json.py::test_partial_functions_json_output_parser_diff_async - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/retrievers/test_base.py::test_fake_retriever_v1_upgrade_async - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/retrievers/test_base.py::test_fake_retriever_v1_with_kwargs_upgrade_async - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/retrievers/test_base.py::test_fake_retriever_v2_async - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/retrievers/self_query/test_base.py::test__aget_relevant_documents - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_locals.py::test_put_get_async[<lambda>-foo-foo] - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_locals.py::test_put_get_async[<lambda>-input1-output1] - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_passthrough_tap_async - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_with_config - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_default_method_implementations - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_prompt - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_prompt_with_chat_model - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_prompt_with_llm - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_stream_log_retriever - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_prompt_with_llm_and_async_lambda - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_router_runnable - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_higher_order_lambda_runnable - pytest_socket.SocketBlockedError: A test tried to use socket.socket. 
ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_map_astream - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_map_astream_iterator_input - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_deep_astream - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_deep_astream_assign - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_runnable_sequence_atransform - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_llm_with_fallbacks[llm_with_fallbacks] - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_llm_with_fallbacks[llm_with_multi_fallbacks] - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_llm_with_fallbacks[llm_chain_with_fallbacks] - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_async_retrying - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_seq_abatch_return_exceptions - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_runnable_branch_ainvoke - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_runnable_branch_ainvoke_callbacks - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_runnable_branch_abatch - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_tool_from_runnable - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_runnable_gen - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/schema/runnable/test_runnable.py::test_runnable_gen_transform - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/smith/evaluation/test_runner_utils.py::test_arun_on_dataset - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/tools/test_base.py::test_create_async_tool - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/tools/test_base.py::test_async_exception_handling_bool - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/tools/test_base.py::test_async_exception_handling_str - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/tools/test_base.py::test_async_exception_handling_callable - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/tools/test_base.py::test_async_exception_handling_non_tool_exception - pytest_socket.SocketBlockedError: A test tried to use socket.socket. 
ERROR tests/unit_tests/tools/test_zapier.py::test_apreview - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/tools/test_zapier.py::test_arun - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/tools/test_zapier.py::test_alist - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ERROR tests/unit_tests/tools/shell/test_shell.py::test_shell_tool_arun - pytest_socket.SocketBlockedError: A test tried to use socket.socket. ``` Here is the complete log: [test-log.txt](https://github.com/langchain-ai/langchain/files/12895952/test-log.txt) ### Suggestion: _No response_
Issue: Many unit tests failing when running on Windows
https://api.github.com/repos/langchain-ai/langchain/issues/11775/comments
3
2023-10-13T17:01:59Z
2023-12-21T21:45:43Z
https://github.com/langchain-ai/langchain/issues/11775
1,942,295,198
11,775
[ "hwchase17", "langchain" ]
### System Info Using `langchain==0.0.300` ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [x] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I tried running the [ElasticsearchDatabaseChain](https://python.langchain.com/docs/use_cases/qa_structured/sql#elastic-search) on a simple Elasticsearch database that contains one index as defined below, but I always get a `SerializationError` due to some faulty json. Here are the details: 1. Get a local Elasticsearch instance running ``` docker run -p 9200:9200 -e "discovery.type=single-node" -e "xpack.security.enabled=false" -e "xpack.security.http.ssl.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:8.10.2 ``` 2. Create the index mapping as follows ```bash curl --request PUT \ --url 'http://localhost:9200/products-index?pretty=' \ --header 'content-type: application/json' \ --header 'user-agent: vscode-restclient' \ --data '{"settings": {"index.mapping.coerce": false},"mappings": {"properties": {"name": { "type": "text" },"price": { "type": "double" },"reviews": { "type": "text" },"createdAt": { "type": "date", "format": "epoch_second" }},"dynamic": true}}' ``` 3. Bulk insert some data ```bash curl --request POST \ --url http://localhost:9200/_bulk \ --header 'content-type: application/x-ndjson' \ --header 'user-agent: vscode-restclient' \ --data '{ "index": { "_index": "products-index" } }{ "name": "Bulk Item 666", "price": 666, "reviews": ["Good"], "createdAt": 1697213529 }{ "index": { "_index": "products-index" } }{ "name": "Bulk Item 123", "price": 123, "reviews": ["Good"], "createdAt": 1697213529 }' ``` 4. 
Create and run the chain ```python from elasticsearch import Elasticsearch from langchain import OpenAI from langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain db = Elasticsearch("http://localhost:9200") chain = ElasticsearchDatabaseChain.from_llm( llm=OpenAI(), database=db, verbose=True, ) result = chain.run(prompt) print(result) ``` <details> <summary>Traceback (most recent call last):</summary> > File "/home/sindeavour/Documents/github/sinaapps/datavacuum-old/ai-service/src/routes/v1/elastic.py", line 82, in elastic result = chain.run(prompt) ^^^^^^^^^^^^^^^^^ File "/home/sindeavour/.pyenv/versions/3.11.5/envs/ai-service/lib/python3.11/site-packages/langchain/chains/base.py", line 487, in run return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/sindeavour/.pyenv/versions/3.11.5/envs/ai-service/lib/python3.11/site-packages/langchain/chains/base.py", line 292, in __call__ raise e File "/home/sindeavour/.pyenv/versions/3.11.5/envs/ai-service/lib/python3.11/site-packages/langchain/chains/base.py", line 286, in __call__ self._call(inputs, run_manager=run_manager) File "/home/sindeavour/.pyenv/versions/3.11.5/envs/ai-service/lib/python3.11/site-packages/langchain/chains/elasticsearch_database/base.py", line 127, in _call indices_info = self._get_indices_infos(indices) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/sindeavour/.pyenv/versions/3.11.5/envs/ai-service/lib/python3.11/site-packages/langchain/chains/elasticsearch_database/base.py", line 100, in _get_indices_infos hits = self.database.search( ^^^^^^^^^^^^^^^^^^^^^ File "/home/sindeavour/.pyenv/versions/3.11.5/envs/ai-service/lib/python3.11/site-packages/elasticsearch/_sync/client/utils.py", line 414, in wrapped return api(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^ File "/home/sindeavour/.pyenv/versions/3.11.5/envs/ai-service/lib/python3.11/site-packages/elasticsearch/_sync/client/__init__.py", line 3924, in search return self.perform_request( # type: ignore[return-value] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/sindeavour/.pyenv/versions/3.11.5/envs/ai-service/lib/python3.11/site-packages/elasticsearch/_sync/client/_base.py", line 285, in perform_request meta, resp_body = self.transport.perform_request( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/sindeavour/.pyenv/versions/3.11.5/envs/ai-service/lib/python3.11/site-packages/elastic_transport/_transport.py", line 347, in perform_request data = self.serializers.loads(raw_data, meta.mimetype) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/sindeavour/.pyenv/versions/3.11.5/envs/ai-service/lib/python3.11/site-packages/elastic_transport/_serializer.py", line 196, in loads return self.get_serializer(mimetype).loads(data) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/sindeavour/.pyenv/versions/3.11.5/envs/ai-service/lib/python3.11/site-packages/elastic_transport/_serializer.py", line 95, in loads raise SerializationError( elastic_transport.SerializationError: Unable to deserialize as JSON: b'{"took":1,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":4,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"products-index","_id":"1","_score":1.0,"_source":{\n "name": "Samsung S22 Ultra",\n "price": 975.99,\n "reviews": ["Good"],\n "createdAt": 1697145247 // epoch_second\n}},{"_index":"products-index","_id":"pRG9JYsB0QTitqW3X0qT","_score":1.0,"_source":{\n "name": "iPhone 14 Pro",\n "price": 
1099,\n "reviews": ["Good"],\n "createdAt": 1697145249 // epoch_second\n}},{"_index":"products-index","_id":"phG9JYsB0QTitqW3ZEq5","_score":1.0,"_source":{ "name": "Bulk Item 666", "price": 666, "reviews": ["Good"], "createdAt": 1697145250 }}]}}' </details> ``` elastic_transport.SerializationError: Unable to deserialize as JSON: b'{"took":0,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":4,"relation":"eq"},"max_score":1.0,"hits":[{"_index":"products-index","_id":"1","_score":1.0,"_source":{\n "name": "Samsung S22 Ultra",\n "price": 975.99,\n "reviews": ["Good"],\n "createdAt": 1697145247 // epoch_second\n}},{"_index":"products-index","_id":"pRG9JYsB0QTitqW3X0qT","_score":1.0,"_source":{\n "name": "iPhone 14 Pro",\n "price": 1099,\n "reviews": ["Good"],\n "createdAt": 1697145249 // epoch_second\n}},{"_index":"products-index","_id":"phG9JYsB0QTitqW3ZEq5","_score":1.0,"_source":{ "name": "Bulk Item 666", "price": 666, "reviews": ["Good"], "createdAt": 1697145250 }}]}}' ``` If you look closely, you can see that there is an invalid comment `// epoch_second` in multiple places in the generated json, hence why the parsing fails. ### Expected behavior The chain should run without issues when executing `chain.run(prompt)`
ElasticsearchDatabaseChain SerializationError
https://api.github.com/repos/langchain-ai/langchain/issues/11771/comments
4
2023-10-13T16:35:06Z
2023-11-04T04:21:52Z
https://github.com/langchain-ai/langchain/issues/11771
1,942,262,577
11,771
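The deserialization failure is caused by `_source` payloads that contain `// epoch_second` comments, which are not valid JSON, so the elasticsearch client cannot parse the search response that `ElasticsearchDatabaseChain` requests. Re-indexing those documents as strict JSON should let the chain run; a minimal sketch with one of the products from the report:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
es.index(
    index="products-index",
    id="1",
    document={
        "name": "Samsung S22 Ultra",
        "price": 975.99,
        "reviews": ["Good"],
        "createdAt": 1697145247,  # epoch_second, kept as a Python comment, not in the JSON
    },
)
```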
[ "hwchase17", "langchain" ]
### Feature request Implement a feature using Langchain's image_captions.py and audio_speech_to_text.py to produce .srt files. This system will provide both subtitles and visual scene descriptions, essentially creating closed captioning. ### Motivation Current closed captioning primarily captures spoken content, omitting crucial visual elements of videos. This lack of comprehensive description hinders a full understanding, especially for the hearing-impaired community. Our goal is to elevate video accessibility by integrating both visual and auditory components in closed captioning, ensuring a more inclusive viewing experience. ### Your contribution We are students at the University of Toronto, interested in contributing to Langchain for a school project. We would like to submit a PR that implements this feature sometime in November.
Video imagery to text (Closed Captioning)
https://api.github.com/repos/langchain-ai/langchain/issues/11770/comments
8
2023-10-13T16:29:52Z
2024-03-18T16:05:44Z
https://github.com/langchain-ai/langchain/issues/11770
1,942,255,454
11,770
[ "hwchase17", "langchain" ]
### System Info I encountered a very strange problem when using AsyncCallbackHandler. Most of the time, it works normally, but occasionally strange problems occur. Therefore, I logged the entire call process in each function, and finally learned from the logs that under normal circumstances, the logs are on_ llm_ start, then the API call log for calling openai, and finally on_llm_ end, but occasionally this is not the case, and in exceptional cases, the log order is on_ llm_ start, on_ llm_ end, followed by the API call log of OpenAI, which occasionally prevents me from obtaining the complete token concatenated string because I need to record the complete answer of AI each time. Therefore, I checked the source code and temporarily did not find how the while loop broke out. While not self. queue. empty() or not self. done.is_ Set () This needs to satisfy both that the queue is not empty and that done has been set. Where are the specific issues and I hope to receive answers ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [X] Async ### Reproduction class AsyncIteratorCallbackHandler(AsyncCallbackHandler): """Callback handler that returns an async iterator.""" queue: asyncio.Queue[str] done: asyncio.Event @property def always_verbose(self) -> bool: return True def __init__(self) -> None: self.queue = asyncio.Queue() self.done = asyncio.Event() async def on_llm_start( self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any ) -> None: # If two calls are made in a row, this resets the state self.done.clear() async def on_llm_new_token(self, token: str, **kwargs: Any) -> None: if token is not None and token != "": self.queue.put_nowait(token) async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None: self.done.set() async def on_llm_error( self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any ) -> None: self.done.set() # TODO implement the other methods async def aiter(self) -> AsyncIterator[str]: while not self.queue.empty() or not self.done.is_set(): # Wait for the next token in the queue, # but stop waiting if the done event is set done, other = await asyncio.wait( [ # NOTE: If you add other tasks here, update the code below, # which assumes each set has exactly one task each asyncio.ensure_future(self.queue.get()), asyncio.ensure_future(self.done.wait()), ], return_when=asyncio.FIRST_COMPLETED, ) # Cancel the other task if other: other.pop().cancel() # Extract the value of the first completed task token_or_done = cast(Union[str, Literal[True]], done.pop().result()) # If the extracted value is the boolean True, the done event was set if token_or_done is True: break # Otherwise, the extracted value is a token, which we yield yield token_or_done ### Expected behavior AsyncCallbackHandler can execute strictly in the order of on_llm_start, openai call(on_llm_new_token), and on_llm_end each time.
When using streaming and AsyncCallbackHandler, sometimes it is not possible to obtain the complete concatenated token string.
https://api.github.com/repos/langchain-ai/langchain/issues/11769/comments
3
2023-10-13T14:56:55Z
2024-02-09T16:16:13Z
https://github.com/langchain-ai/langchain/issues/11769
1,942,118,596
11,769
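One hedged mitigation for the ordering problem described above is to keep draining the queue even after the done event fires, so tokens that were queued before `on_llm_end` are not lost. This is a sketch of a subclass, not the library's implementation:

```python
import asyncio
from typing import AsyncIterator
from langchain.callbacks.streaming_aiter import AsyncIteratorCallbackHandler

class DrainingAsyncIteratorCallbackHandler(AsyncIteratorCallbackHandler):
    """Yields every queued token; stops only once done is set AND the queue is empty."""

    async def aiter(self) -> AsyncIterator[str]:
        while not (self.done.is_set() and self.queue.empty()):
            try:
                # Wake up periodically so the loop can notice the done event.
                token = await asyncio.wait_for(self.queue.get(), timeout=0.1)
            except asyncio.TimeoutError:
                continue
            yield token
```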
[ "hwchase17", "langchain" ]
### System Info llm = ChatOpenAI(model_name="gpt-4", temperature=0.5, request_timeout=300) qa_chain = ConversationalRetrievalChain.from_llm( verbose=True, llm=llm, retriever=retriever, return_source_documents=True, combine_docs_chain_kwargs={"prompt": prompt}, rephrase_question=False, ) given code I get error: Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')). ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction llm = ChatOpenAI(model_name="gpt-4", temperature=0.5, request_timeout=300) qa_chain = ConversationalRetrievalChain.from_llm( verbose=True, llm=llm, retriever=retriever, return_source_documents=True, combine_docs_chain_kwargs={"prompt": prompt}, rephrase_question=False, ) ### Expected behavior successfully call OpenAI embedding api
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')).
https://api.github.com/repos/langchain-ai/langchain/issues/11767/comments
2
2023-10-13T11:51:17Z
2024-02-11T16:11:22Z
https://github.com/langchain-ai/langchain/issues/11767
1,941,800,234
11,767
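Since the retry message comes from the embeddings call rather than from the chat model, one mitigation is to give the embeddings client a longer timeout and more retries, and to make sure the vector store behind `retriever` was built with that same instance. Sketch only; the parameter values are arbitrary:

```python
from langchain.embeddings import OpenAIEmbeddings

# A more patient embeddings client; retries use exponential backoff under the hood.
embeddings = OpenAIEmbeddings(request_timeout=120, max_retries=10)
```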
[ "hwchase17", "langchain" ]
### Feature request I would like to propose a new vector database to be included in the Langchain ecosystem to expand Langchain's influence. Below is the official link to our vector database. https://www.transwarp.cn/starwarp ### Motivation I hope that everyone can more conveniently use Hippo + Langchain to build AI Agents. ### Your contribution Integrating the Hippo vector database into the Langchain ecosystem can expand Langchain's influence.
Add a new vector store hippo to langchain
https://api.github.com/repos/langchain-ai/langchain/issues/11763/comments
1
2023-10-13T10:54:02Z
2024-02-06T16:21:51Z
https://github.com/langchain-ai/langchain/issues/11763
1,941,719,788
11,763
[ "hwchase17", "langchain" ]
### Feature request It would be great to make it possible to use multiple retrievers, LLM, and conversation history together. ### Motivation Currently, when using `MultiRetrievalQAChain.fromLLMAndRetrievers`, you can provide an LLM and multiple retrievers for context. If there was a possibility to add a conversation/chat history to this flow, it could help further understand the context, especially in cases where there was a previous conversation between a user and a human agent/bot. ### Your contribution I'm not familiar with Python but could provide help with the TypeScript implementation on top of the base logic.
Use conversation history with MultiRetrievalQAChain
https://api.github.com/repos/langchain-ai/langchain/issues/11760/comments
2
2023-10-13T10:28:58Z
2024-02-20T16:14:35Z
https://github.com/langchain-ai/langchain/issues/11760
1,941,683,584
11,760
[ "hwchase17", "langchain" ]
### System Info Google Colab langchain==0.0.313 transformers==4.34.0 Python 3.10.12 [Dependencies](https://pastebin.com/raw/T1BqzVq1) I am trying to download huggingface model and feed them to the chain. It was working prior to few weeks. I have tried to change model id to other models. ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` !pip install langchain transformers from langchain import HuggingFacePipeline llm = HuggingFacePipeline.from_model_id(model_id = "bigscience/bloom-560m", task = "text-generation") ``` ### Expected behavior Successful execution ## Screenshot ![Screenshot_2023-10-13_09-37-20](https://github.com/langchain-ai/langchain/assets/86911396/51f46569-6bcc-4057-81a6-68eba060af50)
AttributeError: 'BloomForCausalLM' object has no attribute 'is_quantized'
https://api.github.com/repos/langchain-ai/langchain/issues/11759/comments
6
2023-10-13T09:35:49Z
2024-02-11T16:11:27Z
https://github.com/langchain-ai/langchain/issues/11759
1,941,602,588
11,759
[ "hwchase17", "langchain" ]
### Issue with current documentation: From the page https://python.langchain.com/docs/integrations/platforms/aws If you select See a [usage example.](https://python.langchain.com/docs/integrations/llms/amazon_api_gateway_example) the following message is generated: **Page Not Found** We could not find what you were looking for. Please contact the owner of the site that linked you to the original URL and let them know their link is broken. https://python.langchain.com/docs/integrations/llms/amazon_api_gateway_example ### Idea or request for content: _No response_
DOC: Amazon API Gateway usage example link returns Page Not Found
https://api.github.com/repos/langchain-ai/langchain/issues/11758/comments
1
2023-10-13T09:16:52Z
2024-03-13T20:03:18Z
https://github.com/langchain-ai/langchain/issues/11758
1,941,572,219
11,758
[ "hwchase17", "langchain" ]
### Feature request will you update to follow the update of https://github.com/openai/openai-python/discussions/631 ? ### Motivation Many issues in langchain point that when calling async api and using openai LLM(Chains like MapReduceDocumentsChain), it will raise '[Error Response payload is not complete](https://github.com/langchain-ai/langchain/issues/9187#top)' or [TimeoutError](https://github.com/langchain-ai/langchain/issues/7148) After discussion, they all point to [an openai issue](https://github.com/openai/openai-python/issues/574), which will be fix in v1.0.0(now v1.0.0 beta is avaliable) So if you can adapt to their updates, this critical issue can be fixed ### Your contribution none
Adapt to openai-python v1.0.0 Beta
https://api.github.com/repos/langchain-ai/langchain/issues/11755/comments
11
2023-10-13T07:18:30Z
2024-02-15T16:08:42Z
https://github.com/langchain-ai/langchain/issues/11755
1,941,385,504
11,755
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I am trying to build an agent with multiple tools. Based on the query agent may need to execute single or multiple tools. Some of the tools may ask clarification question to the user. In that case agent should add the question to the final answer along with other tools response. But it wont work as expected. Also It executes the same tool again and again with similar kind of input if the tools response not satisfactory to it. My agent has 4 tools. I have added portion of the code and a screenshot for better understanding. I have also tried to modify the prefix, suffix and format_instruction in so many ways but no luck. Also I know return_direct=true will return the tools response to the user directly but this is not my case as this flag won't execute multiple tools. **Expectation:** Agent should return the tools clarification question to the user along with other tools response instead of answering on it's own and execute same tool for similar kind of input. Also Agent should reply proper answer if the Query not related to tools. Eg: If the query is just "Hello" it produce error. ``` FORMAT_INSTRUCTIONS = """Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [{tool_names}] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question""" PREFIX_PROMPT = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:""" SUFFIX_PROMPT = """Begin! Previous conversation history: {chat_history} Question: {input} Thought:{agent_scratchpad}""" class ChatService(RequestAccessorService): def __init__(self, model: SenseChatModel, request: Request): super().__init__(request) self.model = model async def execute_async(self): tools = [ QATool(self.model, self), RecordCreateTool(self.model, self), RecordSearchTool(self.model, self), AppCreateTool(self.model, self) ] llm = ChatOpenAI(openai_api_key=self.model.openai_api_key, temperature=0) prompt = ZeroShotAgent.create_prompt( tools, prefix=PREFIX_PROMPT, suffix=SUFFIX_PROMPT, format_instructions=FORMAT_INSTRUCTIONS, input_variables=["input", "chat_history", "agent_scratchpad"], ) llm_chain = LLMChain(llm=llm, prompt=prompt) agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True) agent_chain = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True ) print(agent.llm_chain.prompt.template) result = await agent_chain.arun(input=self.model.query) return result ``` **Query:** Give me the all product in the inventory and create a product named dell in inventory **Output**: The output of the query. Please see the Action, Action Input and observation to understand the whole scenario. ![Screenshot 2023-10-13 115759](https://github.com/langchain-ai/langchain/assets/94289705/31352a82-0a2f-43ae-8c27-a73ad0481b03) ### Suggestion: _No response_
Question (Urgent): Agent with multiple tools doesn't work as expected and also causes an infinite loop.
https://api.github.com/repos/langchain-ai/langchain/issues/11751/comments
3
2023-10-13T06:26:38Z
2024-03-29T16:06:25Z
https://github.com/langchain-ai/langchain/issues/11751
1,941,321,112
11,751
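For the looping behaviour described above, `AgentExecutor` exposes a few knobs that at least bound it: an iteration cap, an early-stopping mode that asks the LLM to produce a final answer once the cap is hit, and parsing-error handling for inputs like a bare "Hello". A sketch reusing the `agent` and `tools` from the report:

```python
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=5,                    # stop re-running the same tool indefinitely
    early_stopping_method="generate",    # produce a final answer when the cap is hit
    handle_parsing_errors=True,          # avoid crashes on non-tool queries
)
```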
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. _No response_ ### Suggestion: _No response_
How does a langchain agent know which Tool to choose?
https://api.github.com/repos/langchain-ai/langchain/issues/11750/comments
2
2023-10-13T05:25:20Z
2024-02-08T16:19:20Z
https://github.com/langchain-ai/langchain/issues/11750
1,941,264,138
11,750
[ "hwchase17", "langchain" ]
### System Info ```sh Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.8/dist-packages/langchain/__init__.py", line 8, in <module> from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain File "/usr/local/lib/python3.8/dist-packages/langchain/agents/__init__.py", line 2, in <module> from langchain.agents.agent import Agent File "/usr/local/lib/python3.8/dist-packages/langchain/agents/agent.py", line 10, in <module> from langchain.chains.base import Chain File "/usr/local/lib/python3.8/dist-packages/langchain/chains/__init__.py", line 2, in <module> from langchain.chains.conversation.base import ConversationChain File "/usr/local/lib/python3.8/dist-packages/langchain/chains/conversation/base.py", line 7, in <module> from langchain.chains.conversation.memory import ConversationBufferMemory File "/usr/local/lib/python3.8/dist-packages/langchain/chains/conversation/memory.py", line 7, in <module> from langchain.chains.conversation.prompt import SUMMARY_PROMPT File "/usr/local/lib/python3.8/dist-packages/langchain/chains/conversation/prompt.py", line 2, in <module> from langchain.prompts.prompt import PromptTemplate File "/usr/local/lib/python3.8/dist-packages/langchain/prompts/__init__.py", line 2, in <module> from langchain.prompts.base import BasePromptTemplate File "/usr/local/lib/python3.8/dist-packages/langchain/prompts/base.py", line 35, in <module> class BasePromptTemplate(BaseModel, ABC): File "/usr/local/lib/python3.8/dist-packages/langchain/prompts/base.py", line 41, in BasePromptTemplate @root_validator() File "/usr/local/lib/python3.8/dist-packages/pydantic/deprecated/class_validators.py", line 228, in root_validator raise PydanticUserError( pydantic.errors.PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`. ``` ### Who can help? @agola11 ```python import langchain ``` ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Install langchain latest version on Linux 2. import langchain ### Expected behavior Nothing.
@root_validator error
https://api.github.com/repos/langchain-ai/langchain/issues/11749/comments
3
2023-10-13T04:51:51Z
2023-10-18T18:52:26Z
https://github.com/langchain-ai/langchain/issues/11749
1,941,233,288
11,749
[ "hwchase17", "langchain" ]
### Feature request We would like to add Slack toolkit, which for a slack user, he can: 1. fetch information of workplaces (name, user type, time joined, etc.) he is a member of 2. fetch chat history across workplaces he is a member, and further process/analyze such data with LLM 3. fetch interaction (reply, mention, DM, etc) with other users across workplaces. 4. fetch a list of new activities across workplaces since last time login to slack platform, 5. and react to these activities. Also, supports filters (by time, keyword etc) as appropriate for all the above functions ### Motivation Currently lang-chain provides limited support for Slack, specifically, only supports loader of chat history zip file. On the other hand, slack platform analytics interface lacks aggregated information across workplaces; also most analytics info if provides are limited to owner/admin user type. Therefore, we want to implement the above feature: 1. Add-on current langchain toolkit: slack toolkit would be the one of the few, if not the first social media related toolkit. 2. Expose slack interface tools' functions to more user types: Only specific slack user type (workspace owner and/or admin) has access to downloading file option, along with most analytics tools embedded in slack user interface, which calls for a toolkit for other user types to access such information. 3. Better Info Aggregation: We notice that a user can be in multiple workspaces yet currently there lacks a way for user to get/compare information (real-time and logs) across workspaces they are member of. It would be nice for lang chain to provide a toolkit that provides user with such information upon request, so that the user can have all his/her user info across workspace at one place. ### Your contribution We have started working on the feature and intend to submit PR in mid November.
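A purely hypothetical sketch of one tool such a toolkit could expose, built on the `slack_sdk` client; the tool name, channel handling and token are illustrative assumptions, not part of the proposal above.

```python
# Hypothetical sketch; names and scopes are assumptions.
from slack_sdk import WebClient
from langchain.agents import Tool

client = WebClient(token="xoxb-...")  # bot/user token with the required scopes

def fetch_channel_history(channel_id: str) -> str:
    """Return the most recent messages of a Slack channel as plain text."""
    response = client.conversations_history(channel=channel_id, limit=20)
    return "\n".join(m.get("text", "") for m in response["messages"])

slack_history_tool = Tool(
    name="slack_channel_history",
    func=fetch_channel_history,
    description="Fetches recent messages from a Slack channel given its channel id.",
)
```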
Add Slack Toolkit
https://api.github.com/repos/langchain-ai/langchain/issues/11747/comments
4
2023-10-13T01:32:38Z
2024-02-11T16:11:36Z
https://github.com/langchain-ai/langchain/issues/11747
1,941,067,816
11,747
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. _No response_ ### Suggestion: _No response_
How do I control the number of intent classification categories for a langchain agent?
https://api.github.com/repos/langchain-ai/langchain/issues/11746/comments
3
2023-10-13T00:48:54Z
2024-02-08T16:19:30Z
https://github.com/langchain-ai/langchain/issues/11746
1,941,036,248
11,746
[ "hwchase17", "langchain" ]
### System Info Hi, I've implemented my own MultiRouteChain ### Who can help? @UmerHA @hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction This is the implementation of mycallbackhandler, taken from this documentation: https://github.com/langchain-ai/langchain/issues/5433 class MyCallbackHandler(BaseCallbackHandler): def on_llm_new_token(self, token, **kwargs) -> None: # print every token on a new line print(f"#{token}#") This is part of the implementation of my classs: class NestedRouterMultiType(MultiRouteChain): @classmethod def from_prompts_and_retrievers( cls, llm: BaseLanguageModel, prompt_infos: List[Dict[str, str]], retrieval_infos: List[Dict[str, Any]], router_infos: List[Dict[str, Dict[str, List[Dict[str, Any]]]]], default_chain: Optional[Chain] = None, #We can include default retrievers, but for now it is useful to just use GPT as the default chain return_source_documents: bool = False, verbose: bool = False, **kwargs: Any, ) -> NestedRouterMultiType: """Convenience constructor for instantiating from destination prompts and retrievers.""" # Combine prompt and retriever infos for destinations destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos + retrieval_infos + router_infos] destinations_str = "\n".join(destinations) router_template = MULTI_TYPE_ROUTER_TEMPLATE_TEST.format( destinations=destinations_str ) router_prompt = PromptTemplate( template=router_template, input_variables=["input", "context"], output_parser=RouterOutputParserConversational(), ) router_chain = LLMRouterChainConversational.from_llm(llm, router_prompt) destination_chains = {} for p_info in prompt_infos: name = p_info["name"] prompt_template = p_info["prompt_template"] input_variables = cls._extract_variables_from_template(prompt_template) # Include here input variables input and history and test whether history is being passed correctly #Deprecated line of code: #prompt = PromptTemplate(template=prompt_template, input_variables=["question", "chat_history"]) prompt = PromptTemplate(template=prompt_template, input_variables=input_variables) chain = UnifiedLLMChain( prompt=prompt, llm=llm, verbose=verbose, callbacks=[MyCallbackHandler()], ) destination_chains[name] = chain for r_info in retrieval_infos: # refactor this to pass all arguments in PromptFactory name = r_info["name"] prompt = r_info.get("prompt") #try to retrieve a prompt, if empty, use None (not sure yet what this prompt is for) retriever = r_info["retriever"] condense_prompt_template = r_info["condense_prompt_template"] prompt_template = r_info["prompt_template"] # Here's where we create a Router that can route to ConversationalRetrievalChains chain = ConversationalRetrievalChainHistory.from_llm( llm, retriever=retriever, combine_docs_chain_kwargs={"prompt": prompt_template}, condense_question_prompt=condense_prompt_template, return_source_documents = return_source_documents, verbose=verbose, callbacks=[MyCallbackHandler()], ) destination_chains[name] = chain ### Expected behavior I expect to print all tokens on each chain. The output is nonexistent. Either I'm missing something or it is a bug. The streaming parameter is set to True on BaseLanguageModel.
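One thing worth ruling out for the missing tokens: with chat models, `on_llm_new_token` only fires when streaming is enabled on the model itself, so the handler usually needs to be attached to the LLM rather than (or in addition to) the chains. A minimal sketch, reusing `MyCallbackHandler` from the code above.

```python
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(
    temperature=0,
    streaming=True,                   # without this, on_llm_new_token never fires
    callbacks=[MyCallbackHandler()],  # handler from the snippet above, on the model itself
)
```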
BaseCallbackHandler on_llm_new_token not working with custom MultiRouteChain
https://api.github.com/repos/langchain-ai/langchain/issues/11739/comments
4
2023-10-12T22:41:51Z
2024-02-10T16:13:27Z
https://github.com/langchain-ai/langchain/issues/11739
1,940,902,588
11,739
[ "hwchase17", "langchain" ]
### System Info Platform: Windows Server 2022 Python: 3.11.6 Langchain version: 0.0.306 ### Who can help? @agola11 @hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [X] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ### Creating a test tool ```py from langchain.agents import Tool tools = [ Tool( name="test_tool", func=print, description="This is a test tool" ) ] tools ``` ``` [Tool(name='test_tool', description='This is a test tool', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, tags=None, metadata=None, handle_tool_error=False, func=<built-in function print>, coroutine=None)] ``` ### Setting up the pandas_dataframe_agent ```py from langchain.agents import create_pandas_dataframe_agent from langchain.llms import HuggingFacePipeline import pandas as pd llm = HuggingFacePipeline.from_model_id( model_id="google/flan-t5-small", task="text2text-generation", device=0 ) agent = create_pandas_dataframe_agent(llm, pd.DataFrame(), verbose=True, extra_tools=tools) agent.tools ``` ``` [PythonAstREPLTool(name='python_repl_ast', description='A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.', args_schema=<class 'langchain.tools.python.tool.PythonInputs'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, tags=None, metadata=None, handle_tool_error=False, globals={}, locals={'df': Empty DataFrame Columns: [] Index: []}, sanitize_input=True), Tool(name='test_tool', description='This is a test tool', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, tags=None, metadata=None, handle_tool_error=False, func=<built-in function print>, coroutine=None)] ``` ### Executing agent with debugging enabled ```py import langchain langchain.debug = True agent.run('What is 2+2?') ``` ``` [chain/start] [1:chain:AgentExecutor] Entering Chain run with input: { "input": "What is 2+2?" } [chain/start] [1:chain:AgentExecutor > 2:chain:LLMChain] Entering Chain run with input: { "input": "What is 2+2?", "agent_scratchpad": "", "stop": [ "\nObservation:", "\n\tObservation:" ] } [llm/start] [1:chain:AgentExecutor > 2:chain:LLMChain > 3:llm:HuggingFacePipeline] Entering LLM run with input: { "prompts": [ "You are working with a pandas dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:\n\npython_repl_ast: A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [python_repl_ast]\nAction Input: the input to the action\nObservation: the result of the action\n... 
(this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\n\nThis is the result of `print(df.head())`:\n\n\nBegin!\nQuestion: What is 2+2?" ] } ``` ### The prompt from the above log ``` You are working with a pandas dataframe in Python. The name of the dataframe is `df`. You should use the tools below to answer the question posed of you: python_repl_ast: A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [python_repl_ast] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question This is the result of `print(df.head())`: Begin! Question: What is 2+2? ``` ### Expected behavior Where did my custom tool `test_tool` disappear? I expected it to show up after python_repl_ast?
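While the `extra_tools` behaviour is investigated, one workaround sketch (an assumption about usage, not the intended API) is to assemble the agent by hand so the custom tool is guaranteed to appear in the prompt alongside `python_repl_ast`. It reuses `df`, `llm` and `tools` from the snippet above.

```python
from langchain.agents import ZeroShotAgent, AgentExecutor
from langchain.chains import LLMChain
from langchain.tools.python.tool import PythonAstREPLTool

all_tools = [PythonAstREPLTool(locals={"df": df})] + tools  # test_tool included explicitly
prompt = ZeroShotAgent.create_prompt(all_tools)             # every tool is listed in the prompt
llm_chain = LLMChain(llm=llm, prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=[t.name for t in all_tools])
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=all_tools, verbose=True)
```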
`extra_tools` argument in `create_pandas_dataframe_agent()` doesn't seem to be working
https://api.github.com/repos/langchain-ai/langchain/issues/11737/comments
3
2023-10-12T22:22:09Z
2024-02-05T15:14:43Z
https://github.com/langchain-ai/langchain/issues/11737
1,940,885,814
11,737
[ "hwchase17", "langchain" ]
### Feature request There is an option to specify the prompt sent to the LLM and also the CONDENSE_QUESTION prompt, but there is no way to specify the prompt used by SelfQueryRetriever. Being able to do so would help provide additional context to the LLM so that it constructs a proper query. ### Motivation For my use case, the default prompt sent to the LLM to generate a query doesn't work. Although the question I am asking the LLM is good, it is not able to figure out the correct filter it should use. I think that if I were able to modify the prompt, it would solve the issue and the retriever would start returning correct queries. ### Your contribution Not really.
Add option to specify prompt for SelfQueryRetriever
https://api.github.com/repos/langchain-ai/langchain/issues/11735/comments
5
2023-10-12T21:49:59Z
2024-05-07T16:05:58Z
https://github.com/langchain-ai/langchain/issues/11735
1,940,842,588
11,735
[ "hwchase17", "langchain" ]
### System Info Langchain version: 0.0.312 Python: 3.10.11 Platform: Ubuntu ### Who can help? @hwchase17 @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [X] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` from langchain.memory import ConversationTokenBufferMemory max_token_limit = 0 memory = ConversationTokenBufferMemory( llm=llm, # Azure OpenAI GPT3.5 max_token_limit=max_token_limit, return_messages=True, ) context_pair = { "inputs": {"input": "Example input message."}, "outputs": {"outputs": "Example output message."} } memory.save_context(**context_pair) ``` ### Expected behavior The above code produces an `IndexError: pop from empty list` Due to some token budgeting we're doing, sometimes the space for the memory gets reduced to 0 (specifically when a user has a long message, we prioritize tokens for their message and the AI response). So we create memory with `max_token_limit=0`, and **ideally we'd like the memory to be pruned such that the buffer is empty.** However, we instead get the above error. ### Where the bug seems to be The culprit seems to be in `memory.save_context()` , which we call directly in some cases or sometimes it happens internally like: ``` chat_chain = ConversationChain( memory=memory, llm=llm, prompt=prompt_template ) ai_response = chat_chain.predict(input=user_message) # internal call to save_context() ``` Within `save_context`, there's a loop: ``` while curr_buffer_length > self.max_token_limit: pruned_memory.append(buffer.pop(0)) curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer) ``` Oddly, when the buffer is an empty list, `curr_buffer_length` is 3 when it should be 0. So the loop continues and tries to pop from the empty buffer list and we get the above error. ![langchain_bug](https://github.com/langchain-ai/langchain/assets/67058165/7c76afc1-56eb-4497-ae27-3e8ae8d4ddf3)
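Given the observation above that an empty message list still reports 3 tokens, the minimal change implied by the report is to also stop when the buffer is empty. Below is a standalone paraphrase of the quoted library loop with that guard added; it is a sketch, not an official patch.

```python
def prune(buffer, llm, max_token_limit):
    """Drop oldest messages until the buffer fits, never popping from an empty list."""
    pruned = []
    length = llm.get_num_tokens_from_messages(buffer) if buffer else 0
    while length > max_token_limit and buffer:   # 'and buffer' is the added guard
        pruned.append(buffer.pop(0))
        length = llm.get_num_tokens_from_messages(buffer) if buffer else 0
    return pruned
```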
IndexError: pop from empty list - bug in ConversationTokenBufferMemory
https://api.github.com/repos/langchain-ai/langchain/issues/11734/comments
6
2023-10-12T21:11:56Z
2024-02-11T16:11:42Z
https://github.com/langchain-ai/langchain/issues/11734
1,940,800,530
11,734
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I am following the notebook in this github to create two agents with tools: https://github.com/langchain-ai/langchain/blob/master/cookbook/two_agent_debate_tools.ipynb I am also using AzureOpenAI instead of OpenAI directly because of corporate firewalls. I am getting this error: TypeError Traceback (most recent call last) Cell In[25], line 34 24 agent_description = AzureChatOpenAI( 25 openai_api_base=BASE_URL, 26 openai_api_version="2023-05-15", (...) 29 openai_api_type="azure", 30 )(temperature=1.0)(agent_specifier_prompt).content 31 return agent_description ---> 34 agent_descriptions = {name: generate_agent_description(name) for name in names} Cell In[25], line 34, in (.0) 24 agent_description = AzureChatOpenAI( 25 openai_api_base=BASE_URL, 26 openai_api_version="2023-05-15", (...) 29 openai_api_type="azure", 30 )(temperature=1.0)(agent_specifier_prompt).content 31 return agent_description ---> 34 agent_descriptions = {name: generate_agent_description(name) for name in names} Cell In[25], line 24, in generate_agent_description(name) 13 def generate_agent_description(name): 14 agent_specifier_prompt = [ ... 29 openai_api_type="azure", 30 )(temperature=1.0)(agent_specifier_prompt).content 31 return agent_description TypeError: BaseChatModel.__call__() missing 1 required positional argument: 'messages' ### Suggestion: _No response_
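The traceback suggests the chat model is being called with `temperature=1.0` as a positional argument, which then stands in for the `messages` argument. A sketch of the usual pattern: temperature goes to the constructor and the messages go to the call. `DEPLOYMENT_NAME`, `API_KEY` and the simplified prompt below are placeholders, not values from the notebook.

```python
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

def generate_agent_description(name):
    # Simplified stand-in for the cookbook's agent_specifier_prompt messages.
    agent_specifier_prompt = [HumanMessage(content=f"Describe the debater named {name}.")]
    llm = AzureChatOpenAI(
        openai_api_base=BASE_URL,
        openai_api_version="2023-05-15",
        deployment_name=DEPLOYMENT_NAME,  # placeholder: your Azure deployment name
        openai_api_key=API_KEY,           # placeholder
        openai_api_type="azure",
        temperature=1.0,                  # constructor argument, not a second call
    )
    return llm(agent_specifier_prompt).content  # the call itself receives the messages
```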
TypeError: BaseChatModel.__call__() missing 1 required positional argument: 'messages'
https://api.github.com/repos/langchain-ai/langchain/issues/11733/comments
3
2023-10-12T20:45:25Z
2024-07-14T15:27:49Z
https://github.com/langchain-ai/langchain/issues/11733
1,940,767,532
11,733
[ "hwchase17", "langchain" ]
### Feature request Hello everyone! When will we have an async SQLDatabaseChain? I believe it's crucial for all chains to be async so that we can have a serverless service serving multiple clients. Thank you! ### Motivation I believe it's crucial for all chains to be async so that we can have a serverless service serving multiple clients. ### Your contribution I don't know how to contribute. Maybe I could test your code.
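Until a native async API exists, a common interim sketch is to push the blocking chain onto a worker thread so the event loop can keep serving other clients. This assumes Python 3.9+ and an existing `db_chain` (an SQLDatabaseChain instance).

```python
import asyncio

async def answer(question: str) -> str:
    # Runs the blocking chain in a worker thread so the event loop stays free.
    return await asyncio.to_thread(db_chain.run, question)
```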
async SQLDatabaseChain
https://api.github.com/repos/langchain-ai/langchain/issues/11732/comments
5
2023-10-12T20:25:16Z
2024-03-28T16:06:32Z
https://github.com/langchain-ai/langchain/issues/11732
1,940,740,863
11,732
[ "hwchase17", "langchain" ]
### Feature request The proposal is to enhance LangChain by integrating support for querying The Movie Database (TMDb) through a Tool that retrieves detailed information about movies. https://developer.themoviedb.org/docs/getting-started https://github.com/celiao/tmdbsimple/ ### Motivation Allowing users to query The Movie Database (TMDb) would let them ask for information about movies, and this tool would give the LLMs better and more reliable information to work with. ### Your contribution We intend to submit a pull request for this issue at some point in November for a school project.
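A purely hypothetical sketch of what such a tool could look like, built on the tmdbsimple client linked above; the tool name, description and key handling are illustrative, not part of the proposal.

```python
import tmdbsimple as tmdb
from langchain.agents import Tool

tmdb.API_KEY = "YOUR_TMDB_API_KEY"  # placeholder

def search_movie(title: str) -> str:
    search = tmdb.Search()
    search.movie(query=title)
    if not search.results:
        return "No results found."
    top = search.results[0]
    return f"{top['title']} ({top.get('release_date', 'n/a')}): {top.get('overview', '')}"

movie_info_tool = Tool(
    name="tmdb_movie_search",
    func=search_movie,
    description="Looks up a movie on The Movie Database and returns its release date and overview.",
)
```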
Add support for themoviedb querying to get movie info
https://api.github.com/repos/langchain-ai/langchain/issues/11728/comments
3
2023-10-12T19:15:07Z
2023-10-16T04:21:47Z
https://github.com/langchain-ai/langchain/issues/11728
1,940,635,026
11,728
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. ![image](https://github.com/langchain-ai/langchain/assets/142630126/63a7c9c3-ac95-4145-9342-838e8a9b0ba1) The retriever object in the VectorStoreRetrieverMemory class is a VectorStoreRetriever, not an AzureSearchVectorStoreRetriever. VectorStoreRetriever and AzureSearchVectorStoreRetriever are two different classes and do not support the same methods. I want to use a vector-store-backed memory retriever that can filter on user_id metadata in order to maintain multiple chat histories, one per user. I need help with two things: 1. Support for an AzureSearchVectorStoreRetriever object in VectorStoreRetrieverMemory. 2. How to initialize the memory object so that it accepts an Azure vector store (with id, user_id, conversation and conversation_vector as fields in my case). I want to save_context with respect to user_id; the example given in the docs (https://python.langchain.com/docs/modules/memory/types/vectorstore_retriever_memory), memory.save_context({"input": "My favorite food is pizza"}, {"output": "that's good to know"}), doesn't say how, and the same applies to memory.load_memory_variables({"prompt": "what sport should i watch?"})["history"]. ### Suggestion: As most organizations' use cases run at scale, examples showing how to maintain memory for different users (with user_id as metadata) would help. Adding support for the AzureSearchVectorStoreRetriever object to the VectorStoreRetrieverMemory class would also help. If I am wrong about any of the above, please correct me and point me to resources that solve this.
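For the "doesn't say how" part, a generic (non-Azure-specific) sketch of how VectorStoreRetrieverMemory is normally wired up; keeping one index or collection per user_id (or a metadata filter, where the retriever supports one) is one way to separate histories. The `vectorstore` below is an assumption, not the Azure index from the issue.

```python
from langchain.memory import VectorStoreRetrieverMemory

# Assumption: one vectorstore (or filtered retriever) per user_id.
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
memory = VectorStoreRetrieverMemory(memory_key="history", retriever=retriever)

memory.save_context({"input": "My favorite food is pizza"}, {"output": "that's good to know"})
relevant = memory.load_memory_variables({"prompt": "what should I eat?"})["history"]
```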
Issue: <VectorStoreRetrieverMemory class doesn't support AzureSearchVectorStoreRetriever retriever>
https://api.github.com/repos/langchain-ai/langchain/issues/11720/comments
2
2023-10-12T16:49:35Z
2024-02-06T16:22:36Z
https://github.com/langchain-ai/langchain/issues/11720
1,940,398,645
11,720
[ "hwchase17", "langchain" ]
### System Info LangChain 0.0.240 but code on latest master has same issue ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [X] Callbacks/Tracing - [ ] Async ### Reproduction Use a fine-tuned OpenAI model, named like "ft:gpt-2.5-turbo-0613:orgname@12abCDEF" `with get_openai_callback() as openai_callback:` `openai_callback.total_cost` The cost is zero. ### Expected behavior The cost is returned. The issue is in https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/callbacks/openai_info.py#L67 which expects fine-tuned models' names' to start with "ft-" (dash, should be colon)
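A paraphrased sketch of the mismatch the report points at: fine-tuned model ids start with `ft:` (colon), so a prefix check for `ft-` never matches and the price lookup falls through to zero.

```python
# Fine-tuned ids look like "ft:<base-model>:<org>::<job-id>".
model_name = "ft:gpt-3.5-turbo-0613:orgname::abc123"

if model_name.startswith("ft:"):
    # Keep the base model so the per-token price table can be consulted.
    base_model = model_name.split(":")[1]
    print(base_model)  # -> "gpt-3.5-turbo-0613"
```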
Cost calculation is always zero with fine-tuned OpenAI models
https://api.github.com/repos/langchain-ai/langchain/issues/11715/comments
5
2023-10-12T14:46:31Z
2024-02-04T19:41:20Z
https://github.com/langchain-ai/langchain/issues/11715
1,940,160,863
11,715
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. Having issue with cypherPrompt in GraphCypherQAChain . I am trying to use GraphCypherQAChain and Bedrock to generate an answer from Neo4j database. I am using local Neo4j Movies database and the code example from: https://js.langchain.com/docs/modules/chains/additional/cypher_chain Here is the code: ``` const { PromptTemplate } = require("langchain/prompts"); const { GraphCypherQAChain } = require("langchain/chains/graph_qa/cypher"); const { Neo4jGraph } = require("langchain/graphs/neo4j_graph"); const { Bedrock } = require("langchain/llms/bedrock"); const url = "bolt://localhost:7687"; const username = "neo4j"; const password = "***"; const aiModelId = "ai21.j2-mid-v1"; module.exports.chainRun = async (event) => { const graph = await Neo4jGraph.initialize({ url, username, password }); const model = new Bedrock({ model: aiModelId, region: "***", maxTokens: 200, temperature: 0.5, stopSequences: ["\n\nHuman:"], credentials: { accessKeyId: "***", secretAccessKey: "***", }, }); await graph.query( "CREATE (a:Actor {name:'Bruce Willis'})" + "-[:ACTED_IN]->(:Movie {title: 'Pulp Fiction'})" ); const cypherTemplate = `Task:Generate Cypher statement to query a graph database. Instructions: Use only the provided relationship types and properties in the schema. Do not use any other relationship types or properties that are not provided. Schema: {schema} Note: Do not include any explanations or apologies in your responses. Do not respond to any questions that might ask anything else than for you to construct a Cypher statement. Do not include any text except the generated Cypher statement. Follow these Cypher example when Generating Cypher statements: # How many actors played in Top Gun? MATCH (m:Movie {title:"Top Gun"})<-[:ACTED_IN]-() RETURN count(*) AS result The question is: {question}`; const cypherPrompt = new PromptTemplate({ template: cypherTemplate, inputVariables: ["schema", "question"], }); const chain = GraphCypherQAChain.fromLLM({ llm: model, graph, cypherPrompt, }); const res = await chain.run("Who played in Pulp Fiction?"); return { statusCode: 200, body: JSON.stringify(res), }; }; ``` The error that I am getting: <img width="926" alt="Screenshot 2023-10-12 at 2 08 52 PM" src="https://github.com/langchain-ai/langchain/assets/42175061/3f40f1e1-09fa-4ebf-ae29-35f6dc28e0b3"> If I run that query directly it returns result, also if I remove that part from cypher prompt there are no issues. Can you help me to find what is the issue? ### Suggestion: _No response_
Issue: Invalid prompt schema: Missing value for input [GraphCypherQAChain][cypherPrompt]
https://api.github.com/repos/langchain-ai/langchain/issues/11706/comments
2
2023-10-12T12:12:12Z
2023-10-16T11:55:48Z
https://github.com/langchain-ai/langchain/issues/11706
1,939,853,669
11,706
[ "hwchase17", "langchain" ]
### Feature request I propose a feature that allows the **API key** to be set as an optional **parameter** when changing the `openai_api_base` in LangChain's implementation of OpenAI. This feature would make it possible to set the API key to an empty string or any arbitrary string, similar to how it's done in **LiteLLM**. Relevant resources: - LangChain OpenAI: [GitHub Repo](https://github.com/langchain-ai/langchain) - LiteLLM Proxy: [Documentation](https://docs.litellm.ai/docs/proxy_server#create-a-proxy-for-multiple-llms) ### Motivation The motivation for this proposal is to facilitate testing and development scenarios where the `openai_api_base` is pointed to a local or mock server that doesn't require an API key. Currently, when the API key is set to empty or an arbitrary string, LangChain OpenAI throws an `AuthenticationError`. This behavior is frustrating when trying to test or develop without a valid OpenAI API key. ### Your contribution I am willing to help implement this feature and submit a PR, but I would appreciate guidance from the maintainers or community to ensure the changes are made correctly and in line with the project's standards and practices.
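Until the key becomes truly optional, a common interim approach is to pass a non-empty placeholder string when `openai_api_base` points at a local proxy that ignores authentication; the URL below is an assumption.

```python
from langchain.chat_models import ChatOpenAI

# Assumption: a LiteLLM/local proxy on localhost that does not check the key.
llm = ChatOpenAI(
    openai_api_base="http://localhost:8000/v1",
    openai_api_key="sk-placeholder",  # any non-empty string satisfies the client-side check
    temperature=0,
)
```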
Allow API_KEY to be optional when API_BASE is selected for OpenAI.
https://api.github.com/repos/langchain-ai/langchain/issues/11700/comments
4
2023-10-12T10:47:50Z
2024-02-10T16:13:37Z
https://github.com/langchain-ai/langchain/issues/11700
1,939,721,293
11,700
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. ![run](https://github.com/langchain-ai/langchain/assets/129041904/931e32c0-ef57-470f-9834-81d2da6f2be4) How can I capture these printed logs? I don't know how to turn off this printed output. ### Suggestion: _No response_
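A sketch covering both parts of the screenshot: silencing the verbose/debug run logs and raising the iteration limit behind the "Agent stopped due to iteration limit or time limit" message. `tools`, `llm` and the agent type are placeholders for whatever the run above uses.

```python
import langchain
from langchain.agents import initialize_agent, AgentType

langchain.debug = False    # turn off the detailed run logs
langchain.verbose = False

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,  # assumption: your agent type
    verbose=False,                                # per-agent logging off
    max_iterations=10,                            # raise if the agent legitimately needs more steps
)
```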
Agent stopped due to iteration limit or time limit.
https://api.github.com/repos/langchain-ai/langchain/issues/11699/comments
5
2023-10-12T09:38:36Z
2024-02-10T16:13:42Z
https://github.com/langchain-ai/langchain/issues/11699
1,939,606,027
11,699
[ "hwchase17", "langchain" ]
### System Info Current master, all python versions ### Who can help? @hwchase17 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The LocalAI embeddings class required the openai api key to be set even though this might not be required by the locally hosted server: https://github.com/langchain-ai/langchain/blob/44da27c07b2bd0ccac355c8236a3ab1dd26870eb/libs/langchain/langchain/embeddings/localai.py#L200 Initializing the class like this fails: ``` from langchain.embeddings.localai import LocalAIEmbeddings LocalAIEmbeddings(openai_api_key=None) # Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error) ``` ### Expected behavior it should work fine, the key is not required for the class to function.
LocalAI embeddings require OpenAI key
https://api.github.com/repos/langchain-ai/langchain/issues/11698/comments
1
2023-10-12T08:58:04Z
2024-02-06T16:22:51Z
https://github.com/langchain-ai/langchain/issues/11698
1,939,531,951
11,698
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. Dear Langchain Developers, Thank you very much for developing Langchain. I am writing to seek clarification on a few aspects of Langchain that I find intriguing. Specifically, I am interested in understanding more about the role and functionality of 'metadata' and 'page_content' within the system. Could you kindly explain what the purpose of these two components is within Langchain? I am curious to know how they contribute to the overall functionality of the system. Additionally, I am interested in understanding how these elements are stored in vectors. Furthermore, when a user question is used to match the most suitable text chunk, how does the system process this? After the user question is converted into a vector, does the system search the 'metadata' or the 'page_content'? Your insights would be greatly appreciated. I am sure that understanding these aspects will enhance my appreciation of Langchain even more. Thank you in advance for your time and assistance. Best regards, ### Suggestion: _No response_
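In short (simplified): `page_content` is the text that gets embedded and searched, while `metadata` is structured information carried alongside each chunk, typically used for filtering and source attribution rather than being embedded. A tiny illustration:

```python
from langchain.docstore.document import Document

doc = Document(
    page_content="LangChain is a framework for building LLM applications.",  # embedded and searched
    metadata={"source": "intro.pdf", "page": 1},                              # carried along for filters/citations
)
```

So when a user question is converted into a vector, the similarity search normally runs against the embeddings of `page_content`; metadata only participates if you filter on it or fold it into the text yourself.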
Understanding the Role and Functionality of Metadata and Page_Content in Langchain
https://api.github.com/repos/langchain-ai/langchain/issues/11697/comments
2
2023-10-12T08:31:39Z
2024-02-09T16:16:58Z
https://github.com/langchain-ai/langchain/issues/11697
1,939,487,505
11,697
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. How to get the same values of the Float numbers generated as embeddings - 1/ Am comparing values generated from OpenAI - from langchain.embeddings.openai import OpenAIEmbeddings embedding = OpenAIEmbeddings() 2/ comparing with the values generated from -- gpt4all from langchain.embeddings import GPT4AllEmbeddings gpt4all_embd = GPT4AllEmbeddings() ```python def get_embeddings_gpt4all(): """ """ from langchain.embeddings import GPT4AllEmbeddings gpt4all_embd = GPT4AllEmbeddings() #text = "This is a test document." text = "i like dogs" query_result = gpt4all_embd.embed_query(text) print("--query_result-",type(query_result)) print("--query_result-",len(query_result)) print("--query_result-",query_result[0:10]) print("--query_result-",query_result[11]) print("--query_result-",query_result[15]) doc_result = gpt4all_embd.embed_documents([text]) print("--doc_result-",type(doc_result)) # List of LIST -- print("--doc_result-",len(doc_result)) # 1 print("--doc_result-",type(doc_result[0])) # List of Floats -- print("--doc_result-",len(doc_result[0])) # 384 ``` - terminal prints as below -- ``` Found model file at C:\\\\Users\\\\rohit.dhankar\\\\.cache\\\\gpt4all\\ggml-all-MiniLM-L6-v2-f16.bin --query_result- <class 'list'> --query_result- 384 --query_result- [-0.04856949672102928, -0.04622294753789902, 0.060556311160326004, 0.043546244502067566, -0.05463598296046257, -0.004457559902220964, 0.05100328475236893, -0.02644169144332409, 0.08410052955150604, 0.05480821430683136] --query_result- -0.07515878230333328 --query_result- 0.05370383337140083 --doc_result- <class 'list'> --doc_result- 1 --doc_result- <class 'list'> --doc_result- 384 ``` ### Suggestion: _No response_
Issue: GPT4AllEmbeddings don't compare with OpenAIEmbeddings
https://api.github.com/repos/langchain-ai/langchain/issues/11696/comments
2
2023-10-12T07:12:54Z
2024-02-09T16:17:03Z
https://github.com/langchain-ai/langchain/issues/11696
1,939,352,699
11,696
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. _No response_ ### Suggestion: _No response_
agentζ˜―ε¦ζ”―ζŒllama2η³»εˆ—εŠε…Άθ‘η”Ÿζ¨‘εž‹
https://api.github.com/repos/langchain-ai/langchain/issues/11695/comments
2
2023-10-12T07:07:11Z
2023-10-12T07:44:50Z
https://github.com/langchain-ai/langchain/issues/11695
1,939,343,638
11,695
[ "hwchase17", "langchain" ]
### System Info APIError: HTTP code 200 from API ("{\"rate_limit_usage\": {\) File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 541, in _run_script exec(code, module.__dict__) File "[...]/gitlab/run_finQA_webUI.py", line 158, in <module> generate_answer(clicked) File "[...]/gitlab/run_finQA_webUI.py", line 123, in generate_answer response = rq_agent.finQAinConversation(user_query, callbacks=[stream_handler]) File "[...]/gitlab/report_query/report_query.py", line 187, in finQAinConversation response = self._convR_qa(query, callbacks=callbacks) File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chains/base.py", line 306, in __call__ raise e File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chains/base.py", line 300, in __call__ self._call(inputs, run_manager=run_manager) File "[...]/gitlab/report_query/custom_intention_chain.py", line 131, in _call answer = self.combine_docs_chain.run( File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chains/base.py", line 506, in run return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[ File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chains/base.py", line 306, in __call__ raise e File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chains/base.py", line 300, in __call__ self._call(inputs, run_manager=run_manager) File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py", line 119, in _call output, extra_return_dict = self.combine_docs( File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 171, in combine_docs return self.llm_chain.predict(callbacks=callbacks, **inputs), {} File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chains/llm.py", line 257, in predict return self(kwargs, callbacks=callbacks)[self.output_key] File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chains/base.py", line 306, in __call__ raise e File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chains/base.py", line 300, in __call__ self._call(inputs, run_manager=run_manager) File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chains/llm.py", line 93, in _call response = self.generate([inputs], run_manager=run_manager) File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chains/llm.py", line 103, in generate return self.llm.generate_prompt( File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chat_models/base.py", line 469, in generate_prompt return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs) File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chat_models/base.py", line 359, in generate raise e File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chat_models/base.py", line 349, in generate self._generate_with_cache( File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chat_models/base.py", line 501, in _generate_with_cache return self._generate( File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 357, in _generate return _generate_from_stream(stream_iter) File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chat_models/base.py", line 60, in _generate_from_stream for chunk in stream: File 
"[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 326, in _stream for chunk in self.completion_with_retry( File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 166, in <genexpr> return ( File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/openai/api_requestor.py", line 693, in <genexpr> self._interpret_response_line( File "[...]/anaconda3/envs/sjj_env/lib/python3.10/site-packages/openai/api_requestor.py", line 755, in _interpret_response_line raise error.APIError( Seems like this error just appears starting from oct 12. The answer can be directly generated with an HTTP code 200 in Openai response, but an error appears at the end of generation. @hwchase17 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction answer = self.combine_docs_chain.run( input_documents=docs, callbacks=_run_manager.get_child(), **new_inputs ) ### Expected behavior No error happens.
APIError: HTTP code 200 from API ("{\"rate_limit_usage\": {\)
https://api.github.com/repos/langchain-ai/langchain/issues/11694/comments
7
2023-10-12T04:24:29Z
2023-10-12T07:43:45Z
https://github.com/langchain-ai/langchain/issues/11694
1,939,149,824
11,694
[ "hwchase17", "langchain" ]
### Feature request Would it make sense to use an llm to split a large document into smaller documents that have complete context? ### Motivation My best guess is that most of the existing splitters may drop context ? ### Your contribution Could try
text split by llm?
https://api.github.com/repos/langchain-ai/langchain/issues/11691/comments
5
2023-10-12T01:29:23Z
2024-02-26T16:07:38Z
https://github.com/langchain-ai/langchain/issues/11691
1,939,011,196
11,691
[ "hwchase17", "langchain" ]
### Feature request At the moment the agent spends additional agent <> LLM rounds formatting the JSON data I requested into sentence form. Is there a way to avoid this? ### Motivation If the user asks for JSON, the agent should return JSON and not a nicely formatted sentence (which costs more tokens, and therefore money). ### Your contribution Testing, and deploying to a production workload.
A way to use csv_agent to simply return JSON
https://api.github.com/repos/langchain-ai/langchain/issues/11683/comments
1
2023-10-11T21:31:49Z
2024-01-30T00:26:49Z
https://github.com/langchain-ai/langchain/issues/11683
1,938,775,462
11,683
[ "hwchase17", "langchain" ]
### System Info Version `0.0.312` ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction # Description The parameter `include_df_in_prompt` on the method `create_pandas_dataframe_agent` is a boolean value which should be either `True` or `False`. If you specify your own suffix in the create_pandas_dataframe_agent method, it results in an error saying that include_df_in_prompt should not be specified. ## For example ```python agent = create_pandas_dataframe_agent(ChatOpenAI(temperature=0), df, AgentType.OPENAI_FUNCTIONS, suffix=sufix) ``` Results in ``` ValueError: If suffix is specified, include_df_in_prompt should not be. ``` This is also the correct behavior. However, if you set `include_df_in_prompt` to `False`, the error still occurs. ## For example ```python agent = create_pandas_dataframe_agent(ChatOpenAI(temperature=0), df, AgentType.OPENAI_FUNCTIONS, suffix=sufix, include_df_in_prompt=False) ``` But everything works fine if I use `None` instead of `False`. ## For example ```python agent = create_pandas_dataframe_agent(ChatOpenAI(temperature=0), df, AgentType.OPENAI_FUNCTIONS, suffix=sufix, include_df_in_prompt=None) ``` ### Expected behavior It should also work when I use `False` instead of `None`
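A paraphrased sketch of the two ways such a guard could be written; the report suggests the library effectively behaves like the first version, which is why `False` still raises while `None` does not.

```python
def check_strict(suffix, include_df_in_prompt):
    # Treats False like a provided value: raises even for include_df_in_prompt=False.
    if suffix is not None and include_df_in_prompt is not None:
        raise ValueError("If suffix is specified, include_df_in_prompt should not be.")

def check_boolean(suffix, include_df_in_prompt):
    # Expected behaviour: only a truthy value conflicts with a custom suffix.
    if suffix is not None and include_df_in_prompt:
        raise ValueError("If suffix is specified, include_df_in_prompt should not be.")
```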
include_df_in_prompt should be compared as boolean value and not None
https://api.github.com/repos/langchain-ai/langchain/issues/11677/comments
2
2023-10-11T20:47:09Z
2024-06-28T16:30:12Z
https://github.com/langchain-ai/langchain/issues/11677
1,938,701,583
11,677
[ "hwchase17", "langchain" ]
### Feature request Similar to the discussion in this thread https://github.com/langchain-ai/langchain/issues/7239, I would like to have a way to show inline citations in QA tasks. Specially, I haven't had a whole lot of success integrating `create_citation_fuzzy_match_chain` with `ConversationalRetrievalChain` ### Motivation The point of this change is to enable QA based chains (e.g. ConversationalRetrievalChain) to easily do inline citations, and surface the citations as part of metadata fields ### Your contribution TBD
Integrating `create_citation_fuzzy_match_chain` with `ConversationalRetrievalChain`
https://api.github.com/repos/langchain-ai/langchain/issues/11673/comments
18
2023-10-11T19:24:42Z
2023-12-15T10:43:03Z
https://github.com/langchain-ai/langchain/issues/11673
1,938,566,368
11,673
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. Similar to the discussion in this thread https://github.com/langchain-ai/langchain/issues/7239, I would like to have a way to show inline citations in QA tasks. Specifically, I haven't had a whole lot of success integrating `create_citation_fuzzy_match_chain` with `ConversationalRetrievalChain`. ### Suggestion: _No response_
DELETE
https://api.github.com/repos/langchain-ai/langchain/issues/11671/comments
0
2023-10-11T19:16:40Z
2023-10-24T01:07:38Z
https://github.com/langchain-ai/langchain/issues/11671
1,938,552,713
11,671
[ "hwchase17", "langchain" ]
### Feature request Currently, the Bedrock `chat_model` does not support async streaming, which is odd, as the previous [PR](https://github.com/langchain-ai/langchain/pull/10393/files#diff-9874347f7fa335df661ff4089b0922b3214e08a92e9879610424522f806358f7R62) included it; it seems the async method was not added properly. ### Motivation Without async streaming you pretty much cannot build chatbot infrastructure. ### Your contribution Not sure; possibly just copy-pasting what was already [here](https://github.com/langchain-ai/langchain/pull/10393/files#diff-9874347f7fa335df661ff4089b0922b3214e08a92e9879610424522f806358f7R62)?
Add async streaming for Bedrock
https://api.github.com/repos/langchain-ai/langchain/issues/11668/comments
6
2023-10-11T16:06:06Z
2024-06-18T08:41:48Z
https://github.com/langchain-ai/langchain/issues/11668
1,938,174,193
11,668
[ "hwchase17", "langchain" ]
### System Info on Ubuntu with Python 3.11.5, Langchain 0.0.307 ### Who can help? @hwchase17 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction What my code basically does is returns an answer to a given question using conversational retrieval chain with a qdrant vectorstore that stores publications. I want the chain to use only the documents published in specified years. 1. Create vectorstore qdrant_client = QdrantClient(QDRANT_HOST, port=int(QDRANT_PORT)) retriever = Qdrant(client=qdrant_client, collection_name=QDRANT_COLLECTION, embeddings=get_encoder(ENCODER)) 2. Create chain: rag_chain = ConversationalRetrievalChain.from_llm(llm=llm, retriever = retriever.as_retriever(search_kwargs={"filter": {"year": ["2009","2010"]}), return_source_documents=return_source_documents, verbose=verbose) 3. When I run the chain it retrieves no documents even though it should. ### Expected behavior The chain should return documents related to a given question. When I checked the implementation of qdrant.py under langchain vectorstores I saw that _qdrant_filter_from_dict function makes a "must" match; however in my case I need a "should" match since no document can have more than one publication date.
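If the installed version accepts a prebuilt Qdrant filter object in `search_kwargs`, one sketch of expressing the "either year" condition with `should` conditions instead of the dict form (which, as described above, is translated into `must` matches). The `metadata.year` key assumes LangChain's default Qdrant payload layout.

```python
from qdrant_client.http import models as rest

year_filter = rest.Filter(
    should=[
        rest.FieldCondition(key="metadata.year", match=rest.MatchValue(value="2009")),
        rest.FieldCondition(key="metadata.year", match=rest.MatchValue(value="2010")),
    ]
)

rag_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever.as_retriever(search_kwargs={"filter": year_filter}),
)
```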
ConversationalRetrievalChain Vectorstore cannot apply filter to a key with given multiple values
https://api.github.com/repos/langchain-ai/langchain/issues/11666/comments
5
2023-10-11T15:29:24Z
2024-02-12T16:11:33Z
https://github.com/langchain-ai/langchain/issues/11666
1,938,100,043
11,666
[ "hwchase17", "langchain" ]
### System Info langchain==0.0.306 chromadb==0.4.13 Python 3.11.5 ### Who can help? @hwchase17 @atroyn ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python from langchain.vectorstores import Chroma from langchain.docstore.document import Document from langchain.embeddings import GPT4AllEmbeddings docs = [ Document(page_content="This is a demo document"), Document(page_content="This is another demo document"), ] other_docs = [ Document(page_content="Unrelated document"), Document(page_content="Another unrelated document"), ] # Create two (independent) vector stores vectorstore = Chroma.from_documents(documents=docs, embedding=GPT4AllEmbeddings()) other_vectorstore = Chroma.from_documents(documents=other_docs, embedding=GPT4AllEmbeddings()) # Expect two stores to be independent assert len(vectorstore.similarity_search("document", k=999)) == len(docs) # OK assert len(other_vectorstore.similarity_search("document", k=999)) == len(other_docs) # FAILS # Documents from the first vector store are actually retrieved in the `other_vectorstore` print(other_vectorstore.similarity_search("document")) # Actually the two vector stores contain the union of `docs` and `other_docs` assert len(vectorstore.get()["documents"]) == 2 # FAILS assert len(other_vectorstore.get()["documents"]) == 2 # FAILS ``` ### Expected behavior When I initialize two separate Chroma in-memory vectorstores with `Chroma.from_documents` using two distinct groups of documents, the vector store that was initialized last will contain the union of the two groups of documents. In summary, running this code: ```python vectorstore = Chroma.from_documents(documents=documents, …) other_vectorstore = Chroma.from_documents(documents=other_documents, …) ``` the `other_vectorstore` will contain both `documents` and `other_documents`. --- It seems that the two vectorstores are sharing the same chroma client & collection. This could be a side effect of commit https://github.com/langchain-ai/langchain/commit/822cdb161b6631d8255e341f3321d8f0b9e28e8a which made the `chromadb.Client` shared among all instances of the `Chroma` class. One can avoid the bug by explicitly passing two different `collection_name` values when initializing the vector store, but I thought this would have been the default behaviour (or at least I expected a warning telling that the new vectorstore was reusing an existing collection). Moreover, it is not clear to me why the first vector store will return the documents it was initialized with if I use `vectorstore.similarity_search(…)`, but shows the union of all documents if I use `vectorstore.get()`. If you confirm what is the intended behaviour I am available to to work a PR to fix this.
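Spelling out the workaround already mentioned in the report: giving each store its own `collection_name` keeps them independent, using the same `docs` and `other_docs` as above.

```python
vectorstore = Chroma.from_documents(
    documents=docs, embedding=GPT4AllEmbeddings(), collection_name="demo_docs"
)
other_vectorstore = Chroma.from_documents(
    documents=other_docs, embedding=GPT4AllEmbeddings(), collection_name="other_docs"
)
```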
`Chroma.from_documents` adds docs to same collection creating duplicates
https://api.github.com/repos/langchain-ai/langchain/issues/11665/comments
5
2023-10-11T14:05:47Z
2023-10-11T20:53:10Z
https://github.com/langchain-ai/langchain/issues/11665
1,937,873,604
11,665
[ "hwchase17", "langchain" ]
### System Info 0.0.3 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.faiss.FAISS.html ### Expected behavior get concrete docs's index or content
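One way to read stored documents back out of a LangChain FAISS store (a sketch for the default in-memory docstore) is to walk the index-to-id mapping and look each id up in the docstore.

```python
db = FAISS.from_documents(docs, embeddings)

for i, doc_id in db.index_to_docstore_id.items():
    doc = db.docstore.search(doc_id)      # returns the stored Document
    print(i, doc_id, doc.page_content[:80])
```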
db = FAISS.from_documents(docs, embeddings); how do I get a document's index or content from db?
https://api.github.com/repos/langchain-ai/langchain/issues/11661/comments
6
2023-10-11T13:20:45Z
2023-10-11T14:46:52Z
https://github.com/langchain-ai/langchain/issues/11661
1,937,769,538
11,661
[ "hwchase17", "langchain" ]
![image](https://github.com/langchain-ai/langchain/assets/99576727/2197b06d-c3ff-41a5-b974-6a05a4cbaf8d) ![image](https://github.com/langchain-ai/langchain/assets/99576727/ef268717-044c-475b-8e0d-75cff72e18c5) ![image](https://github.com/langchain-ai/langchain/assets/99576727/ad84e4a7-775e-4793-8f3a-3d7eee863e2e) I am using the ConversationalRetrievalChain along with DirectoryLoader from LangChain to load PDF files only and answer questions based on the different files in the directory along with returning the source of the answer from the document it got it from. As shown in the 2 pictures, in the first picture I asked it to answer in 1 single sentence but it kept answering and answering then repeated the answer. In the second picture, it rewrote the question again and then answered along with repeating the answers. In the 3rd picture, it answered a question thats not in any of the files in the Directory and it then shows the error of exceeding the rate limit. I'd like help on how can I make it "listen" to me and how do I fix the rate limit error? I am using OpenAI embeddings for my Q&A chatbot. Does anyone recommend other embeddings that are not OpenAI which I could use for this use case? I plan on later moving my embeddings to the cloud and deploy it. Thanks in advance
Chat model answering incorrectly and weirdly
https://api.github.com/repos/langchain-ai/langchain/issues/11657/comments
3
2023-10-11T11:39:43Z
2024-02-08T16:20:16Z
https://github.com/langchain-ai/langchain/issues/11657
1,937,562,333
11,657
[ "hwchase17", "langchain" ]
I am trying to use a prompt with my chain but I am getting an error which I'm not sure how to solve. Also, I was wondering is it possible to use a constitutional principle to apply it to the ConversationalRetrievalChain? If so, can someone guide me on how to do so? Sorry for beginner questions but I am new to all of this including coding. Thanks in advance! Code: from dotenv import load_dotenv import csv import PyPDF2 from PyPDF2 import PdfReader from langchain.document_loaders import DirectoryLoader, PyPDFLoader from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain.embeddings.openai import OpenAIEmbeddings from langchain.callbacks import get_openai_callback from langchain.chat_models import ChatOpenAI from langchain.chains import ConversationalRetrievalChain from langchain.memory import ConversationBufferMemory from langchain.prompts import PromptTemplate import time from langchain.chains.qa_with_sources import load_qa_with_sources_chain from langchain.vectorstores import Chroma from langchain.callbacks import StreamingStdOutCallbackHandler import pandas as pd from docx import Document from nltk.tokenize import sent_tokenize, word_tokenize from collections import Counter from nltk.corpus import stopwords import os def print_letter_by_letter(text): for char in text: print(char, end='', flush=True) time.sleep(0.02) def main(): load_dotenv() # directory_path = input("Copy your directory path here or upload a file: ") directory_path = "C:\\Users\\Asus\\OneDrive\\Documents\\Vendolista" pdf_loader = DirectoryLoader(directory_path, glob="**/*.pdf", show_progress=True, use_multithreading=True, silent_errors=True, loader_cls = PyPDFLoader) documents = pdf_loader.load() print(str(len(documents))+ " documents loaded") llm = ChatOpenAI(temperature = 0, model_name='gpt-3.5-turbo', callbacks=[StreamingStdOutCallbackHandler()], streaming = True) # Split into chunks text_splitter = RecursiveCharacterTextSplitter( chunk_size=1000, chunk_overlap=200, ) chunks = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() persist_directory = "C:\\Users\\Asus\\OneDrive\\Documents\\Vendolista" knowledge_base = Chroma.from_documents(chunks, embeddings, persist_directory = persist_directory) #save to disk knowledge_base.persist() #To delete the DB we created at first so that we can be sure that we will load from disk as fresh db knowledge_base = None new_knowledge_base = Chroma(persist_directory = persist_directory, embedding_function = embeddings) custom_template = (""" Help the users with answering questions based on the document. """ ) # prompt = PromptTemplate(input_variables=[], template=custom_template) prompt = PromptTemplate.from_template(custom_template) memory = ConversationBufferMemory(memory_key = "chat_history", return_messages = True) model = ConversationalRetrievalChain.from_llm(llm = llm, chain_type = "refine", retriever = new_knowledge_base.as_retriever(), memory = memory, combine_docs_chain_kwargs = {"prompt":prompt} ) while True: question = input("Ask me anything about the files (type 'exit' to quit): ") if question.lower() in ["exit"] and len(question) == 4: end_chat = "Thank you for visiting us! 
Have a nice day" print_letter_by_letter(end_chat) break if question: chat_history = [] # with get_openai_callback() as cb: response = model({"question": question}, return_only_outputs = True) chat_history.append(('user', question)) chat_history.append(('AI', response)) print("AI:", response) if __name__ == '__main__': main() Error: 86 documents loaded Traceback (most recent call last): File "C:\Users\Asus\Documents\Vendolista\app.py", line 100, in <module> main() File "C:\Users\Asus\Documents\Vendolista\app.py", line 73, in main model = ConversationalRetrievalChain.from_llm(llm = llm, File "C:\Users\Asus\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 341, in from_llm doc_chain = load_qa_chain( File "C:\Users\Asus\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\question_answering\__init__.py", line 249, in load_qa_chain return loader_mapping[chain_type]( File "C:\Users\Asus\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\question_answering\__init__.py", line 206, in _load_refine_chain return RefineDocumentsChain( File "C:\Users\Asus\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\load\serializable.py", line 97, in __init__ super().__init__(**kwargs) File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for RefineDocumentsChain prompt extra fields not permitted (type=value_error.extra)
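One way to read the validation error: the `refine` combine-documents chain does not accept a single `prompt` kwarg (it expects question/refine prompts), so the `combine_docs_chain_kwargs={"prompt": ...}` pattern fits `chain_type="stuff"` with a template exposing `{context}` and `{question}`. A hedged sketch, reusing `llm`, `memory` and `new_knowledge_base` from the script above.

```python
from langchain.prompts import PromptTemplate
from langchain.chains import ConversationalRetrievalChain

qa_prompt = PromptTemplate.from_template(
    "Help the user answer questions based on the document.\n\n"
    "{context}\n\n"
    "Question: {question}\n"
    "Answer:"
)

model = ConversationalRetrievalChain.from_llm(
    llm=llm,
    chain_type="stuff",
    retriever=new_knowledge_base.as_retriever(),
    memory=memory,
    combine_docs_chain_kwargs={"prompt": qa_prompt},
)
```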
Prompt invalid with ConversationalRetrievalChain
https://api.github.com/repos/langchain-ai/langchain/issues/11656/comments
5
2023-10-11T11:29:12Z
2024-03-13T19:57:58Z
https://github.com/langchain-ai/langchain/issues/11656
1,937,545,931
11,656
[ "hwchase17", "langchain" ]
### Feature request I want to make a similar implementation of SQLAlchemyCache that indexes prompts by their md5 hash. ### Motivation Postgres has a limit of 2704 bytes on an index entry. Lookups on a simple hash index can also be faster. ### Your contribution I have prepared a cache implementation and will add it in a contributing PR.
Using hash for indexing prompts SQLAlchemyCache
https://api.github.com/repos/langchain-ai/langchain/issues/11655/comments
3
2023-10-11T10:56:25Z
2023-10-23T05:39:08Z
https://github.com/langchain-ai/langchain/issues/11655
1,937,471,577
11,655
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. When building an app you would ideally call, say, db = SQLDatabase.from_uri(DATABASE_URI) in one endpoint and then reuse that db in other endpoints. The second endpoint is the one users actually hit, so responses there should be quick and only use the already-loaded db. But I cannot serialize the db created in the first endpoint, so I have to call db = SQLDatabase.from_uri(DATABASE_URI) inside the user-facing endpoint, meaning it runs every time the endpoint is hit. That adds a significant delay and doesn't make for a good user experience. How do I solve this? ### Suggestion: _No response_
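Rather than serializing the object between endpoints, a common pattern is to create it once per process and reuse it from every handler; a framework-agnostic sketch using the `DATABASE_URI` mentioned above.

```python
from functools import lru_cache
from langchain.utilities import SQLDatabase

@lru_cache(maxsize=1)
def get_db() -> SQLDatabase:
    # Created once per worker process, then reused by every request.
    return SQLDatabase.from_uri(DATABASE_URI)

# Inside any endpoint handler:
# db = get_db()
```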
Issue: Inability to serialize langchain modules such as conversational retrieval chain, SQL database
https://api.github.com/repos/langchain-ai/langchain/issues/11654/comments
3
2023-10-11T10:35:10Z
2024-02-08T16:20:20Z
https://github.com/langchain-ai/langchain/issues/11654
1,937,424,892
11,654
[ "hwchase17", "langchain" ]
### System Info IDE==PyCharm Python==3.11.4 langchain==0.0.306 pinecone-client==2.2.2 ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [X] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following function in my code with **ConversationalRetrievalChain** runs perfectly. It responds to the question "Hello" appropriately. It answers the questions related to the data in the vector_store: ```python def get_conversation_chain(vector_store): llm = ChatOpenAI() memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True) conversation_chain = ConversationalRetrievalChain.from_llm( llm=llm, retriever=vector_store.as_retriever(), memory=memory ) return conversation_chain ``` When the chain is changed to **RetrievalQAWithSourcesChain** with new arguments then the bot responds to the user prompt "hello" with statements about Michael Jackson. It also does not answer questions to data that is contained within the vector store: ```python def get_conversation_chain(vector_store): llm = ChatOpenAI() memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True, output_key="answer") conversation_chain = RetrievalQAWithSourcesChain.from_chain_type( llm=llm, chain_type="stuff", retriever=vector_store.as_retriever(), # return_source_documents=True, memory=memory ) return conversation_chain ``` Everything else in my code remains the same when switching between the 2 functions. The issue only arises when I use the RetrievalQAWithSourcesChain function above. Also when the return_source_documents=True argument is active, it returns all documents within vector_store instead of only the documents being referenced in the answer. ### Expected behavior I would expect that the answers from the RetrievalQAWithSourcesChain are the same as with ConversationalRetrievalChain: - responds appropriately to the user prompt "hello" - provides an answer referencing data that is contained within the vector_store - plus returning relevant document sources related to answer.
Hallucinations, ignoring data in vector store and returning all documents as sources
https://api.github.com/repos/langchain-ai/langchain/issues/11653/comments
2
2023-10-11T10:27:20Z
2024-02-08T16:20:25Z
https://github.com/langchain-ai/langchain/issues/11653
1,937,406,691
11,653
[ "hwchase17", "langchain" ]
### Issue with current documentation: Hello folks, I am not quite sure if my issue is an issue regarding the docs or rather the implementation. ### What is the differnece between HuggingFaceEmbeddings and HuggingFaceHubEmbeddings? Currently in [v0.0.312](https://github.com/langchain-ai/langchain/releases/tag/v0.0.312), there are two pretty similar modules in the `langchain` package, namely `libs/langchain/langchain/embeddings/huggingface.py` and `libs/langchain/langchain/embeddings/huggingface_hub.py` (compare https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/embeddings/huggingface.py and https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/embeddings/huggingface_hub.py). Moreover, these modules provide pretty similar classes: `HuggingFaceEmbeddings` and `HuggingFaceHubEmbeddings`. When should I use which one? This question arises as a consequence of the described example below. ### Relevant Example Considering the task of using a LLM with additional documents, you could consider the following code: ``` from langchain.llms.huggingface_hub import HuggingFaceHub from langchain.prompts import PromptTemplate from langchain.vectorstores.faiss import FAISS from langchain.chains import LLMChain from langchain.embeddings import HuggingFaceHubEmbeddings, HuggingFaceEmbeddings from langchain.text_splitter import RecursiveCharacterTextSplitter model = HuggingFaceHub(repo_id=llm, model_kwargs={"temperature": 0.2, "max_length": 64, "raw_response": True}, huggingfacehub_api_token=HUGGINGFACEHUB_API_KEY) prompt = PromptTemplate(input_variables=["question", "docs"], template=""" Answer the following question: {question} By searching the following documents: {docs} If you feel like you don't have enough information to answer the question, say "I don't know". Your answers should be verbose and detailed. """) chain = LLMChain(llm=model, prompt=prompt) docs = "bla bli blub ..." text_splitter = RecursiveCharacterTextSplitter( chunk_size=5, chunk_overlap=0 ) documents = text_splitter.split_text(document) # A) embeddings = HuggingFaceEmbeddings() # B) alternatively # embeddings = HuggingFaceHubEmbeddings(huggingfacehub_api_token=self.huggingfacehub_api_key) self.documents_db = FAISS.from_texts(documents, embeddings) docs = self.documents_db.similarity_search(prompt, k=4) docs_page_content = " ".join([d.page_content for d in docs]) response = self.llm_chain.run(question=prompt, docs=docs_page_content) response = response.replace("\n", "") ``` ### Observed Behaviour #### A) Everything seems to work reliable and smooth without any errors or unexpected outputs. #### B) I did not conduct a study, but in my empirical testings I could observe the following. Sometimes it works like A) and everything seems to work smooth and reliable, However, sometimes (approximately 50% of the time i am running this script), I am getting one of the following two different errors: ##### KeyError ``` You're using a different task than the one specified in the repository. 
Be sure to know what you're doing :) Traceback (most recent call last): File "/qa_docs.py", line 59, in <module> qa.add_document_to_context(document) File "/qa.py", line 79, in add_document_to_context self.documents_db = FAISS.from_texts(documents, embeddings) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lib/python3.11/site-packages/langchain/vectorstores/faiss.py", line 603, in from_texts return cls.__from( ^^^^^^^^^^^ File "/lib/python3.11/site-packages/langchain/vectorstores/faiss.py", line 562, in __from index = faiss.IndexFlatL2(len(embeddings[0])) ~~~~~~~~~~^^^ KeyError: 0 ``` ##### ChunkedEncodingError ``` You're using a different task than the one specified in the repository. Be sure to know what you're doing :) Traceback (most recent call last): File "/lib/python3.11/site-packages/urllib3/response.py", line 710, in _error_catcher yield File "/lib/python3.11/site-packages/urllib3/response.py", line 1073, in read_chunked self._update_chunk_length() File "/lib/python3.11/site-packages/urllib3/response.py", line 1008, in _update_chunk_length raise InvalidChunkLength(self, line) from None urllib3.exceptions.InvalidChunkLength: InvalidChunkLength(got length b'', 0 bytes read) The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/lib/python3.11/site-packages/requests/models.py", line 816, in generate yield from self.raw.stream(chunk_size, decode_content=True) File "/lib/python3.11/site-packages/urllib3/response.py", line 933, in stream yield from self.read_chunked(amt, decode_content=decode_content) File "/lib/python3.11/site-packages/urllib3/response.py", line 1061, in read_chunked with self._error_catcher(): File "/lib/python3.11/contextlib.py", line 155, in __exit__ self.gen.throw(typ, value, traceback) File "/lib/python3.11/site-packages/urllib3/response.py", line 727, in _error_catcher raise ProtocolError(f"Connection broken: {e!r}", e) from e urllib3.exceptions.ProtocolError: ("Connection broken: InvalidChunkLength(got length b'', 0 bytes read)", InvalidChunkLength(got length b'', 0 bytes read)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "qa_docs.py", line 59, in <module> qa.add_document_to_context(document) File "qa.py", line 79, in add_document_to_context self.documents_db = FAISS.from_texts(documents, embeddings) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lib/python3.11/site-packages/langchain/vectorstores/faiss.py", line 602, in from_texts embeddings = embedding.embed_documents(texts) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lib/python3.11/site-packages/langchain/embeddings/huggingface_hub.py", line 90, in embed_documents responses = self.client(inputs=texts, params=_model_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lib/python3.11/site-packages/huggingface_hub/inference_api.py", line 190, in __call__ response = get_session().post(self.api_url, headers=self.headers, json=payload, data=data) File "/lib/python3.11/site-packages/requests/sessions.py", line 637, in post return self.request("POST", url, data=data, json=json, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lib/python3.11/site-packages/requests/sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lib/python3.11/site-packages/requests/sessions.py", line 747, in send r.content File "/lib/python3.11/site-packages/requests/models.py", line 899, in content self._content = 
b"".join(self.iter_content(CONTENT_CHUNK_SIZE)) or b"" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/lib/python3.11/site-packages/requests/models.py", line 818, in generate raise ChunkedEncodingError(e) requests.exceptions.ChunkedEncodingError: ("Connection broken: InvalidChunkLength(got length b'', 0 bytes read)", InvalidChunkLength(got length b'', 0 bytes read)) ``` ### Remarks about the documents Important note: the value of param `documents` is calculated by another script and is not fixed. It is a textual inference of another model. I already thought about that being the cause of the errors I encounter in some script executions but I have no idea how this could be. I can assure that there is always at least 100 characters that are being the value of `documents`. However, I think that this problem already may be answered if I fully understand the differences between the two classes `HuggingFaceHubEmbeddings` and `HuggingFaceEmbeddings`. ### Related langchain documentation - https://python.langchain.com/docs/integrations/vectorstores/faiss - https://python.langchain.com/docs/integrations/providers/huggingface - https://python.langchain.com/docs/integrations/llms/huggingface_hub.html - https://python.langchain.com/docs/integrations/text_embedding/huggingfacehub.html If there is more information needed, feel free to ask. I am looking forward for every help I can get! ### Idea or request for content: When to use HuggingFaceEmbeddings vs. when to use HuggingFaceHubEmbeddings.
DOC: HuggingFaceEmbeddings vs. HuggingFaceHubEmbeddings
https://api.github.com/repos/langchain-ai/langchain/issues/11651/comments
4
2023-10-11T09:37:15Z
2024-02-11T16:11:51Z
https://github.com/langchain-ai/langchain/issues/11651
1,937,320,086
11,651
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I've built a bot and I'm now seeing the following issue. Architecture: 1. Cloud Run service 1: a Streamlit app that accepts the user_input (question) and sends it to the Flask API in service 2. 2. Cloud Run service 2: Flask code that calls the LLM and returns the response to Streamlit. I'm creating qa as below:
```python
qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever,
    condense_question_prompt=CONDENSE_QUESTION_PROMPT,
    verbose=False,
    memory=memory,
    condense_question_llm=llm,
    return_generated_question=True,
    combine_docs_chain_kwargs={"prompt": promptHist},
    return_source_documents=True,
)
```
Issue: when different users ask questions from different Streamlit sessions, the chain uses the same chat_history buffer and mixes their responses. Question: is there a way to keep the "qa" sessions specific to each user and avoid a shared memory/chat buffer? Is there a utility in LangChain for this? ### Suggestion: _No response_
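A hedged sketch of the common workaround: keep one chain (and one memory) per session, keyed by a session id that the Streamlit front end sends along with each request. The `llm` and `retriever` objects are assumed to be the ones from the report above.

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

_chains: dict = {}  # session_id -> chain, each with its own memory


def get_chain(session_id: str) -> ConversationalRetrievalChain:
    if session_id not in _chains:
        memory = ConversationBufferMemory(
            memory_key="chat_history",
            return_messages=True,
            output_key="answer",  # needed when return_source_documents=True
        )
        _chains[session_id] = ConversationalRetrievalChain.from_llm(
            llm,          # assumed: the LLM from the report
            retriever,    # assumed: the retriever from the report
            memory=memory,
            return_source_documents=True,
        )
    return _chains[session_id]


# In the Flask endpoint:
#   chain = get_chain(request.json["session_id"])
#   result = chain({"question": request.json["question"]})
```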
ConversationalRetrievalChain + Memory ( New issue for me)
https://api.github.com/repos/langchain-ai/langchain/issues/11650/comments
9
2023-10-11T08:41:59Z
2024-02-13T16:10:52Z
https://github.com/langchain-ai/langchain/issues/11650
1,937,199,873
11,650
[ "hwchase17", "langchain" ]
### System Info Langchain version : 0.0.312 python version: 3.11.4 conda env platform: osx-arm64 / x86 system : macos M1 13.2.1 (22D68) ### Who can help? @hwchase17 @agola11 ### Information - [x] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) local_path = ("/Users/langchain-usecase/04-gpt4all/llama-2-7b-chat.ggmlv3.q4_0.bin") # Callbacks support token-wise streaming callbacks = [StreamingStdOutCallbackHandler()] # Verbose is required to pass to the callback manager llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True) llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) ``` bash output ``` bash Traceback (most recent call last): File "/Users/mac/EasyGithub/test/gpt4all.py", line 45, in <module> llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/mac/miniforge3/envs/langchain/lib/python3.11/site-packages/langchain/load/serializable.py", line 97, in __init__ super().__init__(**kwargs) File "/Users/mac/miniforge3/envs/langchain/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__ raise validation_error pydantic.v1.error_wrappers.ValidationError: 1 validation error for GPT4All __root__ -> __root__ Serializable.__init__() takes 1 positional argument but 2 were given (type=type_error) ``` ### Expected behavior The same code runs on my other M2 MacMini and generates a reply. I don't know what caused the error. ![image](https://github.com/langchain-ai/langchain/assets/50387804/a4ea3302-765d-4172-89d2-aabe8fcb713b) M2 MacMini env info: python==3.11.4 langchain==0.0.312 conda env platform: osx-arm64 macos==13.4.1
GPT4All Model Serializable.__init__ Error
https://api.github.com/repos/langchain-ai/langchain/issues/11649/comments
1
2023-10-11T07:41:44Z
2023-10-11T07:53:52Z
https://github.com/langchain-ai/langchain/issues/11649
1,937,072,756
11,649
[ "hwchase17", "langchain" ]
### System Info langchain=0.0.312 ### code: from langchain.llms import Minimax minimax = Minimax(minimax_api_key=minimax_api_key, minimax_group_id=minimax_group_id) print(minimax("What is the difference between panda and bear?")) ### or: from langchain.chat_models import MiniMaxChat from langchain.schema import HumanMessage chat = MiniMaxChat() res = chat( [ HumanMessage( content="Translate this sentence from English to French. I love programming." ) ] ) print(res) ### error log: File "E:\AI\Projects\AIProject\AIApiTest\langchain_minimax.py", line 43, in <module> llm = Minimax() File "D:\anaconda3\envs\ai\lib\site-packages\langchain\llms\minimax.py", line 113, in __init__ self._client = _MinimaxEndpointClient( File "pydantic\main.py", line 357, in pydantic.main.BaseModel.__setattr__ ValueError: "Minimax" object has no field "_client" ### Who can help? _No response_ ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ### code: from langchain.llms import Minimax minimax = Minimax(minimax_api_key=minimax_api_key, minimax_group_id=minimax_group_id) print(minimax("What is the difference between panda and bear?")) ### or: from langchain.chat_models import MiniMaxChat from langchain.schema import HumanMessage chat = MiniMaxChat() res = chat( [ HumanMessage( content="Translate this sentence from English to French. I love programming." ) ] ) print(res) ### Expected behavior Fix this bug
ValueError: "Minimax" object has no field "_client"
https://api.github.com/repos/langchain-ai/langchain/issues/11648/comments
2
2023-10-11T06:52:49Z
2024-02-08T16:20:41Z
https://github.com/langchain-ai/langchain/issues/11648
1,936,992,404
11,648
[ "hwchase17", "langchain" ]
### System Info langchain==0.0.311 python==3.9 I am trying to find out the similarity search score. but I got the score is abnormal. I hope the score is between 0 and 1. ![image](https://github.com/langchain-ai/langchain/assets/42560553/cbbf8328-29b5-423f-84e4-7d40f4baf280) ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python from torch import cuda from langchain.embeddings.huggingface import HuggingFaceEmbeddings from langchain.vectorstores import (FAISS, Chroma) from langchain.document_loaders import CSVLoader embed_model_id = 'E:/modelscope/model/hggingface/text2vec-large-chinese' device = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu' embedding = HuggingFaceEmbeddings( model_name=embed_model_id, model_kwargs={'device': device}, encode_kwargs={'device': device, 'batch_size': 8} ) def save_faiss_index(file_path: str = './label_remark_openai.csv', encoding: str = 'utf-8', index_name: str = 'label_remark_openai_index', save_path: str = "E:/pythonProject/information_demo/faiss_data"): document = CSVLoader(file_path=file_path, encoding=encoding).load() FAISS.normalize_L2 = True db = FAISS.from_documents(documents=document, embedding=embedding, normalize_L2=True) db.save_local(index_name=index_name, folder_path=save_path) def get_faiss_index(index_name: str = 'label_remark_openai_index', folder_path: str = "E:/pythonProject/information_demo/faiss_data"): local_db = FAISS.load_local(index_name=index_name, embeddings=embedding, folder_path=folder_path, normalize_L2=True) return local_db if __name__ == '__main__': index = get_faiss_index() docs_and_scores = index.similarity_search_with_score("query text", k=10) for doc, score in docs_and_scores: print(f"Content: {doc.page_content}, Metadata: {doc.metadata}, Score: {score}") ``` ### Expected behavior I hope the score is between 0 and 1.
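For context, a hedged post-processing sketch: FAISS's `IndexFlatL2` returns squared L2 distances, so the raw score is not bounded by 1. If the embeddings are unit-normalized (which `normalize_L2=True` is meant to ensure; treat this as an assumption about the text2vec model), the distance can be converted to cosine similarity and rescaled into [0, 1]. `similarity_search_with_relevance_scores` applies a related mapping internally, but its default Euclidean formula can still fall slightly outside [0, 1].

```python
docs_and_scores = index.similarity_search_with_score("query text", k=10)

for doc, squared_l2 in docs_and_scores:
    # For unit vectors: ||a - b||^2 = 2 - 2 * cos(a, b)  =>  cos = 1 - d2 / 2
    cosine_sim = 1.0 - squared_l2 / 2.0   # in [-1, 1]
    score_01 = (cosine_sim + 1.0) / 2.0   # rescaled into [0, 1]
    print(f"Content: {doc.page_content[:40]}..., Score: {score_01:.3f}")
```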
FAISS similarity search with score
https://api.github.com/repos/langchain-ai/langchain/issues/11645/comments
2
2023-10-11T03:21:59Z
2024-02-13T16:10:58Z
https://github.com/langchain-ai/langchain/issues/11645
1,936,721,958
11,645
[ "hwchase17", "langchain" ]
### System Info Python version - Python 3 Langchain version - I am not completely sure, but I think 0.0.196 ### Who can help? @eyurtsev - can you help with the above issue as I see you look after DataLoader Abstractions? ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Code I wrote: !pip install arxiv !pip install pymupdf from langchain.document_loaders import ArxivLoader docs = ArxivLoader(query="2303.10130", load_max_docs=5, load_all_available_meta='FALSE').load() ### Expected behavior Problem: The problem is that I expected 'docs' variable to have at least 5 pages (as the article has 20+ pages) given i have passed the parameter load_max_docs = 5, but it only has 1. I tried with different numbers as well, but to no avail.
ArxivLoader always returns 1 document
https://api.github.com/repos/langchain-ai/langchain/issues/11644/comments
5
2023-10-11T02:24:01Z
2023-10-14T04:35:52Z
https://github.com/langchain-ai/langchain/issues/11644
1,936,650,375
11,644
[ "hwchase17", "langchain" ]
### System Info Flask==3.0.0 Flask-Cors==4.0.0 langchain==0.0.312 openai==0.28.1 pandas==2.1.1 tabulate==0.9.0 numpy==1.26.0 ### Who can help? @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [X] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [X] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ```python agent = create_csv_agent( ChatOpenAI(temperature=0, model="gpt-4", openai_api_key=api_key, verbose=True), ["https://loyp2qzippf1q4hd.public.blob.vercel-storage.com/Mock%20Data%20_%20Dumping%20Costs%20-%20Dumping%20Cost-SIU4feVnqtCwoYg2g5Rgy9K5ejfKx0.csv"], verbose=True, agent_type=AgentType.OPENAI_FUNCTIONS, return_intermediate_steps=True, agent_executor_kwargs={"handle_parsing_errors": True}, ) ``` ### Expected behavior The CSV file should be loaded accordingly
create_csv_agent cannot read files hosted on Vercel blob
https://api.github.com/repos/langchain-ai/langchain/issues/11637/comments
4
2023-10-11T00:00:18Z
2023-10-11T03:26:55Z
https://github.com/langchain-ai/langchain/issues/11637
1,936,494,704
11,637
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I submitted a PR and the Vercel 'deployment' check failed. There is no other info, and I don't seem to have access to this PR's deployment on Vercel. How do I debug this? ### Suggestion: _No response_
How to debug vercel failures?
https://api.github.com/repos/langchain-ai/langchain/issues/11635/comments
7
2023-10-10T23:10:20Z
2024-05-14T20:03:32Z
https://github.com/langchain-ai/langchain/issues/11635
1,936,432,030
11,635
[ "hwchase17", "langchain" ]
### Feature request Currently, the `save` method for `ChatPromptTemplate` is not implemented, preventing developers from saving templates as files. ### Motivation This is a core feature that should be supported by all templates. ### Your contribution If I have time I can make a PR.
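A hedged stopgap while `save` is unimplemented: LangChain's generic serialization helpers (`langchain.load.dump.dumps` / `langchain.load.load.loads`) can round-trip a `ChatPromptTemplate` through JSON by hand; any eventual `save` implementation would presumably follow the same json/yaml-suffix pattern as `BasePromptTemplate.save`. File name and prompt content below are illustrative only.

```python
from langchain.load.dump import dumps
from langchain.load.load import loads
from langchain.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages(
    [("system", "You are a helpful assistant."), ("human", "{question}")]
)

# Manual "save": serialize the template to JSON on disk.
with open("chat_prompt.json", "w") as f:
    f.write(dumps(template, pretty=True))

# Manual "load": restore it later.
with open("chat_prompt.json") as f:
    restored = loads(f.read())

print(restored.format_messages(question="What does save() do?"))
```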
feat: implement ChatPromptTemplate.save
https://api.github.com/repos/langchain-ai/langchain/issues/11631/comments
3
2023-10-10T22:20:50Z
2024-02-09T16:17:29Z
https://github.com/langchain-ai/langchain/issues/11631
1,936,362,500
11,631
[ "hwchase17", "langchain" ]
### System Info langchain version: 0.0.268 python version: 3.11 MacOS: Ventura 13.6 ### Who can help? @agola11 ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Using a Chroma vector store's from_documents method through an AWS API Gateway. The response size appears to be greater than 6MB. Is there a way to send a request whos response becomes smaller than 6MB? Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 10.0 seconds as it raised APIError: HTTP code 501 from API (Response size larger than Lambda 6MB Limit). Traceback (most recent call last): File "/Users/john_ramos/mckinsey-git/policy-y/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 753, in _interpret_response_line data = json.loads(rbody) ^^^^^^^^^^^^^^^^^ File "/usr/local/Cellar/[email protected]/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/__init__.py", line 346, in loads return _default_decoder.decode(s) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/Cellar/[email protected]/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/Cellar/[email protected]/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) ### Expected behavior Embeddings are stored in the data store after the response comes back through API gateway.
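One way to keep each proxied response under the 6 MB Lambda payload limit is to embed and insert in smaller batches. A sketch, assuming the OpenAI-compatible endpoint behind API Gateway honours normal `OpenAIEmbeddings` batching (`chunk_size` is the number of texts sent per embeddings request) and that `docs` is the document list otherwise passed to `from_documents`:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# Smaller batches -> smaller embedding responses coming back through API Gateway / Lambda.
embeddings = OpenAIEmbeddings(chunk_size=32)

db = Chroma(embedding_function=embeddings, persist_directory="./chroma_db")
batch_size = 64
for i in range(0, len(docs), batch_size):
    db.add_documents(docs[i : i + batch_size])
db.persist()
```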
Issue as a result of Chroma.from_documents, OpenAI and AWS API Gateway
https://api.github.com/repos/langchain-ai/langchain/issues/11629/comments
2
2023-10-10T22:04:33Z
2024-02-06T16:23:46Z
https://github.com/langchain-ai/langchain/issues/11629
1,936,346,884
11,629
[ "hwchase17", "langchain" ]
### Feature request Azure Cosmos DB for MongoDB vCore enables users to efficiently store, index, and query high dimensional vector data stored directly in Azure Cosmos DB for MongoDB vCore. It contains similarity measures such as COS (cosine distance), L2 (Euclidean distance) or IP (inner product) which measure the distance between the data vectors and your query vector. The data vectors that are closest to your query vector are the ones that are found to be most similar semantically and retrieved during query time. The accompanying PR would add support for Langchain Python users to store vectors from document embeddings generated from APIs such as Azure OpenAI Embeddings or Hugging Face on Azure. [Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search) ### Motivation The capability described in the feature request is currently not available for Langchain Python. ### Your contribution I will be submitting a PR for this feature request.
Add AzureCosmosDBVectorSearch VectorStore
https://api.github.com/repos/langchain-ai/langchain/issues/11627/comments
1
2023-10-10T20:55:53Z
2023-10-11T20:56:47Z
https://github.com/langchain-ai/langchain/issues/11627
1,936,230,465
11,627
[ "hwchase17", "langchain" ]
### System Info I'm using Langchain's Human Tool as part of my application. However, I am having difficulty integrating it into streamlit. I'm mainly trying to change the input_func argument in the function: `Human_Tool =load_tools(["human"], lm=llm, input_func=get_input)` I've tried many different approaches but have not had success yet. **I want to ask what might be a `get_input` function for langchains' Human Tool that integrates successfully with streamlit.** ### Who can help? @hwchase17 @agola11 ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [X] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Here are some of the things I've tried: ``` import streamlit as st def get_input() -> str: st.write("Insert your text below.") # Updated this line to include a label and hide it using label_visibility user_input = st.text_area(label="Your Text", value="", height=100, key=None, help=None, on_change=None, args=None, kwargs=None, label_visibility="hidden") submit_button = st.button("Submit") if submit_button: return user_input.strip() # This will return the text entered by the user if not submit_button: with st.empty(): for remaining in range(3600, 0, -1): st.write(f"You have {remaining//60} minutes and {remaining%60} seconds to enter the text.") time.sleep(1) st.empty() return "" # Returning empty if no input after an hour or not submitted ``` The code above opens a streamlit box that and accepts the user's input but it does not pass the users input to the agent. import streamlit as st import uuid def get_input() -> str: unique_key = str(uuid.uuid4()) user_input = st.text_area( "Insert your text below:", key=f"uniqueTextAreaKey_{unique_key}", value="", height=200, ) if st.button("Submit", key=f"uniqueButtonKey_{unique_key}"): if user_input: # check if user input is not empty st.session_state['user_input'] = user_input st.session_state['submitted'] = True st.success("Input submitted successfully!") return user_input.strip() elif 'submitted' in st.session_state and st.session_state['submitted']: st.success("Input already submitted.") return st.session_state['user_input'].strip() else: st.warning("Please enter text and press submit.") return "" This code also opens up a streamlit text box but gets stuck in an infinite loop. ### Expected behavior I want the human tool to integrate with streamlit.
How to integrate Langchain's Human Tool into Streamlit
https://api.github.com/repos/langchain-ai/langchain/issues/11626/comments
9
2023-10-10T20:26:57Z
2024-06-19T06:09:17Z
https://github.com/langchain-ai/langchain/issues/11626
1,936,153,248
11,626
[ "hwchase17", "langchain" ]
### System Info The method signatures of similaritySearchWithScore and similaritySearch are identical in Javascript. I assume this is a mistake. ``` export declare abstract class VectorStore extends Serializable { ... similaritySearch(query: string, k?: number, filter?: this["FilterType"] | undefined, _callbacks?: Callbacks | undefined): Promise<Document[]>; similaritySearchWithScore(query: string, k?: number, filter?: this["FilterType"] | undefined, _callbacks?: Callbacks | undefined): Promise<[Document, number][]>; ``` ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Open the base.d.ts file and search for the abstract class VectorStore. It contains these methods. ### Expected behavior I would assume the signature for similaritySearchWithScore should contain some kind of Score parameter that we could pass to it to only return documents with a certain score.
similaritySearchWithScore and similaritySearch method signatures are identical in javascript
https://api.github.com/repos/langchain-ai/langchain/issues/11625/comments
4
2023-10-10T20:24:24Z
2024-02-09T16:17:34Z
https://github.com/langchain-ai/langchain/issues/11625
1,936,149,678
11,625
[ "hwchase17", "langchain" ]
### System Info https://github.com/langchain-ai/langchain/releases/tag/v0.0.304 Reverting to 3.0.3 or under fixes it. The error looks something like (truncated confidential parts of the trace) ``` Traceback (most recent call last): ... File "/Users/sid.gupta/.pyenv/versions/3.11.3/lib/python3.11/site-packages/ddtrace/_monkey.py", line 205, in patch_all patch(raise_errors=False, **modules) File "/Users/sid.gupta/.pyenv/versions/3.11.3/lib/python3.11/site-packages/ddtrace/_monkey.py", line 245, in patch when_imported(module)( File "/Users/sid.gupta/.pyenv/versions/3.11.3/lib/python3.11/site-packages/ddtrace/vendor/wrapt/importer.py", line 284, in register register_post_import_hook(hook, name) File "/Users/sid.gupta/.pyenv/versions/3.11.3/lib/python3.11/site-packages/ddtrace/vendor/wrapt/decorators.py", line 470, in _synchronized return wrapped(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/sid.gupta/.pyenv/versions/3.11.3/lib/python3.11/site-packages/ddtrace/vendor/wrapt/importer.py", line 82, in register_post_import_hook hook(module) File "/Users/sid.gupta/.pyenv/versions/3.11.3/lib/python3.11/site-packages/ddtrace/_monkey.py", line 168, in on_import imported_module.patch() File "/Users/sid.gupta/.pyenv/versions/3.11.3/lib/python3.11/site-packages/ddtrace/contrib/langchain/patch.py", line 763, in patch if hasattr(langchain.embeddings, text_embedding_model): ^^^^^^^^^^^^^^^^^^^^ File "/Users/sid.gupta/.pyenv/versions/3.11.3/lib/python3.11/site-packages/langchain/__init__.py", line 332, in __getattr__ raise AttributeError(f"Could not find: {name}") AttributeError: Could not find: embeddings ``` ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Be on `ddtrace 1.17.6` and a langchain version >= 3.0.4. Running an webapp on hypercorn and fastAPI. ### Expected behavior Don't throw this trace and block the application.
Release 3.0.4 breaks compatibility with ddtrace.
https://api.github.com/repos/langchain-ai/langchain/issues/11624/comments
3
2023-10-10T20:12:25Z
2024-02-06T16:23:56Z
https://github.com/langchain-ai/langchain/issues/11624
1,936,133,212
11,624
[ "hwchase17", "langchain" ]
### System Info from langchain.docstore.document import Document from typing import Dict from langchain import PromptTemplate, SagemakerEndpoint from langchain.llms.sagemaker_endpoint import LLMContentHandler from langchain.chains.question_answering import load_qa_chain import json example_doc_1 = """ Peter and Elizabeth took a taxi to attend the night party in the city. While in the party, Elizabeth collapsed and was rushed to the hospital. Since she was diagnosed with a brain injury, the doctor told Peter to stay besides her until she gets well. Therefore, Peter stayed with her at the hospital for 3 days without leaving. """ docs = [ Document( page_content=example_doc_1, ) ] query = """How long was Elizabeth hospitalized? """ prompt_template = """Use the following pieces of context to answer the question at the end. {context} Question: {question} Answer:""" PROMPT = PromptTemplate( template=prompt_template, input_variables=["context", "question"] ) class ContentHandler(LLMContentHandler): content_type = "application/json" accepts = "application/json" def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes: input_dict = {"inputs": prompt, "parameters": model_kwargs} return json.dumps(input_dict).encode('utf-8') def transform_output(self, output: bytes) -> str: response_json = json.loads(output.read().decode("utf-8")) return response_json[0]["generated_text"] content_handler = ContentHandler() chain = load_qa_chain( llm=SagemakerEndpoint( endpoint_name="endpoint", region_name="ap-southeast-2", credentials_profile_name="dev", model_kwargs={"temperature": 1e-10}, endpoint_kwargs={"CustomAttributes": 'accept_eula=true'}, content_handler=ContentHandler(), ), prompt=PROMPT, ) chain({"input_documents": docs, "question": query}, return_only_outputs=True) Stacktrace : ValueError: Error raised by inference endpoint: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (424) from primary with message "{ "code":424, "message":"prediction failure", "error":"string indices must be integers" ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction run the same python code to reproduce. ### Expected behavior I am expecting a response generated by the chain.
AWS Sagemaker "code":424,
https://api.github.com/repos/langchain-ai/langchain/issues/11618/comments
3
2023-10-10T17:47:53Z
2024-02-10T16:14:07Z
https://github.com/langchain-ai/langchain/issues/11618
1,935,901,944
11,618
[ "hwchase17", "langchain" ]
### Feature request Add self-querying capability for MongoDBAtlasVectorSearch so that a natural language query can be converted to a MongoDB query and then used to search the MongoDB vector store. ### Motivation I was testing self-querying with MongoDBAtlasVectorSearch when I got an error saying that self-querying is not supported. I think it would be useful to have that functionality since the MongoDB vector store is widely used. ### Your contribution We will work on this feature and submit a pull request in the coming 1-2 months.
Self-querying retriever for MongoDBAtlasVectorSearch
https://api.github.com/repos/langchain-ai/langchain/issues/11616/comments
2
2023-10-10T16:37:06Z
2024-01-30T18:27:37Z
https://github.com/langchain-ai/langchain/issues/11616
1,935,763,708
11,616
[ "hwchase17", "langchain" ]
### Feature request [Infino callback handler](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/callbacks/infino_callback.py) as of this writing does not support ChatOpenAI models, as it does not support `on_chat_model_start` callback. Adding this callback will enable users to track latency, errors and token usage for ChatOpenAI models (in addition to existing support for OpenAI and other non-chat models). ### Motivation Infino customers have requested for this integration, as this increases Infino callback handler's coverage to OpenAI chat models. Customer request GitHub issue for Infino - https://github.com/infinohq/infino/issues/93 ### Your contribution I have a working code change for this issue, and will submit a PR shortly.
Support ChatOpenAI models in Infino callback manager
https://api.github.com/repos/langchain-ai/langchain/issues/11607/comments
5
2023-10-10T14:32:26Z
2024-02-03T07:12:26Z
https://github.com/langchain-ai/langchain/issues/11607
1,935,507,768
11,607
[ "hwchase17", "langchain" ]
### System Info openai==0.27.6 urllib3==1.26.15 pandas==2.0.1 slack-sdk==3.21.3 pydantic==2.4.2 langchain==0.0.311 SQLAlchemy==2.0.11 mysqlclient==2.2.0 pymysql==1.1.0 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction i am using AWS lambda ### Expected behavior it should run fine without MariaDB
langchain is not working on AWS Lambda: it always gives "ImportError: libmariadb.so.3: cannot open shared object file: No such file or directory", even though MariaDB is not used anywhere
https://api.github.com/repos/langchain-ai/langchain/issues/11606/comments
3
2023-10-10T14:13:21Z
2024-02-08T16:21:06Z
https://github.com/langchain-ai/langchain/issues/11606
1,935,467,502
11,606
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. I run a self-hosted LLM through LangChain's HuggingFaceTextGenInference for use in an offline environment, but when I use the map_reduce type of summarize_chain an error occurs because the code tries to fetch a tokenizer online. I would like to solve this problem so that the self-hosted LLM works fully offline. The error message is as follows: OSError: Can't load tokenizer for 'gpt2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'gpt2' is the correct path to a directory containing all relevant files for a GPT2TokenizerFast tokenizer. ### Suggestion: I would like to know how to make HuggingFaceTextGenInference use an offline tokenizer, or how to use a self-hosted LLM without HuggingFaceTextGenInference.
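For context, the map_reduce chain counts tokens via the LLM's `get_num_tokens`, whose default implementation downloads the GPT-2 tokenizer. A hedged sketch of one way to keep this offline: override `get_token_ids` with a tokenizer already saved on disk (the local path below is hypothetical).

```python
from typing import List

from transformers import AutoTokenizer
from langchain.llms import HuggingFaceTextGenInference

# Hypothetical local path: the tokenizer files must already be on disk for offline use.
LOCAL_TOKENIZER_PATH = "/models/my-llm/tokenizer"
local_tokenizer = AutoTokenizer.from_pretrained(LOCAL_TOKENIZER_PATH)


class OfflineTGI(HuggingFaceTextGenInference):
    """HuggingFaceTextGenInference whose token counting never touches the network."""

    def get_token_ids(self, text: str) -> List[int]:
        # Overrides the base-class default, which tries to download the GPT-2 tokenizer.
        return local_tokenizer.encode(text)


llm = OfflineTGI(inference_server_url="http://localhost:8080/")
```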
Using a self-hosted LLM in an offline environment
https://api.github.com/repos/langchain-ai/langchain/issues/11599/comments
4
2023-10-10T11:22:02Z
2024-02-09T16:17:44Z
https://github.com/langchain-ai/langchain/issues/11599
1,935,118,678
11,599
[ "hwchase17", "langchain" ]
### Feature request Add optional multithreading support for `TextSplitter`, e.g for the loop in `TextSplitter.create_documents`: https://github.com/langchain-ai/langchain/blob/e2a9072b806b1a45b0e4c107b30dddd0f67a453f/libs/langchain/langchain/text_splitter.py#L138-L153 Question: Is there anything opposing this idea / preventing it from a technical perspective? ### Motivation Text splitting can take up significant time and resources if a custom length function is used to measure chunk length (e.g. based on a huggingface tokenizer's encode method), especially for the `RecursiveCharacterTextSplitter`. Therefore we want to introduce multithreading support on a document level. ### Your contribution Feature Request: https://github.com/langchain-ai/langchain/issues/11595 PR: https://github.com/langchain-ai/langchain/pull/11598
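A hedged sketch of what document-level parallelism can look like from user code today, without changes to `TextSplitter`. It assumes `hf_length` is a length function backed by a Rust "fast" HuggingFace tokenizer (which releases the GIL, so threads actually help) and that `texts` / `metadatas` are the inputs being split.

```python
from concurrent.futures import ThreadPoolExecutor

from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=512,
    chunk_overlap=64,
    length_function=hf_length,  # assumed: tokenizer-based length function
)


def split_one(item):
    text, metadata = item
    return splitter.create_documents([text], metadatas=[metadata])


with ThreadPoolExecutor(max_workers=8) as pool:
    docs = [d for batch in pool.map(split_one, zip(texts, metadatas)) for d in batch]
```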
Multithreading support for TextSplitter
https://api.github.com/repos/langchain-ai/langchain/issues/11595/comments
2
2023-10-10T10:02:42Z
2024-02-08T16:21:16Z
https://github.com/langchain-ai/langchain/issues/11595
1,934,953,718
11,595
[ "hwchase17", "langchain" ]
### Feature request Hey hi hello :) Some organizations expose Azure OpenAI endpoints via some proxies that need additional HTTP headers. Currently AzureOpenAI class doesn't expose (at least I wasn't able to find one) a capability to set this. Similarly to what is possible in OpenAIEmbeddings ([https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.openai.OpenAIEmbeddings.html#langchain.embeddings.openai.OpenAIEmbeddings.headers](https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.openai.OpenAIEmbeddings.html#langchain.embeddings.openai.OpenAIEmbeddings.headers)). Something similar was already discussed ([https://github.com/langchain-ai/langchain/issues/2120](https://github.com/langchain-ai/langchain/issues/2120)) and embeddings have that already solved. It would be great to have that also for AzureOpenAI class. ### Motivation I need this to get via proxy that needs additional HTTP headers. ### Your contribution I don't think I can contribute to this.
AzureOpenAI doesn't expose parameter to set custom HTTP headers.
https://api.github.com/repos/langchain-ai/langchain/issues/11593/comments
3
2023-10-10T07:56:04Z
2024-02-06T16:24:21Z
https://github.com/langchain-ai/langchain/issues/11593
1,934,656,142
11,593
[ "hwchase17", "langchain" ]
### Feature request I want to get the ids of the documents returned when performing `similarity_search()` or `similarity_search_with_score()`. The id should be present in the metadata, e.g. metadata = {"id": id}. ### Motivation I want to update the metadata of the documents that are returned by the similarity search. This update can only be done if the returned documents carry their id in the metadata. When adding the documents to the vector db, I am not supplying ids, as they are automatically generated if not passed. ### Your contribution No contribution, but below is the change that could be made:
```python
def _results_to_docs_and_scores(results: Any) -> List[Tuple[Document, float]]:
    return [
        # TODO: Chroma can do batch querying,
        # we shouldn't hard code to the 1st result
        (
            Document(page_content=result[0], metadata=(result[1] | {'id': result[3]}) or {}),
            result[2],
        )
        for result in zip(
            results["documents"][0],
            results["metadatas"][0],
            results["distances"][0],
            results["ids"][0],
        )
    ]
```
Return ids for the document returned from the Similarity Search.
https://api.github.com/repos/langchain-ai/langchain/issues/11592/comments
3
2023-10-10T07:14:45Z
2024-02-15T16:08:35Z
https://github.com/langchain-ai/langchain/issues/11592
1,934,574,489
11,592
[ "hwchase17", "langchain" ]
### System Info langchain==0.0.311 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ``` from langchain.document_loaders import WebBaseLoader loader = WebBaseLoader("https://academyselfdefense.com/") data = loader.load() raw_text = data[0].page_content print(raw_text) ``` ### Expected behavior load proper content in website instead of` Just a moment...Enable JavaScript and cookies to continue`
Getting error "Just a moment...Enable JavaScript and cookies to continue" when loading website using WebBaseLoader
https://api.github.com/repos/langchain-ai/langchain/issues/11590/comments
4
2023-10-10T05:53:55Z
2024-03-02T12:50:18Z
https://github.com/langchain-ai/langchain/issues/11590
1,934,432,018
11,590
[ "hwchase17", "langchain" ]
### Feature request Is there any plan to support async calls for BedrockChat? ### Motivation I am using this API in a FastAPI backend: while receiving data from Bedrock I have to send data back to the frontend. For now the call blocks the event loop, and streaming is not possible in this case. ### Your contribution no
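Until native async support lands, a hedged interim sketch: run the blocking call in a worker thread so the FastAPI event loop stays responsive. This does not give token-level streaming; it only stops the sync call from blocking other requests.

```python
import asyncio

from langchain.chat_models import BedrockChat
from langchain.schema import HumanMessage

chat = BedrockChat(model_id="anthropic.claude-v2")


async def ask(prompt: str) -> str:
    # Runs the blocking Bedrock call in a worker thread; the event loop keeps serving requests.
    message = await asyncio.to_thread(chat, [HumanMessage(content=prompt)])
    return message.content
```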
Request for BedrockChat async functions(BedrockChat.agenerate).
https://api.github.com/repos/langchain-ai/langchain/issues/11589/comments
3
2023-10-10T04:36:10Z
2024-02-05T23:26:22Z
https://github.com/langchain-ai/langchain/issues/11589
1,934,334,394
11,589
[ "hwchase17", "langchain" ]
### System Info Langchain Version: 0.0.311 Python: 3.10.9 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction I loaded the my vector store from **pinecone** and try to find top-3 most similar documents with the relevance score ``` vectorstore = Pinecone(index, embed.embed_query, text_field) answers = vectorstore.similarity_search_with_score(query, 3) for item in answers: print(item[1]) # print out score ``` By running above code, I can get relevance score for those 3 documents: ``` 0.851780415 0.851505935 0.850369573 ``` It looks as expected But when I try to use `similarity_search_with_relevance_scores` to find out similar docs: ``` answers= vectorstore.similarity_search_with_relevance_scores(query, score_threshold=0.8) ``` From my understanding it should return me at least 3 docs since we do have docs similar higher than 0.85, but I got warning `No relevant docs were retrieved using the relevance score threshold 0.8` with an empty return And I tried vectorstore retriever as well and got same warning and empty return: ``` retriever = vectorstore.as_retriever( search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.8} ) answer = retriever.get_relevant_documents(query) print(answer) ``` Did I use the score function incorrectly ? If not, is there any other ways to query with score_threshold? Thanks ### Expected behavior When using ``` answers= vectorstore.similarity_search_with_relevance_scores(query, score_threshold=0.8) ``` or ``` retriever = vectorstore.as_retriever( search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.8} ) answer = retriever.get_relevant_documents(query) print(answer) ``` It should return me few of doc that have >= 0.8 similarity.
similarity_search_with_relevance_scores is not working properly with Pinecone
https://api.github.com/repos/langchain-ai/langchain/issues/11587/comments
3
2023-10-10T02:51:40Z
2023-10-17T03:19:33Z
https://github.com/langchain-ai/langchain/issues/11587
1,934,209,959
11,587
[ "hwchase17", "langchain" ]
### Feature request Introduce a comprehensive Discord Integration Toolkit to Langchain. This would allow a more seamless and direct interaction with the Discord API through an Agent interface. It should encompass capabilities like message dispatch, channel navigation, user role management, and channel administration. ### Motivation At present, Langchain offers a limited set of functionalities for interfacing with the Discord API. Specifically, the available method is the DiscordChatLoader, necessitating manual data downloading and uploading in a CSV format. This approach not only lacks versatility but is cumbersome. Furthermore, there's an absence of functions that would empower an LLM Agent to undertake tasks like messaging, channel searches, role assignments, and channel handling on Discord. ### Your contribution We are initiating the development phase for this proposal and intend to submit a PR once the feature reaches completion.
Discord Integration Toolkit
https://api.github.com/repos/langchain-ai/langchain/issues/11584/comments
1
2023-10-10T01:39:47Z
2023-12-01T05:29:28Z
https://github.com/langchain-ai/langchain/issues/11584
1,934,118,870
11,584
[ "hwchase17", "langchain" ]
### Feature request As of today, it's not possible to use Amazon API Gateway to expose an embeddings model and use it as part of a chain (e.g. ConversationalRetrievalChain). AmazonAPIGateway can currently be used only as an LLM for text generation; you cannot use it as an embeddings class for text embedding generation (e.g. as part of ConversationalRetrievalChain). ### Motivation Amazon API Gateway can be adopted for both text generation and text embeddings. Amazon Bedrock provides different types of models (LLMs and embeddings models). In this way, developers can use Amazon API Gateway for Retrieval Augmented Generation solutions. ### Your contribution The class can be defined as follows:
```python
from langchain.embeddings.bedrock import Embeddings
import requests
from typing import List


class AmazonAPIGatewayEmbeddings(Embeddings):
    def __init__(self, api_url, headers):
        self.api_url = api_url
        self.headers = headers

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        results = []
        for text in texts:
            response = requests.post(
                self.api_url, json={"inputs": text}, headers=self.headers
            )
            results.append(response.json()[0]["embedding"])
        return results

    def embed_query(self, text: str) -> List[float]:
        response = requests.post(
            self.api_url, json={"inputs": text}, headers=self.headers
        )
        return response.json()[0]["embedding"]


embeddings = AmazonAPIGatewayEmbeddings(
    api_url=f"{api_url}/invoke_model?model_id={model_id}",
    headers={
        ...  # Required headers for the API invocation
    },
)
embeddings.embed_query("Hello, how are you?")
```
AmazonAPIGatewayEmbeddings class for text embeddings
https://api.github.com/repos/langchain-ai/langchain/issues/11580/comments
1
2023-10-09T22:10:30Z
2024-02-06T16:24:26Z
https://github.com/langchain-ai/langchain/issues/11580
1,933,898,140
11,580
[ "hwchase17", "langchain" ]
### Feature request I propose to add the Python client for Arcee.ai as an LLM and retriever. `arcee.py` under [langchain/utilities](https://github.com/langchain-ai/langchain/tree/master/libs/langchain/langchain/utilities) `arcee.py` under [langchain/llms](https://github.com/langchain-ai/langchain/tree/master/libs/langchain/langchain/llms) `arcee.py` under [langchain/retrievers](https://github.com/langchain-ai/langchain/tree/master/libs/langchain/langchain/retrievers) ```python class Arcee(LLM): # client for Arcee's Domain Adapted Language Models (DALMs) # ``` ```python class ArceeRetriever(BaseRetriever): # retriever for Arcee's DALMs # ``` See: Client docs https://github.com/arcee-ai/arcee-python ### Motivation Arcee.ai offers seamless Domain Adaptation with its Specialized Domain Adapted Language Model system. I want to utilize these adapted language models on https://arcee.ai and build applications with LangChain. ### Your contribution Discussions - https://github.com/arcee-ai/arcee-python/issues/15 PR - https://github.com/langchain-ai/langchain/pull/11579
Support of Arcee.ai LLM and Retrievers
https://api.github.com/repos/langchain-ai/langchain/issues/11578/comments
2
2023-10-09T21:26:51Z
2023-10-10T19:43:10Z
https://github.com/langchain-ai/langchain/issues/11578
1,933,846,716
11,578
[ "hwchase17", "langchain" ]
### Issue with current documentation: End-to-end Example: [GPT+WolframAlpha](https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain) This link leads to a 404. A quick google search does not find a working page. ### Idea or request for content: _No response_
DOC: Wolfram Agent Link broken on README
https://api.github.com/repos/langchain-ai/langchain/issues/11574/comments
1
2023-10-09T19:15:40Z
2024-02-06T16:24:31Z
https://github.com/langchain-ai/langchain/issues/11574
1,933,650,569
11,574
[ "hwchase17", "langchain" ]
### Issue you'd like to raise. To get documents from collections (vector db) there is a method: ```python class Collection(BaseModel): name: str id: UUID metadata: Optional[CollectionMetadata] = None _client: "API" = PrivateAttr() _embedding_function: Optional[EmbeddingFunction] = PrivateAttr() def __init__( self, client: "API", name: str, id: UUID, embedding_function: Optional[EmbeddingFunction] = ef.DefaultEmbeddingFunction(), metadata: Optional[CollectionMetadata] = None, ): self._client = client self._embedding_function = embedding_function super().__init__(name=name, metadata=metadata, id=id) ''' ''' def get( self, ids: Optional[OneOrMany[ID]] = None, where: Optional[Where] = None, limit: Optional[int] = None, offset: Optional[int] = None, where_document: Optional[WhereDocument] = None, include: Include = ["metadatas", "documents"], ) -> GetResult: """Get embeddings and their associate data from the data store. If no ids or where filter is provided returns all embeddings up to limit starting at offset. Args: ids: The ids of the embeddings to get. Optional. where: A Where type dict used to filter results by. E.g. `{"$and": ["color" : "red", "price": {"$gte": 4.20}]}`. Optional. limit: The number of documents to return. Optional. offset: The offset to start returning results from. Useful for paging results with limit. Optional. where_document: A WhereDocument type dict used to filter by the documents. E.g. `{$contains: {"text": "hello"}}`. Optional. include: A list of what to include in the results. Can contain `"embeddings"`, `"metadatas"`, `"documents"`. Ids are always included. Defaults to `["metadatas", "documents"]`. Optional. Returns: GetResult: A GetResult object containing the results. """ where = validate_where(where) if where else None where_document = ( validate_where_document(where_document) if where_document else None ) ids = validate_ids(maybe_cast_one_to_many(ids)) if ids else None include = validate_include(include, allow_distances=False) return self._client._get( self.id, ids, where, None, limit, offset, where_document=where_document, include=include, ) ``` To query documents by searching for a particular term in the document `where_document = {"$contains": "langchain"}` can be passed to the `get()` method. This value for the key/operator `$contains` is case sensitive. How to search for keywords irrespective of case? ### Suggestion: I want to extract entities from a sentence and pass it get documents from the chromadb that contains that words. But this entities need to be case sensitive. If the LLM outputs a different case like for example, the document contains the keyword: "Langchain", but asking llm to extract the entity from the sentence it isnt always sure that it will generate "Langchain", it can output: "langchain". This can be handled if the first letter of the entities generated is capitalized. But this may not work for keywords like OpenCV.
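A hedged workaround sketch, assuming the installed Chroma version accepts logical operators inside `where_document` (otherwise a simpler route is to lower-case the documents at ingest time and lower-case the extracted entity before querying): query several case variants of the entity at once. Note this still misses mixed-case spellings such as "OpenCV" unless they are added to the variant list explicitly.

```python
entity = "langchain"  # whatever the LLM extracted

variants = list({entity, entity.lower(), entity.upper(), entity.capitalize(), entity.title()})
if len(variants) == 1:
    where_document = {"$contains": variants[0]}
else:
    where_document = {"$or": [{"$contains": v} for v in variants]}

results = collection.get(where_document=where_document, include=["documents", "metadatas"])
```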
Support for case insensitive query search something like "$regex" instead of "$contains"
https://api.github.com/repos/langchain-ai/langchain/issues/11571/comments
15
2023-10-09T18:43:18Z
2023-10-10T20:31:04Z
https://github.com/langchain-ai/langchain/issues/11571
1,933,607,837
11,571
[ "hwchase17", "langchain" ]
### Issue with current documentation: The current Docusaurus default settings display a link icon in the footer of the website. This icon is a small hyperlink symbol that appears next to external links in the footer. While this may be a helpful feature for some websites, it may not align with the design or functional requirements of our documentation site. ### Idea or request for content: The goal of this issue is to remove the link icon from the footer of our Docusaurus-powered website. This will result in a cleaner and more minimalistic footer design. Expected Behavior: The link icon should be removed from the footer, ensuring that external links are presented without the additional icon. Attachments: Current Footer: ![Screenshot 2023-10-09 at 11 27 21β€―PM](https://github.com/langchain-ai/langchain/assets/41548480/4bde2e20-89be-478f-8f2c-e81b580c5eba)
DOC: Remove the Link Icon in the footer due to the docusaurus default settings
https://api.github.com/repos/langchain-ai/langchain/issues/11565/comments
5
2023-10-09T17:57:51Z
2024-05-19T16:06:53Z
https://github.com/langchain-ai/langchain/issues/11565
1,933,542,876
11,565
[ "hwchase17", "langchain" ]
### System Info LangChain v0.0.304 ### Who can help? @hwchase17 @ag ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction Use a combine documents chain with Bedrock/anthropic.claude-v2 LLM: ```python from langchain.chains.question_answering import load_qa_chain from langchain.llms import Bedrock llm = Bedrock(model_id="anthropic.claude-v2") chain = load_qa_chain(llm, chain_type="map_reduce", verbose=verbose, **kwargs) chain.run(...) ``` ### Expected behavior The chain runs correctly. Instead, a series of warnings are raised about missing `transformers` library, about too long context passed to tokenizer, etc. etc., and at last the chain fails to run. ## Background The default implementation of `get_token_ids` and `get_num_tokens` in `BaseLanguageModel` uses a GPT-2 based tokenizer. The `Bedrock` LLM implementation does not override these methods, so it tries to tokenize the text with the incorrect tokenizer. In the particular case when using `anthropic.claude-v2` model, this causes incorrect token counting, requires an otherwise unnecessary dynamic dependencies (the transformers library), and emits a series of warnings about too long input text passed for tokenizer.
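A hedged interim workaround (not the library's own fix): subclass `Bedrock` and override `get_num_tokens` so the combine-documents chain counts tokens with Anthropic's tokenizer instead of the GPT-2 default. The `anthropic` SDK call below is an assumption about the installed SDK version; the fallback heuristic is deliberately rough.

```python
from langchain.llms import Bedrock


class ClaudeBedrock(Bedrock):
    """Bedrock LLM with Claude-appropriate token counting (sketch)."""

    def get_num_tokens(self, text: str) -> int:
        try:
            from anthropic import Anthropic  # assumes the `anthropic` package is installed
            # count_tokens runs locally; the key value is only needed to construct the client.
            return Anthropic(api_key="token-counting-only").count_tokens(text)
        except Exception:
            # Rough fallback so the chain still runs without the extra dependency.
            return len(text) // 4


llm = ClaudeBedrock(model_id="anthropic.claude-v2")
```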
Incorrect token counting in Bedrock LLMs
https://api.github.com/repos/langchain-ai/langchain/issues/11560/comments
2
2023-10-09T16:37:33Z
2024-02-06T16:24:36Z
https://github.com/langchain-ai/langchain/issues/11560
1,933,438,442
11,560
[ "hwchase17", "langchain" ]
### Issue with current documentation:

On the class description page there is no list of links to the class methods.

The class: langchain.chains.retrieval_qa.base.RetrievalQA
https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html#langchain.chains.retrieval_qa.base.RetrievalQA

A list like the one on this page:
https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_stdout.StreamingStdOutCallbackHandler.html#langchain.callbacks.streaming_stdout.StreamingStdOutCallbackHandler

### Idea or request for content:

_No response_
DOC: On the class description page there is no list with links to class methods
https://api.github.com/repos/langchain-ai/langchain/issues/11554/comments
1
2023-10-09T15:02:05Z
2024-02-06T16:24:41Z
https://github.com/langchain-ai/langchain/issues/11554
1,933,283,622
11,554
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

I am getting this error while setting up a RetrievalQA chain using FAISS. I am using Python 3.10, faiss_cpu 1.7.4 and langchain 0.0.310 on a Mac. The code is as follows:

```python
embeddings_model_name = os.environ.get('EMBEDDINGS_MODEL_NAME')
faiss_index = os.environ.get('FAISS_INDEX')

embeddings = HuggingFaceEmbeddings(model_name=embeddings_model_name)
self.db = FAISS.load_local(faiss_index, embeddings)
retriever = self.db.as_retriever(
    search_type="mmr",
    search_kwargs={'k': 5, 'fetch_k': 10}
)

prompt = [
    ("human", "Hello"),
    ("assistant", "Hi there!"),
]

qa = RetrievalQA.from_chain_type(
    llm=self.llm,
    chain_type="refine",
    retriever=retriever,
    return_source_documents=True,
    chain_type_kwargs={"prompt": prompt}
)

res = qa({
    "query": query,
    "prompt": prompt,
    "context": "You are helpful"
})
```

The error is as follows:

```
([ErrorWrapper(exc=ExtraError(), loc=('prompt',))], <class 'langchain.chains.combine_documents.refine.RefineDocumentsChain'>)
```

The console shows the following:

```
Prompt : [('human', 'Hello'), ('assistant', 'Hi there!')]
self.llm : LlamaCpp Params: {'model_path': 'models/llama-2-7b-chat.Q8_0.gguf', 'suffix': None, 'max_tokens': 256, 'temperature': 0.8, 'top_p': 0.9, 'logprobs': None, 'echo': False, 'stop_sequences': [], 'repeat_penalty': 1.1, 'top_k': 40}
Failed while retrieve documents: ([ErrorWrapper(exc=ExtraError(), loc=('prompt',))], <class 'langchain.chains.combine_documents.refine.RefineDocumentsChain'>)
Query response None
```

### Suggestion:

_No response_
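If I read the pydantic error correctly, `RefineDocumentsChain` has no `prompt` field, so passing `chain_type_kwargs={"prompt": ...}` (which only the "stuff" chain type accepts) is rejected as an extra field; in addition, the refine prompts must be `PromptTemplate` objects, not a list of role/text tuples, and `RetrievalQA` only expects the `"query"` input key. The sketch below shows one way it could be wired up; the kwarg names `question_prompt` and `refine_prompt` are taken from `load_qa_chain`'s refine variant around v0.0.310 and should be verified against your installed version, and `llm`/`retriever` are assumed to be the objects already built above.

```python
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate

# The refine chain uses {context_str}/{question} for the first pass and
# {existing_answer} for subsequent refinement passes.
question_prompt = PromptTemplate.from_template(
    "You are helpful.\n\nContext:\n{context_str}\n\nQuestion: {question}\nAnswer:"
)
refine_prompt = PromptTemplate.from_template(
    "Refine the existing answer using the new context.\n"
    "Question: {question}\nExisting answer: {existing_answer}\n"
    "New context:\n{context_str}\nRefined answer:"
)

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="refine",
    retriever=retriever,
    return_source_documents=True,
    chain_type_kwargs={
        "question_prompt": question_prompt,
        "refine_prompt": refine_prompt,
    },
)

res = qa({"query": "Hello"})  # only the "query" key is expected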
Issue: Getting an error while setting up a RetrievalQA conversation with faiss_cpu
https://api.github.com/repos/langchain-ai/langchain/issues/11548/comments
8
2023-10-09T12:24:24Z
2024-02-13T16:11:08Z
https://github.com/langchain-ai/langchain/issues/11548
1,932,951,692
11,548
[ "hwchase17", "langchain" ]
### System Info

max_marginal_relevance_search is mentioned in the ElasticSearchStore python documentation, but when calling the referenced API with langchain 0.0.310 and Python 3.9 I receive a NotImplementedError.

https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.elasticsearch.ElasticsearchStore.html#langchain.vectorstores.elasticsearch.ElasticsearchStore.max_marginal_relevance_search

### Who can help?

_No response_

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```
vector_store = ElasticsearchStore(
    embedding=OpenAIEmbeddings(),
    index_name="{INDEX_NAME}",
    es_cloud_id="{CLOUD_ID}",
    es_user="{USER}",
    es_password="{PASSWORD}"
)

pages = vector_store.max_marginal_relevance_search("test query")
```

### Expected behavior

max_marginal_relevance_search functions as per the documentation.
Support max_marginal_relevance_search for ElasticSearchStore
https://api.github.com/repos/langchain-ai/langchain/issues/11547/comments
3
2023-10-09T12:02:23Z
2024-02-09T16:17:58Z
https://github.com/langchain-ai/langchain/issues/11547
1,932,912,761
11,547
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

https://arxiv.org/abs/2305.10601

### Suggestion:

https://arxiv.org/abs/2305.10601
Issue: Add Examples of implementing Tree of Thoughts using Langchain
https://api.github.com/repos/langchain-ai/langchain/issues/11546/comments
4
2023-10-09T11:54:58Z
2024-02-01T18:48:53Z
https://github.com/langchain-ai/langchain/issues/11546
1,932,899,628
11,546
[ "hwchase17", "langchain" ]
### Issue you'd like to raise.

Hi, I am trying to use a Hugging Face model with LangChain. However, I get this warning:

"FutureWarning: '__init__' (from 'huggingface_hub.inference_api') is deprecated and will be removed from version '0.19.0'. `InferenceApi` client is deprecated in favor of the more feature-complete `InferenceClient`. Check out this guide to learn how to convert your script to use it: https://huggingface.co/docs/huggingface_hub/guides/inference#legacy-inferenceapi-client."

This is my code:

```python
os.environ["HUGGINGFACEHUB_API_TOKEN"] = huggingFace_api_key
llm_repo_id = "google/flan-t5-xxl"

def generate_pet_name(animal_type, pet_color):
    llm = HuggingFaceHub(repo_id=llm_repo_id, model_kwargs={"temperature": 0.5})

    prompt_template = PromptTemplate(
        input_variables=['animal_type', 'animal_color'],
        template="I have an {animal_type} with {animal_color} in color. Suggest me five name which sound" +
                 "appropriate with the color for my {animal_type}")

    name_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="pet_name")

    response = name_chain({'animal_type': animal_type, 'animal_color': pet_color})

    return response
```

How do I get rid of this warning?

### Suggestion:

_No response_
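For what it's worth, this FutureWarning is emitted by the `huggingface_hub` package that `HuggingFaceHub` uses internally, not by your code, so it is harmless until LangChain migrates to `InferenceClient`. If you only want to silence it in the meantime, a standard-library warnings filter is enough; this is just a sketch that hides the message rather than fixing the underlying deprecation, and the message pattern is an assumption based on the warning text above.

```python
import warnings

# Silence only the huggingface_hub InferenceApi deprecation notice.
warnings.filterwarnings("ignore", message=".*InferenceApi.*", category=FutureWarning)
```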
Issue: Inference Client in Lang chain
https://api.github.com/repos/langchain-ai/langchain/issues/11545/comments
5
2023-10-09T10:15:14Z
2024-02-12T16:11:59Z
https://github.com/langchain-ai/langchain/issues/11545
1,932,738,201
11,545
[ "hwchase17", "langchain" ]
### Feature request

Based on the discussion in https://github.com/langchain-ai/langchain/issues/11540:

The WebBaseLoader in LangChain has a default User-Agent set in the session headers, and this would be a good enhancement for the RecursiveUrlLoader as well. Here's a potential solution:

```python
class RecursiveUrlLoader(BaseLoader):
    """Load all child links from a URL page."""

    def __init__(
        self,
        url: str,
        max_depth: Optional[int] = 2,
        use_async: Optional[bool] = None,
        extractor: Optional[Callable[[str], str]] = None,
        metadata_extractor: Optional[Callable[[str, str], str]] = None,
        exclude_dirs: Optional[Sequence[str]] = (),
        timeout: Optional[int] = 10,
        prevent_outside: Optional[bool] = True,
        link_regex: Union[str, re.Pattern, None] = None,
        headers: Optional[dict] = None,
        check_response_status: bool = False,
    ) -> None:
        ...
        self.headers = headers if headers is not None else {"User-Agent": "Mozilla/5.0"}
        ...
```

### Motivation

The RecursiveUrlLoader needs a default User-Agent on its session, like WebBaseLoader has (https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/document_loaders/web_base.py); otherwise some of the websites we try to scrape return a forbidden error. Troubleshooting the error took a lot of time, and we finally realised it was due to the missing headers.

```python
loader = RecursiveUrlLoader(url=web_page, max_depth=1, extractor=lambda x: Soup(x, "html.parser").text)
docs = loader.load()
docs[0]
# Document(page_content='\n403 Forbidden\n\n403 Forbidden\nnginx\n\n\n', metadata={'source': '.....', 'title': '403 Forbidden', 'language': None})

from fake_useragent import UserAgent
header_template = {}
header_template["User-Agent"] = UserAgent().random

loader = RecursiveUrlLoader(url=web_page, max_depth=1, headers=header_template, extractor=lambda x: Soup(x, "html.parser").text)
docs = loader.load()
docs[0]
# Document(page_content="Hello and Welcome to....)
```

### Your contribution

Yes
User-Agent needs to be set for RecursiveUrlLoader
https://api.github.com/repos/langchain-ai/langchain/issues/11541/comments
2
2023-10-09T06:37:51Z
2024-02-06T16:24:56Z
https://github.com/langchain-ai/langchain/issues/11541
1,932,397,687
11,541
[ "hwchase17", "langchain" ]
### System Info

Python: 3.10

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```python
>>> loader = RecursiveUrlLoader(url=web_page, max_depth=1, extractor=lambda x: Soup(x, "html.parser").text)
>>> docs = loader.load()
>>> docs[0]
Document(page_content='\n403 Forbidden\n\n403 Forbidden\nnginx\n\n\n', metadata={'source': '.....', 'title': '403 Forbidden', 'language': None})
>>> from fake_useragent import UserAgent
>>> header_template = {}
>>> header_template["User-Agent"] = UserAgent().random
>>> loader = RecursiveUrlLoader(url=web_page, max_depth=1, headers=header_template, extractor=lambda x: Soup(x, "html.parser").text)
>>> docs = loader.load()
>>> docs[0]
Document(page_content="Hello and Welcome to....)
```

### Expected behavior

The RecursiveUrlLoader needs a default User-Agent on its session, like WebBaseLoader has (https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/document_loaders/web_base.py); otherwise some of the websites we try to scrape return a forbidden error. Troubleshooting the error took a lot of time, and we finally realised it was due to the missing headers.
Recursive URL doesn't work on some websites until User-Agent is added
https://api.github.com/repos/langchain-ai/langchain/issues/11540/comments
2
2023-10-09T05:57:59Z
2024-02-09T16:18:08Z
https://github.com/langchain-ai/langchain/issues/11540
1,932,355,135
11,540
[ "hwchase17", "langchain" ]
### System Info

langchain==0.0.310

### Who can help?

@joemcelroy: it seems the method `_select_relevance_score_fn` was never implemented in the Elasticsearch vector store.

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```python
elastic_vector_search = ElasticsearchStore(
    index_name='langchain-demo',
    es_connection=es,
    embedding=embedding,
)

retriever = elastic_vector_search.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={'score_threshold': 0.8})
```

Error log:

```
line 268, in similarity_search_with_relevance_scores
    docs_and_similarities = self._similarity_search_with_relevance_scores(
  File "/usr/local/lib/python3.10/site-packages/langchain/schema/vectorstore.py", line 242, in _similarity_search_with_relevance_scores
    relevance_score_fn = self._select_relevance_score_fn()
  File "/usr/local/lib/python3.10/site-packages/langchain/schema/vectorstore.py", line 211, in _select_relevance_score_fn
    raise NotImplementedError
```

### Expected behavior

I hope the method can be implemented and the changes submitted as soon as possible. Thank you for your understanding.
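Until the store implements `_select_relevance_score_fn`, one stopgap is to subclass it and supply your own relevance function. The sketch below is only an illustration, not the fix this issue asks for: it assumes the index uses Elasticsearch's cosine similarity, whose `_score` is already normalised to the 0-1 range, so the score is passed through unchanged; the class name `PatchedElasticsearchStore` is made up, and `es`/`embedding` are the objects from the reproduction above.

```python
from langchain.vectorstores.elasticsearch import ElasticsearchStore


class PatchedElasticsearchStore(ElasticsearchStore):
    """Stopgap subclass that supplies the missing relevance-score function."""

    def _select_relevance_score_fn(self):
        # Assumes cosine similarity, where Elasticsearch's _score already lies in [0, 1];
        # adjust this function if your index uses a different similarity metric.
        return lambda score: score


elastic_vector_search = PatchedElasticsearchStore(
    index_name="langchain-demo",
    es_connection=es,
    embedding=embedding,
)

retriever = elastic_vector_search.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.8},
)
```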
The ElasticsearchStore implementation is not correct.
https://api.github.com/repos/langchain-ai/langchain/issues/11539/comments
8
2023-10-09T03:43:54Z
2024-01-22T18:52:22Z
https://github.com/langchain-ai/langchain/issues/11539
1,932,250,317
11,539
[ "hwchase17", "langchain" ]
### Feature request

We are seeking to add a tool to LangChain for the Twitter API (v1.1). Integrating the API as a tool will allow agents to search for tweets and timelines using a specific search query that filters by users, locations, hashtags, etc. in order to respond to prompts.

### Motivation

Although LangChain currently has TwitterTweetLoader, the Twitter API provides many more parameters that are not exposed through LangChain. TwitterTweetLoader currently only allows us to specify a list of users to return tweets from and the maximum number of tweets. It would be beneficial to have more options for specifying [different operators in search queries](https://developer.twitter.com/en/docs/twitter-api/v1/rules-and-filtering/search-operators). A tool also allows an agent to actively use the API to respond to prompts, without the user having to manually create their own custom tool or load tweets manually.

### Your contribution

We have a small team of developers who will be working on this feature request, and we will submit a pull request in 1-2 months which implements it. We will do our best to follow the guidelines for contributions, as stated in contributing.md.
Integrating Twitter search API as a tool
https://api.github.com/repos/langchain-ai/langchain/issues/11538/comments
3
2023-10-09T02:28:26Z
2023-12-11T03:16:11Z
https://github.com/langchain-ai/langchain/issues/11538
1,932,202,481
11,538
[ "hwchase17", "langchain" ]
### Feature request

Create a natural language interface with DGraph, enabling LangChain to connect more effectively with clients using DGraph, and with the DGraph community overall.

### Motivation

LangChain already boasts a range of graph integrations with databases like Neo4j and AWS Neptune. Including DGraph in LangChain's roster of graph database integrations will significantly enhance LangChain's compatibility with the wide array of software engineering projects that rely on DGraph, strengthening LangChain's position as a versatile LLM solutions provider for the DGraph-powered ecosystem.

### Your contribution

We will be looking to submit a pull request by the end of November that will contain the required code additions along with the requirements as per CONTRIBUTING.MD (involving adding a demo notebook in docs/modules and adding unit and integration tests).
Add DGraph integration with LangChain
https://api.github.com/repos/langchain-ai/langchain/issues/11533/comments
2
2023-10-08T19:46:26Z
2024-02-12T16:12:03Z
https://github.com/langchain-ai/langchain/issues/11533
1,932,042,979
11,533
[ "hwchase17", "langchain" ]
### Feature request

Integrating Pandas DataFrame as an output parser in LangChain AI would offer users a specific and robust tool for data analysis and manipulation. This addition would enable users to receive AI-generated data in a structured tabular format, simplifying tasks like data cleaning, transformation, and visualization while streamlining the process of extracting insights.

### Motivation

Pandas is undeniably one of the most popular and powerful libraries in Python for data manipulation and analysis. Its widespread adoption in the data science and analytics community speaks to its versatility and efficiency. Pandas simplifies tasks such as data cleaning, transformation, and exploration with easy-to-use data structures and functions. Given its frequent use in various data-related tasks, integrating Pandas DataFrames as an output parser in LangChain AI would benefit users immensely. It would provide a familiar and reliable tool for processing and interpreting data, enhancing the utility and accessibility of data-driven workflows.

### Your contribution

We will be looking to submit a pull request by the end of November that will contain the required code additions along with the requirements as per CONTRIBUTING.MD (involving adding a demo notebook in docs/modules and adding unit tests).
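To make the request concrete, here is a rough sketch of what such a parser could look like on top of LangChain's existing `BaseOutputParser`. It is only an illustration of the idea, not the proposed implementation: it assumes the LLM is instructed to reply with a CSV table, and the class name `PandasDataFrameOutputParser` is hypothetical.

```python
import io

import pandas as pd
from langchain.schema import BaseOutputParser


class PandasDataFrameOutputParser(BaseOutputParser):
    """Illustrative parser that turns a CSV-formatted LLM reply into a DataFrame."""

    def get_format_instructions(self) -> str:
        return "Return the answer as CSV with a header row and no extra commentary."

    def parse(self, text: str) -> pd.DataFrame:
        # Strip an optional markdown code fence before handing the text to pandas.
        cleaned = text.strip().strip("`")
        if cleaned.startswith("csv"):
            cleaned = cleaned[len("csv"):]
        return pd.read_csv(io.StringIO(cleaned.strip()))
```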
Add support for a Pandas DataFrame OutputParser
https://api.github.com/repos/langchain-ai/langchain/issues/11532/comments
1
2023-10-08T19:22:47Z
2023-11-30T03:46:06Z
https://github.com/langchain-ai/langchain/issues/11532
1,932,028,190
11,532
[ "hwchase17", "langchain" ]
### System Info

latest

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

```python
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

template = """SYSTEM: Please give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.

Current conversation:
{history}

USER: {input}
ASSISTANT: """

PROMPT = PromptTemplate(template=template, input_variables=["history", "input"])

memory = ConversationBufferMemory(memory_key='history', ai_prefix="ASSISTANT:", return_messages=True)

llmChain = ConversationChain(llm=llm, prompt=PROMPT, verbose=True, memory=memory)

AI_response = llmChain.predict(input=prompt)
print(AI_response)
```

#### OUTPUT:

```
Entering new ConversationChain chain...
Prompt after formatting:
SYSTEM: Please give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.

Current conversation:
*** [HumanMessage(content='hello'), AIMessage(content=' Hello! How can I assist you today?')] ***

USER: who was president in 1920?
ASSISTANT:
```

The history is injected as a list of message objects instead of a formatted string.

### Expected behavior

https://python.langchain.com/docs/modules/memory/conversational_customization

```
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Friend: Hi there!
AI: Hi there! It's nice to meet you. How can I help you today?
Friend: What's the weather?
AI:

> Finished ConversationChain chain.
```
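For what it's worth, this looks like the documented behaviour of `return_messages=True`, which makes the memory return a list of `HumanMessage`/`AIMessage` objects (intended for chat-style prompts using `MessagesPlaceholder`). With a plain string `PromptTemplate` like the one above, leaving `return_messages` at its default of `False` should render the history as a formatted transcript instead. A minimal sketch; the `human_prefix`/`ai_prefix` values are just examples matching the template above:

```python
from langchain.memory import ConversationBufferMemory

# With return_messages=False (the default) the buffer is rendered as plain text,
# which is what a string PromptTemplate expects for {history}.
memory = ConversationBufferMemory(
    memory_key="history",
    human_prefix="USER",
    ai_prefix="ASSISTANT",
    return_messages=False,
)
```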
ConversationBufferMemory returning json not formatted when chain is run
https://api.github.com/repos/langchain-ai/langchain/issues/11531/comments
3
2023-10-08T16:41:12Z
2024-02-10T16:14:32Z
https://github.com/langchain-ai/langchain/issues/11531
1,931,904,754
11,531