status (stringclasses, 1 value) | repo_name (stringlengths 9-24) | repo_url (stringlengths 28-43) | issue_id (int64, 1-104k) | updated_files (stringlengths 8-1.76k) | title (stringlengths 4-369) | body (stringlengths 0-254k, nullable ⌀) | issue_url (stringlengths 37-56) | pull_url (stringlengths 37-54) | before_fix_sha (stringlengths 40) | after_fix_sha (stringlengths 40) | report_datetime (timestamp[ns, tz=UTC]) | language (stringclasses, 5 values) | commit_datetime (timestamp[us, tz=UTC])
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 12,039 | ["libs/langchain/langchain/agents/load_tools.py", "libs/langchain/langchain/tools/__init__.py", "libs/langchain/langchain/tools/merriam_webster/__init__.py", "libs/langchain/langchain/tools/merriam_webster/tool.py", "libs/langchain/langchain/utilities/__init__.py", "libs/langchain/langchain/utilities/merriam_webster.py", "libs/langchain/tests/integration_tests/utilities/test_merriam_webster_api.py", "libs/langchain/tests/unit_tests/tools/test_imports.py", "libs/langchain/tests/unit_tests/tools/test_public_api.py", "libs/langchain/tests/unit_tests/utilities/test_imports.py"] | Tools for Dictionary APIs | ### Feature request
It would be nice to have agents that could access dictionary APIs such as the Merriam-Webster API or Urban Dictionary API (for slang).
### Motivation
It can be useful to be able to look up definitions for words using a dictionary to provide additional context. Since no dictionary tools are currently available, it would be beneficial to have at least one implemented.
### Your contribution
We will open a PR that adds a new tool for accessing the Merriam-Webster Collegiate Dictionary API (https://dictionaryapi.com/products/api-collegiate-dictionary), which provides definitions for English words, as soon as possible. In the future this could be extended to support other Merriam-Webster APIs such as their Medical Dictionary API (https://dictionaryapi.com/products/api-medical-dictionary) or Spanish-English Dictionary API (https://dictionaryapi.com/products/api-spanish-dictionary).
We may also open another PR for Urban Dictionary API integration. | https://github.com/langchain-ai/langchain/issues/12039 | https://github.com/langchain-ai/langchain/pull/12044 | f3dd4a10cffd507a1300abf0f7729e95072f44eb | c2e3963da4b7c6650fc37acfa8ea39a355e7dae9 | 2023-10-19T18:31:45Z | python | 2023-11-30T01:28:29Z |
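For illustration, such a tool could be wired up with a plain `Tool` wrapper before a dedicated integration lands. This is only a sketch: the endpoint shape follows dictionaryapi.com's v3 Collegiate API, and the `shortdef` field and the API key value are assumptions about your setup.

```python
import requests
from langchain.agents import Tool

MW_KEY = "your-api-key"  # assumption: a dictionaryapi.com Collegiate API key

def lookup_definition(word: str) -> str:
    # v3 collegiate endpoint; keep only the short definitions of the first entry.
    url = f"https://dictionaryapi.com/api/v3/references/collegiate/json/{word}"
    entries = requests.get(url, params={"key": MW_KEY}, timeout=10).json()
    if entries and isinstance(entries[0], dict):
        return "; ".join(entries[0].get("shortdef", [])) or "No definition found."
    return "No definition found."  # the API returns suggestion strings on a miss

merriam_webster_tool = Tool(
    name="merriam_webster",
    func=lookup_definition,
    description="Look up the definition of an English word.",
)
```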
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,814 | ["docs/docs/integrations/retrievers/outline.ipynb", "libs/langchain/langchain/retrievers/__init__.py", "libs/langchain/langchain/retrievers/outline.py", "libs/langchain/langchain/utilities/__init__.py", "libs/langchain/langchain/utilities/outline.py", "libs/langchain/tests/integration_tests/utilities/test_outline.py", "libs/langchain/tests/unit_tests/retrievers/test_imports.py", "libs/langchain/tests/unit_tests/utilities/test_imports.py"] | Create retriever for Outline to ask questions on knowledge base | ### Feature request
A retriever for documents from [Outline](https://github.com/outline/outline).
The API has a search endpoint which allows this to be possible: https://www.getoutline.com/developers#tag/Documents/paths/~1documents.search/post
The implementation will be similar to the Wikipedia retriever:
https://python.langchain.com/docs/integrations/retrievers/wikipedia
### Motivation
Outline is an open source project that lets you create a knowledge base, like a wiki. Creating a retriever for Outline will let your team interact with your knowledge base using an LLM.
### Your contribution
PR will be coming soon. | https://github.com/langchain-ai/langchain/issues/11814 | https://github.com/langchain-ai/langchain/pull/13889 | f2af82058f4904b20ae95c6d17d2b65666bf882a | 935f78c9449c40473541666a8b0a0dc61873b0eb | 2023-10-15T01:58:24Z | python | 2023-11-27T02:56:12Z |
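As a rough sketch of what the retriever could look like (the field names such as `document["text"]` are assumptions based on the linked API docs, not the final implementation):

```python
from typing import List

import requests
from langchain.callbacks.manager import CallbackManagerForRetrieverRun
from langchain.schema import BaseRetriever, Document

class OutlineRetriever(BaseRetriever):
    """Sketch: query Outline's documents.search endpoint and wrap the hits."""

    base_url: str  # e.g. "https://app.getoutline.com"
    api_key: str

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        resp = requests.post(
            f"{self.base_url}/api/documents.search",
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"query": query},
            timeout=10,
        )
        resp.raise_for_status()
        return [
            Document(
                page_content=hit["document"]["text"],
                metadata={"title": hit["document"]["title"]},
            )
            for hit in resp.json()["data"]
        ]
```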
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,809 | ["libs/langchain/langchain/llms/huggingface_pipeline.py"] | AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized' | ### System Info
LangChain: langchain-0.0.314
Python: Anaconda Python 3.9.18
X86
RTX3080 Laptop (16G)
CUDA 11.8
cuDNN 8.9.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1.git clone https://github.com/ymcui/Chinese-LLaMA-Alpaca-2.git
2.cd Chinese-LLaMA-Alpaca-2/scripts/langchain
3.python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
(langchain) zhanghui@zhanghui-OMEN-by-HP-Laptop-17-ck0xxx:~/Chinese-LLaMA-Alpaca-2/scripts/langchain$ python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/__init__.py:39: UserWarning: Importing HuggingFacePipeline from langchain root module is no longer supported.
warnings.warn(
loading LLM...
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.86s/it]
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:362: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
warnings.warn(
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:367: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
warnings.warn(
Traceback (most recent call last):
File "/home/zhanghui/Chinese-LLaMA-Alpaca-2/scripts/langchain/langchain_sum.py", line 50, in <module>
model = HuggingFacePipeline.from_model_id(model_id=model_path,
File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/llms/huggingface_pipeline.py", line 112, in from_model_id
model.is_quantized
File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized'
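The failure is the unconditional attribute access `model.is_quantized` in `from_model_id`. A plausible defensive fix (an assumption; the actual patch may differ) is to fall back to `False` when the attribute is absent:

```python
# Sketch: getattr() returns False for models like LlamaForCausalLM that
# never set `is_quantized`, so loading proceeds instead of raising.
if getattr(model, "is_quantized", False):
    ...  # quantization-specific handling stays as before
```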
### Expected behavior

| https://github.com/langchain-ai/langchain/issues/11809 | https://github.com/langchain-ai/langchain/pull/11891 | efa9ef75c098e23f00f95be73c39ae66fdb1c082 | 5019f59724b2b6adf840b78019f2581546cb390d | 2023-10-14T13:46:33Z | python | 2023-10-16T23:54:20Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,737 | ["libs/experimental/langchain_experimental/agents/agent_toolkits/pandas/base.py"] | `extra_tools` argument in `create_pandas_dataframe_agent()` doesn't seem to be working | ### System Info
Platform: Windows Server 2022
Python: 3.11.6
Langchain version: 0.0.306
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
### Creating a test tool
```py
from langchain.agents import Tool
tools = [
Tool(
name="test_tool",
func=print,
description="This is a test tool"
)
]
tools
```
```
[Tool(name='test_tool', description='This is a test tool', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, tags=None, metadata=None, handle_tool_error=False, func=<built-in function print>, coroutine=None)]
```
### Setting up the pandas_dataframe_agent
```py
from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import HuggingFacePipeline
import pandas as pd
llm = HuggingFacePipeline.from_model_id(
model_id="google/flan-t5-small", task="text2text-generation", device=0
)
agent = create_pandas_dataframe_agent(llm, pd.DataFrame(), verbose=True, extra_tools=tools)
agent.tools
```
```
[PythonAstREPLTool(name='python_repl_ast', description='A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.', args_schema=<class 'langchain.tools.python.tool.PythonInputs'>, return_direct=False, verbose=False, callbacks=None, callback_manager=None, tags=None, metadata=None, handle_tool_error=False, globals={}, locals={'df': Empty DataFrame
Columns: []
Index: []}, sanitize_input=True),
Tool(name='test_tool', description='This is a test tool', args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, tags=None, metadata=None, handle_tool_error=False, func=<built-in function print>, coroutine=None)]
```
### Executing agent with debugging enabled
```py
import langchain
langchain.debug = True
agent.run('What is 2+2?')
```
```
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input:
{
"input": "What is 2+2?"
}
[chain/start] [1:chain:AgentExecutor > 2:chain:LLMChain] Entering Chain run with input:
{
"input": "What is 2+2?",
"agent_scratchpad": "",
"stop": [
"\nObservation:",
"\n\tObservation:"
]
}
[llm/start] [1:chain:AgentExecutor > 2:chain:LLMChain > 3:llm:HuggingFacePipeline] Entering LLM run with input:
{
"prompts": [
"You are working with a pandas dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:\n\npython_repl_ast: A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [python_repl_ast]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\n\nThis is the result of `print(df.head())`:\n\n\nBegin!\nQuestion: What is 2+2?"
]
}
```
### The prompt from the above log
```
You are working with a pandas dataframe in Python. The name of the dataframe is `df`.
You should use the tools below to answer the question posed of you:
python_repl_ast: A Python shell. Use this to execute python commands. Input should be a valid python command. When using this tool, sometimes output is abbreviated - make sure it does not look abbreviated before using it in your answer.
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [python_repl_ast]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
This is the result of `print(df.head())`:
Begin!
Question: What is 2+2?
```
### Expected behavior
Where did my custom tool `test_tool` disappear to? I expected it to show up after `python_repl_ast`.
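The prompt above only lists `python_repl_ast`, which suggests it is rendered from the base tools before `extra_tools` are appended. A plausible fix sketch (the real patch may differ) is to merge the tool lists before the prompt is constructed:

```python
# Sketch inside create_pandas_dataframe_agent: combine extra_tools with the
# base tools *before* building the prompt, so their names and descriptions
# are rendered into it.
tools = base_tools + list(extra_tools)
prompt = ZeroShotAgent.create_prompt(tools, prefix=prefix, suffix=suffix)
```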
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,627 | ["docs/docs/integrations/platforms/microsoft.mdx", "docs/docs/integrations/vectorstores/azure_cosmos_db.ipynb", "docs/docs/integrations/vectorstores/azuresearch.ipynb", "libs/langchain/langchain/vectorstores/__init__.py", "libs/langchain/langchain/vectorstores/azure_cosmos_db.py", "libs/langchain/tests/integration_tests/vectorstores/test_azure_cosmos_db.py"] | Add AzureCosmosDBVectorSearch VectorStore | ### Feature request
### Feature request
Azure Cosmos DB for MongoDB vCore enables users to efficiently store, index, and query high dimensional vector data stored directly in Azure Cosmos DB for MongoDB vCore. It contains similarity measures such as COS (cosine distance), L2 (Euclidean distance) or IP (inner product) which measures the distance between the data vectors and your query vector. The data vectors that are closest to your query vector are the ones that are found to be most similar semantically and retrieved during query time. The accompanying PR would add support for Langchain Python users to store vectors from document embeddings generated from APIs such as Azure OpenAI Embeddings or Hugging Face on Azure.
[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search)
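For reference, the three similarity measures named above can be computed on toy vectors like this (plain NumPy, not the Cosmos DB API):

```python
import numpy as np

a, b = np.array([1.0, 2.0, 3.0]), np.array([2.0, 1.0, 3.0])
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))  # COS: cosine similarity
l2 = np.linalg.norm(a - b)                             # L2: Euclidean distance
ip = a @ b                                             # IP: inner product
```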
### Motivation
This capability described in the feature request is currently not available for Langchain Python.
### Your contribution
I will be submitting a PR for this feature request. | https://github.com/langchain-ai/langchain/issues/11627 | https://github.com/langchain-ai/langchain/pull/11632 | 28ee6a7c125f1eb209b6b6428d1a50040408ea9f | d0603c86b6dc559799c64033d330075a8744435e | 2023-10-10T20:55:53Z | python | 2023-10-11T20:56:46Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,197 | ["libs/langchain/langchain/cache.py", "libs/langchain/langchain/vectorstores/redis/base.py"] | Documents not being correctly indexed in vector database. ["redis.exceptions.ResponseError: my_docs: no such index"] | ### System Info
Following the indexing steps from https://python.langchain.com/docs/modules/data_connection/indexing you'll find the following error: "redis.exceptions.ResponseError: my_docs: no such index".
You'll get this exception while using Redis as a retriever.
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

The error is here:

If you look for the index, you'll get an empty list or set.
This makes it impossible to save in the wanted format, and there's another issue: the index is not created for some reason.
I'll try to fix, but I'm not sure if it's possible for me at the moment, so I'm reporting this, I hope it helps.
### Expected behavior
Expected behavior inside Redis: "docs:indexname_:12ss2sadd" | https://github.com/langchain-ai/langchain/issues/11197 | https://github.com/langchain-ai/langchain/pull/11257 | 079d1f3b8e8cf7a4aaa60009fe4402169cd62d8a | d5c2ce7c2e1179907400f2c96fc6309a54cbce6a | 2023-09-28T19:57:36Z | python | 2023-10-24T17:51:25Z |
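A quick way to confirm the missing index from the Redis side (a diagnostic sketch using redis-py, independent of LangChain):

```python
import redis

r = redis.Redis(host="localhost", port=6379)
# FT._LIST enumerates RediSearch indexes; an empty list here matches the
# "no such index" error above.
print(r.execute_command("FT._LIST"))
```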
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,941 | ["libs/langchain/Makefile"] | Issue: `make coverage` doesn't work locally | ### Issue you'd like to raise.
When I set up the local environment and try to run `make coverage`, I get this error:
```bash
; make coverage
poetry run pytest --cov \
--cov-config=.coveragerc \
--cov-report xml \
--cov-report term-missing:skip-covered
================================================================================================================ test session starts ================================================================================================================
platform darwin -- Python 3.9.17, pytest-7.4.0, pluggy-1.2.0
rootdir: /Users/cjameson/workspace/cjcjameson/langchain/libs/langchain
configfile: pyproject.toml
plugins: asyncio-0.20.3, cov-4.1.0, vcr-1.0.2, syrupy-4.2.1, mock-3.11.1, anyio-3.7.1, dotenv-0.5.2, socket-0.6.0
asyncio: mode=strict
collected 2832 items / 1 error / 4 skipped
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/_pytest/config/__init__.py", line 1611, in getoption
INTERNALERROR> val = getattr(self.option, name)
INTERNALERROR> AttributeError: 'Namespace' object has no attribute 'only_extended'
INTERNALERROR>
INTERNALERROR> The above exception was the direct cause of the following exception:
INTERNALERROR>
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/_pytest/main.py", line 270, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/_pytest/main.py", line 323, in _main
INTERNALERROR> config.hook.pytest_collection(session=session)
INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_hooks.py", line 433, in __call__
INTERNALERROR> return self._hookexec(self.name, self._hookimpls, kwargs, firstresult)
INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_manager.py", line 112, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_callers.py", line 155, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_result.py", line 108, in get_result
INTERNALERROR> raise exc.with_traceback(exc.__traceback__)
INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_callers.py", line 80, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/_pytest/main.py", line 334, in pytest_collection
INTERNALERROR> session.perform_collect()
INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/_pytest/main.py", line 672, in perform_collect
INTERNALERROR> hook.pytest_collection_modifyitems(
INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_hooks.py", line 433, in __call__
INTERNALERROR> return self._hookexec(self.name, self._hookimpls, kwargs, firstresult)
INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_manager.py", line 112, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_callers.py", line 155, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_result.py", line 108, in get_result
INTERNALERROR> raise exc.with_traceback(exc.__traceback__)
INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/pluggy/_callers.py", line 80, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/libs/langchain/tests/unit_tests/conftest.py", line 43, in pytest_collection_modifyitems
INTERNALERROR> only_extended = config.getoption("--only-extended") or False
INTERNALERROR> File "/Users/cjameson/workspace/cjcjameson/langchain/venv/lib/python3.9/site-packages/_pytest/config/__init__.py", line 1622, in getoption
INTERNALERROR> raise ValueError(f"no option named {name!r}") from e
INTERNALERROR> ValueError: no option named 'only_extended'
====================================================================================================== 4 skipped, 1 warning, 1 error in 3.80s =======================================================================================================
make: *** [coverage] Error 3
```
### Suggestion:
It looks like the `pytest_addoption` in `tests/unit_tests/conftest.py` is not being found.
This stack-overflow attributes it to pytest not being able to find `conftest.py` files in nested directories. https://stackoverflow.com/a/31526934
The recommendations to create a plugin or move the conftest.py files don't seem palatable, but let me know if that's the right thing to do.
Given the re-organization into `libs/langchain`, that could have messed up pytest local development.
I'm curious if/how it works in CI ... | https://github.com/langchain-ai/langchain/issues/10941 | https://github.com/langchain-ai/langchain/pull/10974 | 040d436b3f0ba21028850de34dc7780cf4700e46 | 05d5fcfdf89abea0993998689fb8e9a8133b7276 | 2023-09-22T15:58:24Z | python | 2023-09-23T23:03:53Z |
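For context, pytest only honors `pytest_addoption` hooks defined in a rootdir-level `conftest.py` or in an installed plugin, which is why the hook in the nested `tests/unit_tests/conftest.py` is never loaded. A minimal illustration of where the hook has to live:

```python
# conftest.py at the pytest rootdir (not inside a nested test package).
def pytest_addoption(parser):
    parser.addoption(
        "--only-extended",
        action="store_true",
        help="Only run extended tests.",
    )
```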
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,912 | ["libs/langchain/langchain/embeddings/localai.py"] | LocalAI embeddings shouldn't require OpenAI | ### System Info
macOS Ventura 13.5.2, M1
### Who can help?
@mudler
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://github.com/langchain-ai/langchain/blob/v0.0.298/libs/langchain/langchain/embeddings/localai.py#L197
### Expected behavior
Why do LocalAI embeddings require OpenAI? LocalAI's embeddings have no need for OpenAI; it has a whole embeddings suite: https://localai.io/features/embeddings/
I think it should be directly usable with its [`/embeddings` endpoint](https://github.com/go-skynet/LocalAI/blob/v1.25.0/api/api.go#L190) | https://github.com/langchain-ai/langchain/issues/10912 | https://github.com/langchain-ai/langchain/pull/10946 | 2c114fcb5ecc0a9e75e8acb63d9dd5b4a6ced9a9 | b11f21c25fc6accca7a6f325c1fd3e63dd5f91ea | 2023-09-22T00:17:24Z | python | 2023-09-29T02:56:42Z |
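A sketch of calling that endpoint directly, without any OpenAI key (the model name is an assumption; use whatever embedding model your LocalAI instance serves):

```python
import requests

resp = requests.post(
    "http://localhost:8080/embeddings",
    json={"input": "A long time ago...", "model": "bert-embeddings"},
    timeout=30,
)
print(resp.json()["data"][0]["embedding"][:5])
```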
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,806 | ["libs/langchain/langchain/llms/openai.py"] | Error when using gpt-3.5-turbo-instruct: model_token_mapping is missing an entry for gpt-3.5-turbo-instruct | ### System Info
LangChain version: 0.0.295 (just upgraded to this version to use gpt-3.5-turbo-instruct)
### Who can help?
@hwchase17 @agola
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Minimal code to reproduce:
```python
# load OpenAI API Key
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.1, model_name="gpt-3.5-turbo-instruct", max_tokens=-1)
llm("give me a list of Chinese dishes and their recipes")
```
Error message:
>```ValueError: Unknown model: gpt-3.5-turbo-instruct. Please provide a valid OpenAI model name.Known models are: gpt-4, gpt-4-0314, gpt-4-0613, gpt-4-32k, gpt-4-32k-0314, gpt-4-32k-0613, gpt-3.5-turbo, gpt-3.5-turbo-0301, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k, gpt-3.5-turbo-16k-0613, text-ada-001, ada, text-babbage-001, babbage, text-curie-001, curie, davinci, text-davinci-003, text-davinci-002, code-davinci-002, code-davinci-001, code-cushman-002, code-cushman-001```
Cause of the error: looks like it's because the `model_token_mapping` is missing an entry for `gpt-3.5-turbo-instruct`: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/openai.py#L555
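A plausible one-line fix (a sketch; the exact context-window value should be checked against OpenAI's docs, which list roughly 4k tokens for this model):

```python
# Missing entry in model_token_mapping in langchain/llms/openai.py (sketch).
model_token_mapping = {
    # ... existing entries ...
    "gpt-3.5-turbo-instruct": 4096,
}
```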
### Expected behavior
The code succeeds without error | https://github.com/langchain-ai/langchain/issues/10806 | https://github.com/langchain-ai/langchain/pull/10808 | 5d0493f6521a9ab8459e7dcd92828a0353e7d706 | c15bbaac3186a41bb74b314e82eb0227fdc9e332 | 2023-09-19T23:26:18Z | python | 2023-09-20T00:03:16Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,742 | ["libs/langchain/langchain/tools/youtube/search.py"] | Update return parameter of YouTubeSearchTool | ### Feature request
Return the Youtube video links in full format like `https://www.youtube.com/watch?v=VIDEO_ID`
Currently the links are like `/watch?v=VIDEO_ID`
Return the links as a list, like `['link1', 'link2']`.
Currently it is returning the whole list as a string: `"['link1', 'link2']"`
### Motivation
If the links returned are exactly the same as direct links to YouTube, in a list rather than a string, I can avoid the hassle of processing them again to convert them to the required format.
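In the meantime, callers can convert the current output themselves (a small workaround sketch; the `raw` value mirrors the format described above):

```python
import ast

raw = "['/watch?v=abc123', '/watch?v=def456']"  # current tool output shape
links = ["https://www.youtube.com" + path for path in ast.literal_eval(raw)]
```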
### Your contribution
I will change the code a bit and pull it. | https://github.com/langchain-ai/langchain/issues/10742 | https://github.com/langchain-ai/langchain/pull/10743 | 1dae3c383ed17b0a2e4675accf396bc73834de75 | 740eafe41da7317f42387bdfe6d0f1f521f2cafd | 2023-09-18T17:47:53Z | python | 2023-09-20T00:04:06Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,674 | ["libs/langchain/langchain/document_loaders/hugging_face_dataset.py"] | HuggingFace Data Loader fails when context is not str | ### System Info
langchain 0.0.285
python 3.11.4
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Try to load https://huggingface.co/datasets/hotpot_qa/viewer/fullwiki/validation
```python
from langchain.document_loaders import HuggingFaceDatasetLoader

dataset_name = "hotpot_qa"
page_content_column = "context"
name = "fullwiki"

loader = HuggingFaceDatasetLoader(dataset_name, page_content_column, name)
docs = loader.load()
```
```
---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
/Users/deanchanter/Documents/GitHub/comma-chameleons/hello_doc_read.ipynb Cell 1 line 8
      4 name = "fullwiki"
      7 loader = HuggingFaceDatasetLoader(dataset_name, page_content_column, name)
----> 8 docs = loader.load()

File ~/Documents/GitHub/comma-chameleons/env/lib/python3.11/site-packages/langchain/document_loaders/hugging_face_dataset.py:87, in HuggingFaceDatasetLoader.load(self)
     85 def load(self) -> List[Document]:
     86     """Load documents."""
---> 87     return list(self.lazy_load())

File ~/Documents/GitHub/comma-chameleons/env/lib/python3.11/site-packages/langchain/document_loaders/hugging_face_dataset.py:76, in HuggingFaceDatasetLoader.lazy_load(self)
     59     raise ImportError(
     60         "Could not import datasets python package. "
     61         "Please install it with `pip install datasets`."
     62     )
     64 dataset = load_dataset(
     65     path=self.path,
     66     name=self.name,
    (...)
     73     num_proc=self.num_proc,
     74 )
---> 76 yield from (
     77     Document(
     78         page_content=row.pop(self.page_content_column),
     79         metadata=row,
     80     )
     81     for key in dataset.keys()
     82     for row in dataset[key]
     83 )

File ~/Documents/GitHub/comma-chameleons/env/lib/python3.11/site-packages/langchain/document_loaders/hugging_face_dataset.py:77, in <genexpr>(.0)
     59     raise ImportError(
     60         "Could not import datasets python package. "
     61         "Please install it with `pip install datasets`."
     62     )
     64 dataset = load_dataset(
     65     path=self.path,
     66     name=self.name,
    (...)
     73     num_proc=self.num_proc,
     74 )
     76 yield from (
---> 77     Document(
     78         page_content=row.pop(self.page_content_column),
     79         metadata=row,
     80     )
     81     for key in dataset.keys()
     82     for row in dataset[key]
     83 )

File ~/Documents/GitHub/comma-chameleons/env/lib/python3.11/site-packages/langchain/load/serializable.py:75, in Serializable.__init__(self, **kwargs)
     74 def __init__(self, **kwargs: Any) -> None:
---> 75     super().__init__(**kwargs)
     76     self._lc_kwargs = kwargs

File ~/Documents/GitHub/comma-chameleons/env/lib/python3.11/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for Document
page_content
  str type expected (type=type_error.str)
```
### Expected behavior
Either extend the class to handle more types of data or update the docs. Willing to do a PR to extend it if you're open.
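For what it's worth, hotpot_qa's `context` column is a dict, not a string, so one fix direction is to coerce non-string values before building the `Document` (a sketch; the eventual PR may differ):

```python
import json

def to_page_content(value):
    # Coerce dicts/lists (e.g. hotpot_qa's "context") to a JSON string so
    # Document(page_content=...) validates.
    return value if isinstance(value, str) else json.dumps(value)
```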
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,589 | ["docs/docs_skeleton/docs/integrations/document_transformers/docai.ipynb", "libs/langchain/langchain/document_loaders/parsers/docai.py"] | Add Google Cloud Document AI integration | ### Feature request
Add integration for [Document AI](https://cloud.google.com/document-ai/docs/overview) from Google Cloud for intelligent document processing.
### Motivation
This product offers Optical Character Recognition, specialized processors from specific document types, and built in Generative AI processing for Document Summarization and entity extraction.
### Your contribution
I can implement this myself; I mostly want to understand where and how this could fit into the library.
Should it be a document transformer? An LLM? An output parser? A Retriever? Document AI does all of these in some capacity.
Document AI is designed as a platform that non-ML engineers can use to extract information from documents, and I could see several features being useful to Langchain (Like Document OCR to extract text and fields before sending it to an LLM) or using the Document AI Processors with Generative AI directly for the summarization/q&a output. | https://github.com/langchain-ai/langchain/issues/10589 | https://github.com/langchain-ai/langchain/pull/11413 | 628cc4cce8b4e6068dacc92836cc8045b94afa37 | 09c66fe04fe20b39d307df0419d742a7a28bab98 | 2023-09-14T16:57:14Z | python | 2023-10-09T15:04:25Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,486 | ["libs/langchain/langchain/llms/gpt4all.py"] | Add device to GPT4All | ### Feature request
Hey guys!
Thanks for the great tool you've developed.
LLama now supports device and so is GPT4All:
https://docs.gpt4all.io/gpt4all_python.html#gpt4all.gpt4all.GPT4All.__init__
Can you guys please add the device property to the file: "langchain/llms/gpt4all.py"
LN 96:
```python
device: Optional[str] = Field("cpu", alias="device")
"""Device name: cpu, gpu, nvidia, intel, amd or DeviceName."""
```
Model Init:
```python
values["client"] = GPT4AllModel(
    model_name,
    model_path=model_path or None,
    model_type=values["backend"],
    allow_download=values["allow_download"],
    device=values["device"],
)
```
### Motivation
Necessity to use the device on GPU powered machines.
### Your contribution
None.. :( | https://github.com/langchain-ai/langchain/issues/10486 | https://github.com/langchain-ai/langchain/pull/11216 | 92683262f4a6c2db95c3aad40a6f6dfde2df43d1 | c6d7124675902e3a2628559d8a2b22c30747f75d | 2023-09-12T09:02:19Z | python | 2023-10-04T00:37:30Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,378 | ["libs/langchain/langchain/prompts/chat.py"] | DOC: Incorrect and confusing documentation of AIMessagePromptTemplate and HumanMessagePromptTemplate | ### Issue with current documentation:
[AIMessagePromptTemplate documentation](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.AIMessagePromptTemplate.html#langchain-prompts-chat-aimessageprompttemplate) incorrectly and confusingly describes the message as "... This is a message that is not sent to the user."
[HumanMessagePromptTemplate documentation](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.HumanMessagePromptTemplate.html#langchain-prompts-chat-humanmessageprompttemplate) incorrectly and confusingly describes the message as "... This is a message that is sent to the user."
Compare to the documentation for [AIMessage](https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.AIMessage.html#langchain-schema-messages-aimessage) and [HumanMessage](https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.HumanMessage.html#langchain-schema-messages-humanmessage), which correctly and clearly describe each message as "A message from an AI" and "A message from a human." respectively.
### Idea or request for content:
AIMessagePromptTemplate should be described as "AI message prompt template. This is a message that is sent to the user from the AI."
HumanMessagePromptTemplate should be described as "Human message prompt template. This is a message that is sent from the user to the AI."
These are clear, concise and consistent with documentation of the message schema.
I will submit a PR with revised docstrings for each class. This should, then, be reflected in the API reference documentation upon next build. | https://github.com/langchain-ai/langchain/issues/10378 | https://github.com/langchain-ai/langchain/pull/10379 | 8c0f391815eac61f2b5d1b993e9bc4795808696f | c902a1545bfbc3015defcd1c3ee435d38db4ee34 | 2023-09-08T16:43:51Z | python | 2023-09-08T22:53:08Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,186 | ["libs/langchain/langchain/vectorstores/redis/__init__.py"] | Issue: RedisVectorStoreRetriever not accessible | ### Issue you'd like to raise.
After PR [#8612](https://github.com/langchain-ai/langchain/pull/8612), access to [RedisVectorStoreRetriever](https://github.com/langchain-ai/langchain/blob/27944cb611ee8face34fbe764c83e37841f96eb7/libs/langchain/langchain/vectorstores/redis/base.py#L1293) has been removed
### Suggestion:
Include **RedisVectorStoreRetriever** import in [redis/__init__.py](https://github.com/langchain-ai/langchain/blob/27944cb611ee8face34fbe764c83e37841f96eb7/libs/langchain/langchain/vectorstores/redis/__init__.py) on line 1
current: `from .base import Redis`
suggestion update: `from .base import Redis, RedisVectorStoreRetriever`
| https://github.com/langchain-ai/langchain/issues/10186 | https://github.com/langchain-ai/langchain/pull/10414 | d09ef9eb52466f991fc155567f234e5351f20d06 | 65e1606daa696e2190fcb410f190c6811f9f8dc3 | 2023-09-04T14:21:34Z | python | 2023-09-10T00:46:34Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,080 | ["libs/langchain/langchain/tools/base.py"] | StructuredTool ainvoke isn't await parent class ainvoke | [code pointer](https://github.com/langchain-ai/langchain/blob/74fcfed4e2bdd186c2869a07008175a9b66b1ed4/libs/langchain/langchain/tools/base.py#L588C16-L588C16)
In `langchain.tools.base`, change
```python
class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return super().ainvoke(input, config, **kwargs)
```
to
```python
class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return await super().ainvoke(input, config, **kwargs)
``` | https://github.com/langchain-ai/langchain/issues/10080 | https://github.com/langchain-ai/langchain/pull/10300 | fdba711d28375e86b23cfbad10a17feb67276ef5 | 28de8d132c8c4f7ecfe246c61375d91a04ff0abf | 2023-09-01T07:36:50Z | python | 2023-09-08T02:54:53Z |
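The bug pattern in isolation (a minimal illustration): returning a coroutine without awaiting it hands the caller an un-run coroutine object instead of the result.

```python
import asyncio

async def inner() -> int:
    return 4

async def buggy():
    return inner()        # returns a coroutine object; it never runs

async def fixed():
    return await inner()  # returns 4

print(asyncio.run(fixed()))  # 4
```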
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,019 | ["libs/langchain/langchain/document_loaders/blob_loaders/file_system.py"] | fix: Loading documents from a Youtube Url | ### System Info
MacOS M2 13.4.1 (22F82)
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce behaviour:
1. Run the [tutorial](https://python.langchain.com/docs/integrations/document_loaders/youtube_audio) with the default parameters `save_dir = "~/Downloads/YouTube"`
2. After calling `docs = loader.load()` the docs will be empty
I have implemented a quick fix for the interim.
The error is in the `YoutubeAudioLoader.yield_blobs` method (`from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader`), at this line:
```
# This doesn't always work (MacOS)
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
```
The reason it doesn't work is that it's trying to use ~/Downloads/YouTube.
The fix I propose is either:
- Use the FULL file path in `save_dir` in the tutorial.
- Replace the problematic line with this, so that it finds the actual directory, even if you prefer to use `~` for specifying file paths.
```
loader = FileSystemBlobLoader(os.path.expanduser(self.save_dir), glob="*.m4a")
```
### Expected behavior
There should be documents in the loader.load() variable.
### My Fix
```
# Yield the written blobs
"""
you could fix save_dir like this...
(old)
save_dir = "~/Downloads/YouTube"
(new)
"/Users/shawnesquivel/Downloads/YouTube"
"""
# This doesn't always work (MacOS)
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
# This works
loader = FileSystemBlobLoader(os.path.expanduser(self.save_dir), glob="*.m4a")
```
| https://github.com/langchain-ai/langchain/issues/10019 | https://github.com/langchain-ai/langchain/pull/10133 | 31bbe807583b4a53c9fd2fa98d8b4d1fe185ba40 | e0f6ba08d6ad86226552d906e397a6a21f1904d0 | 2023-08-31T03:19:25Z | python | 2023-09-04T07:21:33Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | ["libs/langchain/langchain/vectorstores/pinecone.py", "libs/langchain/tests/integration_tests/vectorstores/test_pinecone.py", "pyproject.toml"] | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage of Pinecone's parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control of three things:
1. The thread pool for the Pinecone index
2. The batch size for embeddings (i.e., it helps to avoid rate limits for OpenAI embeddings)
3. The batch size for upserts (it helps to avoid throttling of the Pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
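For reference, the parallel-upsert pattern from the linked Pinecone docs looks roughly like this (a sketch with dummy vectors; pool size, batch size, and dimensionality are illustrative):

```python
import pinecone

pinecone.init(api_key="YOUR_KEY", environment="YOUR_ENV")
index = pinecone.Index("example-index", pool_threads=30)

vectors = [(f"id-{i}", [0.1] * 1536) for i in range(1000)]
chunks = [vectors[i : i + 100] for i in range(0, len(vectors), 100)]

# Fire all batches concurrently, then block until every request finishes.
async_results = [index.upsert(vectors=chunk, async_req=True) for chunk in chunks]
[result.get() for result in async_results]
```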
### Motivation
The functions `from_texts` and `add_texts` for index upsert don't take advantage of parallelism, especially when embeddings are calculated via HTTP calls (i.e., OpenAI embeddings). This makes the whole sequence inefficient from an IO-bound standpoint, as the pipeline is the following:
1. Take a small batch (e.g., 32/64) of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | 2023-08-28T13:09:29Z | python | 2023-09-03T22:37:41Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,696 | ["libs/langchain/langchain/vectorstores/pgvector.py"] | No way to Close an open connection in PGVector.py | ### Feature request
Initialization with Database Connection: When an instance of the PGVector class is created, it automatically establishes a connection with the PostgreSQL vector database.
Method for Closing Connection: we need to implement a method within the PGVector class that allows you to close the established connection with the PostgreSQL database.
```python
def __del__(self):
    # Close the session (and thus the connection) when the instance is destroyed.
    self.session.close()
```
### Motivation
The problem is that I am unable to close a connection, so the pool gets overloaded with multiple connections and the service starts throwing errors.
### Your contribution
I guess, maybe.
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,046 | ["libs/langchain/langchain/utilities/arxiv.py", "libs/langchain/tests/integration_tests/document_loaders/test_arxiv.py", "libs/langchain/tests/integration_tests/utilities/test_arxiv.py"] | ArxivLoader incorrect results | ### System Info
Latest pip versions
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried searching by exact title in the following way:
```python
docs = ArxivLoader(query="MetaGPT: Meta Programming for Multi-Agent Collaborative Framework", load_max_docs=1).load()
```
But the result is incorrect. The search works properly on the arxiv website.
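One workaround worth trying (a sketch; `ti:` fielded search is part of arXiv's query syntax, though whether ArxivLoader passes it through unchanged is an assumption):

```python
from langchain.document_loaders import ArxivLoader

docs = ArxivLoader(
    query='ti:"MetaGPT: Meta Programming for Multi-Agent Collaborative Framework"',
    load_max_docs=1,
).load()
```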
### Expected behavior
Correct paper returned | https://github.com/langchain-ai/langchain/issues/9046 | https://github.com/langchain-ai/langchain/pull/9061 | e94a5d753fe01aff1fa1592cd59d37fa64ef24a2 | fcbbddedaed196b0aa0377ca8c78b3410f62420f | 2023-08-10T15:18:24Z | python | 2023-08-10T18:59:39Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,907 | ["libs/langchain/langchain/graphs/rdf_graph.py"] | RdfGraph schema retrieval queries for the relation types are not linked by the correct comment variable | ### System Info
langchain = 0.0.251
Python = 3.10.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create an OWL ontology called `dbpedia_sample.ttl` with the following:
``` turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix wikidata: <http://www.wikidata.org/entity/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
# dul: prefix added so the sample parses (it is referenced by :birthPlace below)
@prefix dul: <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .
@prefix : <http://dbpedia.org/ontology/> .
:Actor
a owl:Class ;
rdfs:comment "An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity."@en ;
rdfs:label "actor"@en ;
rdfs:subClassOf :Artist ;
owl:equivalentClass wikidata:Q33999 ;
prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:Actor> .
:AdministrativeRegion
a owl:Class ;
rdfs:comment "A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)"@en ;
rdfs:label "administrative region"@en ;
rdfs:subClassOf :Region ;
owl:equivalentClass <http://schema.org/AdministrativeArea>, wikidata:Q3455524 ;
prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:AdministrativeRegion> .
:birthPlace
a rdf:Property, owl:ObjectProperty ;
rdfs:comment "where the person was born"@en ;
rdfs:domain :Animal ;
rdfs:label "birth place"@en ;
rdfs:range :Place ;
rdfs:subPropertyOf dul:hasLocation ;
owl:equivalentProperty <http://schema.org/birthPlace>, wikidata:P19 ;
prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyProperty:birthPlace> .
```
2. Run
``` python
from langchain.graphs import RdfGraph
graph = RdfGraph(
source_file="dbpedia_sample.ttl",
serialization="ttl",
standard="owl"
)
print(graph.get_schema)
```
3. Output
```
In the following, each IRI is followed by the local name and optionally its description in parentheses.
The OWL graph supports the following node types:
<http://dbpedia.org/ontology/Actor> (Actor, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.),
<http://dbpedia.org/ontology/AdministrativeRegion> (AdministrativeRegion, A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration))
The OWL graph supports the following object properties, i.e., relationships between objects:
<http://dbpedia.org/ontology/birthPlace> (birthPlace, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.),
<http://dbpedia.org/ontology/birthPlace> (birthPlace, A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)), <http://dbpedia.org/ontology/birthPlace> (birthPlace, where the person was born)
The OWL graph supports the following data properties, i.e., relationships between objects and literals:
```
### Expected behavior
The issue is that in the SPARQL queries retrieving the properties, the `rdfs:comment` triple pattern always refers to the variable `?cls`, which obviously comes from copy/pasted code.
For example, getting the RDFS properties via
``` python
rel_query_rdf = prefixes["rdfs"] + (
"""SELECT DISTINCT ?rel ?com\n"""
"""WHERE { \n"""
""" ?subj ?rel ?obj . \n"""
""" OPTIONAL { ?cls rdfs:comment ?com } \n"""
"""}"""
)
```
you can see that the `OPTIONAL` clause refers to `?cls`, but it should be `?rel`.
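The corrected pattern would be (same query, with only the variable in the `OPTIONAL` clause fixed):

```python
rel_query_rdf = prefixes["rdfs"] + (
    """SELECT DISTINCT ?rel ?com\n"""
    """WHERE { \n"""
    """  ?subj ?rel ?obj . \n"""
    """  OPTIONAL { ?rel rdfs:comment ?com } \n"""
    """}"""
)
```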
The same holds for all other queries regarding properties.
The current status leads to a cartesian product of properties and all `rdfs:comment` vlaues in the dataset, which can be horribly large and of course leads to misleading and huge prompts (see the output of my sample in the "reproduction" part) | https://github.com/langchain-ai/langchain/issues/8907 | https://github.com/langchain-ai/langchain/pull/9136 | d9f1bcf366b5a66021d246d8e9c56e76fe60ead1 | cce132d1460b4f52541cb4a6f13219fb8fe4f907 | 2023-08-08T10:57:54Z | python | 2023-10-25T20:36:57Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,842 | ["libs/langchain/langchain/utilities/requests.py"] | TypeError Due to Duplicate 'auth' Argument in aiohttp Request when provide header to APIChain | ### System Info
Langchain version: 0.0.253
Python:3.11
### Who can help?
@agola11 @hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
1. Environment Setup:
Ensure you're using Python 3.11.
Install the necessary libraries and dependencies:
```bash
pip install fastapi uvicorn aiohttp langchain
```
2. APIChain Initialization:
Set up the APIChain utility using the provided API documentation and the chosen language model:
```python
from langchain import APIChain
chain = APIChain.from_llm_and_api_docs(api_docs=openapi.MY_API_DOCS, llm=choosen_llm, verbose=True, headers=headers)
```
3. Run the FastAPI application:
Use a tool like Uvicorn to start your FastAPI app:
```bash
uvicorn your_app_name:app --reload
```
4. Trigger the API Endpoint:
Make a request to the FastAPI endpoint that uses the APIChain utility. This could be through tools like curl, Postman, or directly from a browser, depending on how your API is set up.
Execute the Callback:
Inside the relevant endpoint, ensure you have the following snippet:
```python
with get_openai_callback() as cb:
response = await chain.arun(user_query)
```
6. Observe the Error:
You should encounter a TypeError indicating a conflict with the auth argument in the aiohttp.client.ClientSession.request() method. This happens because a header was provided to APIChain and it was run with the `arun` method.
### Expected behavior
Request Execution:
The chain.arun(user_query) method should interact with the intended external service or API without any issues.
The auth parameter, when used in the underlying request to the external service (in aiohttp), should be correctly applied without conflicts or multiple definitions. | https://github.com/langchain-ai/langchain/issues/8842 | https://github.com/langchain-ai/langchain/pull/11010 | 88a02076affa2accd0465ee5ea9848b68d0e812b | 956ee981c03874d6e413a51eed9f7b437e52f07c | 2023-08-06T23:55:31Z | python | 2023-09-25T14:45:04Z |
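The failure mode in isolation (a generic illustration of the duplicate-keyword problem, not LangChain's actual code):

```python
def request(method, url, auth=None, **kwargs):
    pass

kwargs = {"auth": ("user", "pass")}
# TypeError: request() got multiple values for keyword argument 'auth'
request("GET", "http://example.com", auth=None, **kwargs)
```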
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,786 | ["libs/langchain/langchain/chains/question_answering/__init__.py"] | RetrievalQA.from_chain_type: callbacks are not called for all nested chains | ### System Info
langchain: 0.0.252
python: 3.10.12
@agola11
### Who can help?
@agola11 please take a look,
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a callback handler LogHandler for on_chain_start, on_chain_start, on_chat_model_start and log run_id, parent_run_id in each of them
2. Create a retrival chain and add this LogHandler
3. Add this LogHandler to llm as well
4. When running the chain, one of nested chain is not logged in between, because callbacks are not passed to that chain
### Expected behavior
All the nested chains should have callbacks defined.
| https://github.com/langchain-ai/langchain/issues/8786 | https://github.com/langchain-ai/langchain/pull/8787 | 5f1aab548731b53ebab00dd745a35ec7da52bf1c | 797c9e92c82f8e843b321ec2167bb1678ced03cf | 2023-08-05T06:43:10Z | python | 2023-08-06T22:11:45Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,729 | ["docs/extras/integrations/llms/vllm.ipynb", "libs/langchain/langchain/llms/__init__.py", "libs/langchain/langchain/llms/vllm.py"] | VLLM | ### Feature request
can we please get vllm support for faster inference
### Motivation
faster inference speed compared to using hugging face pipeline
### Your contribution
n/a | https://github.com/langchain-ai/langchain/issues/8729 | https://github.com/langchain-ai/langchain/pull/8806 | 100d9ce4c7b55db0c9df973a26bbc18d5ad5800c | a616e19975796ff6e3cde24597ba90eee714d57a | 2023-08-04T00:45:38Z | python | 2023-08-07T14:32:02Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | ["libs/langchain/langchain/callbacks/base.py", "libs/langchain/tests/unit_tests/callbacks/test_openai_info.py"] | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
    with get_openai_callback() as cb:
        llm_chain = LLMChain(llm=llm, prompt=prompt_main)
        all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
        threshold = (llm.get_num_tokens(text=all_text) + 800)
        dataframe_copy.loc[index, "Total Tokens"] = threshold
        if int(threshold) <= 4000:
            chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
            chatgpt_output = text_post_processing(chatgpt_output)
            dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
            dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
        else:
            dataframe_copy.loc[index, "ChatGPT Output"] = " "
            dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
    dataframe_copy.loc[index, "ChatGPT Output"] = " "
    dataframe_copy.loc[index, "Cost (USD)"] = " "
    continue  # this block runs inside a loop over dataframe_copy rows
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and indeed the "on_retry" method isn't listed there. So the team needs to modify the core code, because the retry machinery calls "on_retry" on OpenAICallbackHandler even though the class never defines it. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | 2023-07-31T21:01:43Z | python | 2023-08-14T23:45:17Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,477 | ["docs/docs_skeleton/vercel.json"] | DOC: Broken Links in Prompts Sub Categories Pages | ### Issue with current documentation:
The INFO: Python Guide links in both https://docs.langchain.com/docs/components/prompts/prompt-template and https://docs.langchain.com/docs/components/prompts/example-selectors are broken (similar to #8105).
### Idea or request for content:
The pages have simply been moved from https://python.langchain.com/docs/modules/prompts/ to https://python.langchain.com/docs/modules/model_io/prompts/, so setting up the corresponding redirects should fix it (a sketch follows).
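Assuming the docs site routes redirects through a `vercel.json`, entries along these lines might do it (the paths here are illustrative):
```json
{
  "redirects": [
    {
      "source": "/docs/modules/prompts/:path*",
      "destination": "/docs/modules/model_io/prompts/:path*"
    }
  ]
}
```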
I can open up a PR with the corresponding redirects myself | https://github.com/langchain-ai/langchain/issues/8477 | https://github.com/langchain-ai/langchain/pull/8478 | 08f5e6b8012f5eda2609103f33676199a3781a15 | 04ebdbe98f99624aa2adc42c9f622a9668967878 | 2023-07-30T04:41:57Z | python | 2023-07-31T02:38:52Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,472 | ["libs/langchain/langchain/schema/messages.py"] | unsupported operand type(s) for +: 'SystemMessage' and 'HumanMessage' | ### System Info
Langchain version: 0.0.247
python version: 3.11.0
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
You can reproduce this issue by following this link:
https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompts_pipelining
```
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.schema import HumanMessage, AIMessage, SystemMessage
prompt = SystemMessage(content="You are a nice pirate")
new_prompt = (
    prompt
    + HumanMessage(content="hi")
    + AIMessage(content="what?")
    + "{input}"
)
```
`prompt + HumanMessage(content="hi")` is the expression that triggers this issue.
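For reference, the linked docs page continues the pipeline roughly like this, which is what should work once the operator is supported (sketch):
```python
new_prompt.format_messages(input="i said hi")
```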
### Expected behavior
The `+` operand for 'SystemMessage' and 'HumanMessage' should be supported. | https://github.com/langchain-ai/langchain/issues/8472 | https://github.com/langchain-ai/langchain/pull/8489 | f31047a3941cd389a9b8c01446b097e3bfbb1235 | 1ec0b1837971bc58c54645c4ca515dc201788a82 | 2023-07-30T02:14:01Z | python | 2023-08-02T14:51:44Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,457 | ["docs/extras/integrations/vectorstores/qdrant.ipynb"] | VectorStore.from_documents() takes 3 positional arguments but 4 were given | ### System Info
... % python --version
Python 3.11.4
... % pip show langchain | grep Version
Version: 0.0.247
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When following the langchain docs [here](https://python.langchain.com/docs/integrations/vectorstores/qdrant#qdrant-cloud), there will be an error thrown:
```py
qdrant = Qdrant.from_documents(
    docs,
    embeddings,
    url,
    prefer_grpc=True,
    api_key=api_key,
    collection_name="test",
)
```
error:
```
Traceback (most recent call last):
File "...myscript.py", line 29, in <module>
qdrant = Qdrant.from_documents(
^^^^^^^^^^^^^^^^^^^^^^
TypeError: VectorStore.from_documents() takes 3 positional arguments but 4 were given
```
Is it related to https://github.com/langchain-ai/langchain/pull/7910 ?
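As a workaround, passing `url` as a keyword argument sidesteps the positional-argument check, since the base signature only takes `documents` and `embedding` positionally:
```python
qdrant = Qdrant.from_documents(
    docs,
    embeddings,
    url=url,
    prefer_grpc=True,
    api_key=api_key,
    collection_name="test",
)
```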
### Expected behavior
QDrant being initialized properly. | https://github.com/langchain-ai/langchain/issues/8457 | https://github.com/langchain-ai/langchain/pull/8482 | 4923cf029a36504a00368abe6b9c8b77e46aa740 | 08f5e6b8012f5eda2609103f33676199a3781a15 | 2023-07-29T10:53:33Z | python | 2023-07-30T20:24:44Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,378 | ["libs/langchain/langchain/vectorstores/matching_engine.py"] | GCP Matching Engine support for public index endpoints | ### System Info
langchain==0.0.244
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a new Matching Engine Index Endpoint that is public.
Follow the tutorial to make a similarity search:
```
vector_store = MatchingEngine.from_components(
    project_id="",
    region="us-central1",
    gcs_bucket_name="",
    index_id="",
    endpoint_id="",
    embedding=embeddings,
)

vector_store.similarity_search("what is a cat?", k=5)
```
Error:
```
File ~/code/gcp-langchain-retrieval-augmentation/embeddings/.venv/lib/python3.9/site-packages/grpc/_channel.py:1030, in _UnaryUnaryMultiCallable.__call__(self, request, timeout, metadata, credentials, wait_for_ready, compression)
1021 def __call__(self,
1022 request: Any,
1023 timeout: Optional[float] = None,
(...)
1026 wait_for_ready: Optional[bool] = None,
1027 compression: Optional[grpc.Compression] = None) -> Any:
1028 state, call, = self._blocking(request, timeout, metadata, credentials,
1029 wait_for_ready, compression)
-> 1030 return _end_unary_response_blocking(state, call, False, None)
File ~/code/gcp-langchain-retrieval-augmentation/embeddings/.venv/lib/python3.9/site-packages/grpc/_channel.py:910, in _end_unary_response_blocking(state, call, with_call, deadline)
908 return state.response
909 else:
--> 910 raise _InactiveRpcError(state)
_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "DNS resolution failed for :10000: unparseable host:port"
debug_error_string = "UNKNOWN:DNS resolution failed for :10000: unparseable host:port {created_time:"2023-07-27T20:12:23.727315699+00:00", grpc_status:14}"
>
```
### Expected behavior
It should be possible to do this. The VertexAI Python SDK supports it with the `endpoint.find_neighbors` function.
I think just changing [the wrapper](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/matching_engine.py#L178) from `.match` to `.find_neighbors` for when the endpoint is public should do it.
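A rough sketch of what that change could look like (argument names follow the Vertex AI SDK; the surrounding wrapper code is assumed):
```python
# Inside MatchingEngine.similarity_search, for a public endpoint:
response = self.endpoint.find_neighbors(
    deployed_index_id=self.endpoint.deployed_indexes[0].id,
    queries=[embedding],
    num_neighbors=k,
)
```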
| https://github.com/langchain-ai/langchain/issues/8378 | https://github.com/langchain-ai/langchain/pull/10056 | 4f19ba306597eb753ea397d4b646dc75c2668cbe | 21b236e5e4fc5c6e22bab61967b6e56895c4fa15 | 2023-07-27T20:14:21Z | python | 2023-09-19T23:16:04Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,307 | ["libs/langchain/langchain/utilities/__init__.py", "libs/langchain/langchain/utilities/apify.py"] | ImportError: cannot import name 'ApifyWrapper' from 'langchain.utilities' | ### System Info
Hi All,
I tried to run the Apify tutorial and ran into `ImportError: cannot import name 'ApifyWrapper' from 'langchain.utilities'`. I checked the utilities library under utilities/__init__.py and couldn't find ApifyWrapper anywhere under the generic integrations with third-party systems and packages.
Any thoughts or support?
### Who can help?
@hwchase17, @agola
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os

# Imports added for a self-contained repro; the ApifyWrapper import is the one that fails.
import openai
from langchain.document_loaders.base import Document
from langchain.indexes import VectorstoreIndexCreator
from langchain.utilities import ApifyWrapper  # ImportError raised here

openai.api_key = os.environ["OPEN_API_KEY"]
os.environ["APIFY_API_TOKEN"] = "apify_api_qNa00bcYGUYFwIZltWiOuhskmer7E61VE6GN"

apify = ApifyWrapper()
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",
    run_input={"startUrls": [{"url": "https://python.langchain.com/en/latest/"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "", metadata={"source": item["url"]}
    ),
)
index = VectorstoreIndexCreator().from_loaders([loader])

query = "What is LangChain?"
result = index.query_with_sources(query)

print(result["answer"])
print(result["sources"])
```
### Expected behavior
LangChain is a standard interface through which you can interact with a variety of large language models (LLMs). It provides modules that can be used to build language model applications, and it also provides chains and agents with memory capabilities.
https://python.langchain.com/en/latest/modules/models/llms.html, https://python.langchain.com/en/latest/getting_started/getting_started.html | https://github.com/langchain-ai/langchain/issues/8307 | https://github.com/langchain-ai/langchain/pull/10067 | 02e51f4217207eed4fc9ac89735cf1f660be3f10 | 86646ec555970e01130994dc75f3a0c5d4e52de9 | 2023-07-26T18:18:22Z | python | 2023-08-31T22:47:44Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,272 | ["libs/langchain/langchain/evaluation/comparison/eval_chain.py", "libs/langchain/langchain/evaluation/criteria/eval_chain.py"] | not enough values to unpack (expected 2, got 1) while LabeledPairwiseStringEvalChain with evaluate_string_pairs | ### System Info
platform = mac m2
python = 3.11
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
prompt_template = PromptTemplate.from_template(
"""Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.
Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input
The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}
After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].
---
Reasoning:
"""
)
evalutionChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)

result = evalutionChain.evaluate_string_pairs(
    input=self.currentQuery,
    prediction=response1,
    prediction_b=response2,
    reference=self.formatSourcesStructure(sourcedocs),
)
```
Sometimes it gives an error like:
```
not enough values to unpack (expected 2, got 1)
```
It happens roughly every 3-4 requests: one request fails with this error, and the next request then returns a valid response.
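For what it's worth, the exact message reproduces whenever a parser splits the completion into fewer pieces than it unpacks; the chain's real split logic may differ, but the failure shape looks like this:
```python
text = "Both options look equally good."  # completion missing the final [[A]]/[[B]] line
reasoning, verdict = text.rsplit("\n", 1)
# ValueError: not enough values to unpack (expected 2, got 1)
```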
### Expected behavior
There will be no error, and should return valid response | https://github.com/langchain-ai/langchain/issues/8272 | https://github.com/langchain-ai/langchain/pull/8278 | 9cbefcc56cbce50e1f6d9392c17e15415d55b7ba | adf019724f095b1835040f4dd8c1ff0026cbc729 | 2023-07-26T07:20:57Z | python | 2023-07-26T08:53:22Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,207 | ["libs/langchain/langchain/vectorstores/pinecone.py"] | Issue: Pinecone retriever with Cosine Similarity is treated like Cosine Distance | ### Issue you'd like to raise.
When using document search from the existing Pinecone index that was created using Cosine **Similarity** in the `ConversationalRetrievalChain`, the `score_theshold` would eliminate most relevant documents instead of least relevant ones because the _similarity_ metric will be converted to _distance_.
In [_select_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L172) it calls the [_cosine_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L169) - which converts the similarity returned from Pinecone search to distance.
Then, [filtering the documents](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L266) based on the `score_threshold` eliminates the most relevant documents instead of least relevant ones.
### Suggestion:
Pinecone subclass should override the `_cosine_relevance_score_fn` to preserve the similarity, since it is what originally comes back from the Pinecone similarity search. | https://github.com/langchain-ai/langchain/issues/8207 | https://github.com/langchain-ai/langchain/pull/8920 | 2e42ed5de68d27fe0ce676aae0cdaae778fcf16c | ff19a62afc2c8d6d9e705bd0af5ffad426263f49 | 2023-07-24T22:23:33Z | python | 2023-11-13T19:47:38Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,201 | ["libs/langchain/langchain/utilities/__init__.py", "libs/langchain/langchain/utilities/apify.py"] | DOC: Apify integration missing | ### Issue with current documentation:
The Apify integration has been deleted by @hwchase17 in commit aa0e69bc98fa9c77b01e5104f12b2b779f64fd33, so this documentation is no longer valid:
https://python.langchain.com/docs/integrations/tools/apify
### Idea or request for content:
It would be highly beneficial to have information on a suitable replacement for the Apify integration. | https://github.com/langchain-ai/langchain/issues/8201 | https://github.com/langchain-ai/langchain/pull/10067 | 02e51f4217207eed4fc9ac89735cf1f660be3f10 | 86646ec555970e01130994dc75f3a0c5d4e52de9 | 2023-07-24T19:46:13Z | python | 2023-08-31T22:47:44Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,989 | ["libs/langchain/langchain/output_parsers/fix.py", "libs/langchain/langchain/output_parsers/retry.py"] | OutputFixingParser is not async | ### System Info
LangChain Python v0.0.237
Based on this code snippet it appears that OutputFixingParser doesn't support async flows.
https://github.com/hwchase17/langchain/blob/df84e1bb64d96377f909651f696f310c43c2f2c5/langchain/output_parsers/fix.py#L46-L52
It calls the synchronous `run` method instead of `arun`; a sketch of an async variant is below.
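A method sketch for OutputFixingParser, with field and argument names assumed from the sync implementation linked above:
```python
from langchain.schema import OutputParserException

# Sketch: an aparse counterpart on OutputFixingParser (fields assumed).
async def aparse(self, completion: str):
    try:
        return self.parser.parse(completion)
    except OutputParserException as e:
        new_completion = await self.retry_chain.arun(
            instructions=self.parser.get_format_instructions(),
            completion=completion,
            error=repr(e),
        )
        return self.parser.parse(new_completion)
```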
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
1. Define async callback handler
2. Make LLM return output that is unparsable (invalid JSON or 2 code blocks)
3. OutputFixingParser fails to parse the output and throws an exception, which triggers a retry that calls the LLM via the synchronous `run` function, which doesn't await coroutines. Python then emits the following warning:
```
RuntimeWarning: coroutine 'AsyncCallbackHandler.on_chat_model_start' was never awaited
```
### Expected behavior
1. Should work with coroutines as expected | https://github.com/langchain-ai/langchain/issues/7989 | https://github.com/langchain-ai/langchain/pull/8776 | cc908d49a3c23e128fab7c89fa45d7cc4114f028 | 33cdb06b5c9d4d3e7f54d5e1e7c980dfae33923b | 2023-07-20T08:29:12Z | python | 2023-08-07T21:42:48Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,982 | ["langchain/chains/openai_functions/extraction.py"] | TypeError: create_extraction_chain() got an unexpected keyword argument 'verbose' | ### Feature request
Almost all chains offered in the LangChain framework support a verbose option, which helps developers understand what prompt is being applied under the hood and plan their work accordingly. It helps immensely while debugging. create_extraction_chain is a very useful chain, yet I found it does not accept the verbose attribute. A minimal repro follows.
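A minimal repro (assuming `schema` and `llm` are already defined):
```python
from langchain.chains import create_extraction_chain

chain = create_extraction_chain(schema, llm, verbose=True)
# TypeError: create_extraction_chain() got an unexpected keyword argument 'verbose'
```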
### Motivation
For developers who follow the official LangChain documentation without reading the underlying code, this error is surprising. Supporting the attribute keeps the API consistent and improves the debugging experience for this chain.
### Your contribution
I can raise the PR for this

| https://github.com/langchain-ai/langchain/issues/7982 | https://github.com/langchain-ai/langchain/pull/7984 | 812a1643db9daac573f77f7cdbce3fea90ba0507 | d6493590da3977b5077c13ff3aaad591f71637d6 | 2023-07-20T06:39:12Z | python | 2023-07-20T13:52:13Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,652 | ["langchain/cache.py", "tests/unit_tests/test_cache.py"] | SQLite LLM cache clear does not take effect | ### System Info
Langchain version: 0.0.231
Python version: 3.10.11
Bug:
There is an issue when clearing LLM cache for SQL Alchemy based caches.
langchain.llm_cache.clear() does not clear the cache for SQLite LLM cache.
Reason: the `clear` method never commits the DELETE, so the change is rolled back and the deletion doesn't take effect (a sketch of the likely fix follows).
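A sketch of the likely fix in the SQLAlchemy-backed cache (attribute names assumed from `langchain/cache.py`):
```python
from sqlalchemy.orm import Session

def clear(self, **kwargs) -> None:
    """Clear cache."""
    with Session(self.engine) as session:
        session.query(self.cache_schema).delete()
        session.commit()  # without the commit, the DELETE never takes effect
```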
### Who can help?
@hwchase17 @ag
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
- Configure SQLite LLM Cache
- Call an LLM via langchain
- The SQLite database get's populated with an entry
- call langchain.llm_cache.clear()
- Actual Behaviour: Notice that the entry is still in SQLite
### Expected behavior
- Expected Behaviour: The cache database table should be empty | https://github.com/langchain-ai/langchain/issues/7652 | https://github.com/langchain-ai/langchain/pull/7653 | c17a80f11c200e2f7a65b54eb2f2942b8a6ea3bd | 24c165420827305e813f4b6d501f93d18f6d46a4 | 2023-07-13T12:36:48Z | python | 2023-07-13T13:39:04Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,616 | ["libs/langchain/langchain/chains/openai_functions/base.py"] | _get_python_function_name does not work with classes | ### System Info
LangChain : v0.0.231
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Issue
`convert_to_openai_function` does not work as intended:
- Classes are not supported
- Any function without its source is not supported
# Reproduce
```python
from dataclasses import dataclass

from langchain.chains.openai_functions.base import (
    convert_to_openai_function,
)

@dataclass
class System:
    name: str
    ram: int

convert_to_openai_function(System)
```
### Expected behavior
When calling `langchain.chains.openai_functions.base.convert_to_openai_function`, the subsequent call to `_get_python_function_name` fails because it tries to read source code (and cannot find it).
Something much simpler would be to access the `__name__` attribute of the callable.
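A minimal sketch of that simpler approach (assuming the helper keeps its current name):
```python
from typing import Callable

def _get_python_function_name(function: Callable) -> str:
    """Get the name of a callable without reading its source code."""
    return function.__name__
```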
| https://github.com/langchain-ai/langchain/issues/7616 | https://github.com/langchain-ai/langchain/pull/7617 | 797c9e92c82f8e843b321ec2167bb1678ced03cf | 4a7ebb7184fa5dad4cdfef49d1eab2a3e9029a2b | 2023-07-12T21:03:09Z | python | 2023-08-06T22:12:03Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,603 | ["docs/extras/integrations/vectorstores/meilisearch.ipynb", "libs/langchain/langchain/vectorstores/__init__.py", "libs/langchain/langchain/vectorstores/meilisearch.py", "libs/langchain/tests/integration_tests/vectorstores/docker-compose/meilisearch.yaml", "libs/langchain/tests/integration_tests/vectorstores/test_meilisearch.py"] | Add support for Meilisearch vector databases | ### Feature request
Add support for Meilisearch vector search.
[Meilisearch](https://www.meilisearch.com) is an open-source search engine. See [documentation](https://www.meilisearch.com/docs)
### Motivation
Meilisearch is releasing the vector search/store feature, which should be available from July 31st.
### Your contribution
I'm working on it and will submit a PR for this issue soon. | https://github.com/langchain-ai/langchain/issues/7603 | https://github.com/langchain-ai/langchain/pull/7649 | b7d6e1909cf5346a4384280fba3d732597778bae | 8ee56b9a5b3751db122bd896daeb1e0b7766def3 | 2023-07-12T15:32:23Z | python | 2023-07-29T00:06:54Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,571 | ["langchain/retrievers/kendra.py"] | AmazonKendraRetriever "Could not load credentials" error in latest release | ### System Info
LangChain version: 0.0.229
Platform: AWS Lambda execution
Python version: 3.9
I get the following error when creating the AmazonKendraRetriever using LangChain version 0.0.229.
Code to create retriever:
`retriever = AmazonKendraRetriever(index_id=kendra_index)`
Error:
```
[ERROR] ValidationError: 1 validation error for AmazonKendraRetriever
__root__
Could not load credentials to authenticate with AWS client. Please check that credentials in the specified profile name are valid. (type=value_error)
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 171, in lambda_handler
    retriever = AmazonKendraRetriever(index_id=kendra_index)
  File "/opt/python/langchain/load/serializable.py", line 74, in __init__
    super().__init__(**kwargs)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
```
When using LangChain version 0.0.219 this error does not occur.
Issue also raised on aws-samples git repo with potential solution: https://github.com/aws-samples/amazon-kendra-langchain-extensions/issues/24
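One workaround, pending a proper fix, is to construct the boto3 client yourself and hand it to the retriever (sketch; the `client` parameter and region are assumptions):
```python
import boto3
from langchain.retrievers import AmazonKendraRetriever

kendra_client = boto3.client("kendra", region_name="us-east-1")
retriever = AmazonKendraRetriever(index_id=kendra_index, client=kendra_client)
```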
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Install latest version of Langchain
2. Follow instructions here: https://python.langchain.com/docs/modules/data_connection/retrievers/integrations/amazon_kendra_retriever
### Expected behavior
Error not thrown when creating AmazonKendraRetriever | https://github.com/langchain-ai/langchain/issues/7571 | https://github.com/langchain-ai/langchain/pull/7629 | 0e1d7a27c62b15fba6bcafc5f8ac996d57e0b1d3 | f11d845dee355709b41dec36dcc7c74f7b90c7ec | 2023-07-12T00:16:40Z | python | 2023-07-13T03:47:35Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,569 | ["langchain/document_loaders/notiondb.py"] | Issue: Document loader for Notion DB doesn't supports some properties | ### Issue you'd like to raise.
The current version of the Notion DB document loader doesn't support the following metadata properties:
- `unique_id`
- https://www.notion.so/help/unique-id
- `status`
- https://www.notion.so/help/guides/status-property-gives-clarity-on-tasks
- `people`
- useful property when you assign some task to assignees
### Suggestion:
I would like to make a PR to fix this issue if it's okay. | https://github.com/langchain-ai/langchain/issues/7569 | https://github.com/langchain-ai/langchain/pull/7570 | 5f17c57174c88e8c00bd71216dcf44b14fee7aaf | 3f7213586e5fc5222fe6b6c889aa50776cd1c988 | 2023-07-12T00:02:03Z | python | 2023-07-12T07:34:54Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,542 | ["langchain/requests.py"] | Issue: Passing auth object to LLMRequestsChain | ### Issue you'd like to raise.
Accessing many corporate resources requires special authentication, e.g. Kerberos.
The `requests` library supports passing an auth object, e.g.
`requests.get(url, auth=HttpNegotiateAuth(), verify=False)` to use SSPI.
We're able to pass a `requests_wrapper` to `LLMRequestsChain`, but it only allows changing headers, not the actual get method that is used.
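For illustration, a sketch of a custom wrapper that injects an auth object (class names assumed; `HttpNegotiateAuth` comes from the `requests-negotiate-sspi` package):
```python
import requests
from langchain.requests import TextRequestsWrapper

class KerberosRequestsWrapper(TextRequestsWrapper):
    """Route GETs through requests with SSPI auth attached."""

    def get(self, url: str, **kwargs) -> str:
        from requests_negotiate_sspi import HttpNegotiateAuth
        return requests.get(url, auth=HttpNegotiateAuth(), verify=False, **kwargs).text
```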
### Suggestion:
Allow for more generic generic wrappers to be passed? Allow passing a requests-compatible auth object? | https://github.com/langchain-ai/langchain/issues/7542 | https://github.com/langchain-ai/langchain/pull/7701 | 1e40427755f3034c5c411c1d0a921cdb3e13849d | 663b0933e488383e6a9bc2a04b4b1cf866a8ea94 | 2023-07-11T13:59:38Z | python | 2023-07-14T12:38:24Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,524 | ["langchain/callbacks/file.py"] | Specific name of the current chain is not displayed | ### System Info
LangChain v0.0.229, Python v3.10.12, Ubuntu 20.04.2 LTS
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
I am encountering an issue where the specific name of the current chain is not being displayed in the console output, even though I have set 'verbose=True' in the MultiPromptChain and other Chains. When the program enters a new chain, it only prints 'Entering new chain...' without specifying the name of the chain. This makes it difficult to debug and understand which chain is currently being used. Could you please look into this issue and provide a way to display the name of the current chain in the console output? Thank you.
The output could be
```
> Entering new chain...
> Entering new chain...
lib/python3.10/site-packages/langchain/chains/llm.py:275: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
warnings.warn(
> Finished chain.
math: {'input': 'What is the derivative of a function?'}
> Entering new chain...
Prompt after formatting:
You are a very good mathematician. You are great at answering math questions. \nYou are so good because you are able to break down hard problems into their component parts, \nanswer the component parts, and then put them together to answer the broader question.
Here is a question:
What is the derivative of a function?
> Finished chain.
> Finished chain.
```
### Expected behavior
```
> Entering new MultiPromptChain chain...
> Entering new LLMRouterChain chain...
lib/python3.10/site-packages/langchain/chains/llm.py:275: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
warnings.warn(
> Finished chain.
math: {'input': 'What is the derivative of a function?'}
> Entering new LLMChain[math] chain...
Prompt after formatting:
You are a very good mathematician. You are great at answering math questions. \nYou are so good because you are able to break down hard problems into their component parts, \nanswer the component parts, and then put them together to answer the broader question.
Here is a question:
What is the derivative of a function?
> Finished chain.
> Finished chain.
``` | https://github.com/langchain-ai/langchain/issues/7524 | https://github.com/langchain-ai/langchain/pull/7687 | 3874bb256e09d377032ae54b1592ca3dd7cf9e4d | af6d333147db0af7d558a4a66d6c2752b6027204 | 2023-07-11T08:28:40Z | python | 2023-07-14T02:39:21Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,472 | ["langchain/vectorstores/pinecone.py", "tests/integration_tests/vectorstores/test_pinecone.py"] | Pinecone: Support starter tier | ### Feature request
Adapt the pinecone vectorstore to support upcoming starter tier. The changes are related to removing namespaces and `delete by metadata` feature.
### Motivation
Indexes in upcoming Pinecone V4 won't support:
* namespaces
* `configure_index()`
* delete by metadata
* `describe_index()` with metadata filtering
* `metadata_config` parameter to `create_index()`
* `delete()` with the `deleteAll` parameter
### Your contribution
I'll do it. | https://github.com/langchain-ai/langchain/issues/7472 | https://github.com/langchain-ai/langchain/pull/7473 | 5debd5043e61d29efea661c20818b48a0f39e5a6 | 9d13dcd17c2dfab8f087bcc37e99f1181dfe5c63 | 2023-07-10T10:19:16Z | python | 2023-07-10T15:39:47Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,283 | ["langchain/llms/anthropic.py"] | anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version' | ### System Info
When I initialise ChatAnthropic(), I get the error:
anthropic_version = packaging.version.parse(version("anthropic"))
AttributeError: module 'packaging' has no attribute 'version'
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.chat_models import ChatOpenAI, ChatAnthropic
llm = ChatAnthropic()
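For context, a sketch of the likely root cause and fix (the exact code in `langchain/llms/anthropic.py` is assumed; the module paths are standard Python):
```python
# `import packaging` alone does not import the `version` submodule,
# so packaging.version.parse(...) raises AttributeError.
from importlib.metadata import version
from packaging.version import parse

anthropic_version = parse(version("anthropic"))
```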
### Expected behavior
As shown above. | https://github.com/langchain-ai/langchain/issues/7283 | https://github.com/langchain-ai/langchain/pull/7306 | d642609a23219b1037f84492c2bc56777e90397a | bac56618b43912acf4970d72d2497507eb14ceb1 | 2023-07-06T15:35:39Z | python | 2023-07-06T23:35:42Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,184 | ["libs/langchain/langchain/chains/qa_with_sources/base.py", "libs/langchain/tests/unit_tests/chains/test_qa_with_sources.py"] | Issue: RetrievalQAWithSourcesChain gives error 'too many values to unpack (expected 2)' after running. | Hello,
I'm using _langchain_ for QA with court case documents. More specifically, the RetrievalQAWithSourcesChain to retrieve the answer and document source information. However, when running the chain with embedded documents, I get the following error:
```
ValueError: too many values to unpack (expected 2)
Traceback:
response = qa({"question": pregunta}, return_only_outputs=True)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 166, in __call__
raise e
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\qa_with_sources\base.py", line 132, in _call
answer, sources = re.split(r"SOURCES:\s", answer)
```
The passed documents are the sections from the court case. I added the following **metadata** fields:
1. Source: PDF file name.
2. Section: Name of the section
3. Section_chunk: Numeral value used for identification in case the section was divided into chunks.
4. Page: Page where the section chunk starts.
The documents are passed as retriever to the chain with FAISS (FAISS.from_documents(documents, self.embeddings)).
I tried out two approaches (both resulting in the same error):
1. providing the _load_qa_chain_ as chain
2. creating it using the class method **_.from_chain_type_**
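For what it's worth, the `re.split` unpacking in the traceback fails whenever the model's answer contains more than one `SOURCES:` marker, e.g.:
```python
import re

answer = "Final answer.\nSOURCES: case_a.pdf\nSOURCES: case_b.pdf"
answer, sources = re.split(r"SOURCES:\s", answer)
# ValueError: too many values to unpack (expected 2)
```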
My question is why this error occurs, and whether the kind of metadata used may be causing it.
Thank you in advance! | https://github.com/langchain-ai/langchain/issues/7184 | https://github.com/langchain-ai/langchain/pull/8716 | a3c79b1909fe1cbe85394c353b0535117ef0cdf0 | 8bebc9206fb77ee22a9b0592c1efb32f27bb45db | 2023-07-05T09:49:42Z | python | 2023-08-16T20:30:15Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,034 | ["libs/langchain/langchain/document_loaders/parsers/pdf.py", "libs/langchain/langchain/document_loaders/pdf.py"] | Loading online PDFs gives temporary file path as source in metadata | Hi,
first up, thank you for making langchain! I was playing around a little and found a minor issue with loading online PDFs, and would like to start contributing to langchain maybe by fixing this.
### System Info
langchain 0.0.220, google collab, python 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import PyMuPDFLoader
loader = PyMuPDFLoader('https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf')
pages = loader.load()
pages[0].metadata
```
<img width="977" alt="image" src="https://github.com/hwchase17/langchain/assets/21276922/4ededc60-bb03-4502-a8c8-3c221ab109c4">
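A sketch of one possible fix (the `web_path` attribute name comes from `BasePDFLoader`; the exact wiring is assumed):
```python
# After parsing, restore the original URL instead of the temp file path:
if self.web_path is not None:
    for doc in docs:
        doc.metadata["source"] = self.web_path
```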
### Expected behavior
Instead of giving the temporary file path, which is not useful and deleted shortly after, it could be more helpful if the source is set to be the URL passed to it. This would require some fixes in the `langchain/document_loaders/pdf.py` file. | https://github.com/langchain-ai/langchain/issues/7034 | https://github.com/langchain-ai/langchain/pull/13274 | 6f64cb5078bb71007d25fff847541fd8f7713c0c | 9bd6e9df365e966938979511237c035a02fb4fa9 | 2023-07-01T23:24:53Z | python | 2023-11-29T20:07:46Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,768 | ["langchain/chains/sequential.py", "tests/unit_tests/chains/test_sequential.py"] | Can't use memory for an internal LLMChain inside a SequentialChain | ### System Info
Langchain 0.0.214
Python 3.11.1
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a `SequentialChain` that contains 2 `LLMChain`s, and add a memory to the first one.
2. When running, you'll get a validation error:
```
Missing required input keys: {'chat_history'}, only had {'human_input'} (type=value_error)
```
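A minimal end-to-end sketch of the repro described above (the prompts are illustrative):
```python
from langchain.chains import LLMChain, SequentialChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

llm = OpenAI()
first = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(
        "History: {chat_history}\nHuman: {human_input}\nAI:"
    ),
    output_key="draft",
    memory=ConversationBufferMemory(memory_key="chat_history"),
)
second = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Improve this: {draft}"),
    output_key="final",
)
# Raises at construction: Missing required input keys: {'chat_history'},
# only had {'human_input'} (type=value_error)
overall = SequentialChain(
    chains=[first, second],
    input_variables=["human_input"],
    output_variables=["final"],
)
```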
### Expected behavior
You should be able to add memory to one chain, not just the Sequential Chain | https://github.com/langchain-ai/langchain/issues/6768 | https://github.com/langchain-ai/langchain/pull/6769 | 488d2d5da95a2bacdca3d1623d862ac5ab28d59e | f307ca094b0d175d71ac424eba3d9f7ef5fc44f1 | 2023-06-26T16:09:11Z | python | 2023-07-13T06:47:44Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,756 | ["langchain/agents/initialize.py", "tests/unit_tests/agents/test_initialize.py"] | Recent tags change causes AttributeError: 'str' object has no attribute 'value' on initialize_agent call | ### System Info
- Langchain: 0.0.215
- Platform: ubuntu
- Python 3.10.12
### Who can help?
@vowelparrot
https://github.com/hwchase17/langchain/blob/d84a3bcf7ab3edf8fe1d49083e066d51c9b5f621/langchain/agents/initialize.py#L54
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Fails if agent initialized as follows:
```python
agent = initialize_agent(
    agent='zero-shot-react-description',
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=30,
    memory=ConversationBufferMemory(),
    handle_parsing_errors=True,
)
```
With
```
...
lib/python3.10/site-packages/langchain/agents/initialize.py", line 54, in initialize_agent
tags_.append(agent.value)
AttributeError: 'str' object has no attribute 'value'
```
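As a workaround until strings are handled again, passing the `AgentType` enum avoids the `.value` access on a plain `str`:
```python
from langchain.agents import AgentType, initialize_agent

agent = initialize_agent(
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    tools=tools,
    llm=llm,
    verbose=True,
)
```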
### Expected behavior
Expected to work as before where agent is specified as a string (or if this is highlighting that agent should actually be an object, it should indicate that instead of the error being shown). | https://github.com/langchain-ai/langchain/issues/6756 | https://github.com/langchain-ai/langchain/pull/6765 | ba622764cb7ccf4667878289f959857348ef8c19 | 6d30acffcbea5807835839585132d3946bb81661 | 2023-06-26T11:00:29Z | python | 2023-06-26T16:28:11Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,650 | ["libs/langchain/langchain/chat_models/azure_openai.py"] | [AzureChatOpenAI] openai_api_type can't be changed from the default 'azure' value | ### System Info
Hello,
during the development of an application that needs to authenticate to Azure services and use the wrapper [AzureChatOpenAi](https://github.com/hwchase17/langchain/blob/master/langchain/chat_models/azure_openai.py), we encountered an error due to the fact that the model could not use the 'azure_ad' type.
It seems that this class always sets openai_api_type to its default value of 'azure', even when the environment variable 'OPENAI_API_TYPE' specifies 'azure_ad'.
Why is it so?
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
answering_llm = AzureChatOpenAI(
    deployment_name=ANSWERING_MODEL_CONFIG.model_name,
    model_name=ANSWERING_MODEL_CONFIG.model_type,  # "gpt-3.5-turbo"
    openai_api_type="azure_ad",  # IF THIS IS NOT EXPLICITLY PASSED IT FAILS
    openai_api_key=auth_token,
    temperature=ANSWERING_MODEL_CONFIG.temperature,
    max_tokens=ANSWERING_MODEL_CONFIG.max_tokens,
)
```
### Expected behavior
We expect the wrapper to take the value of the environment variable correctly. | https://github.com/langchain-ai/langchain/issues/6650 | https://github.com/langchain-ai/langchain/pull/8622 | 29f51055e8f7d060e6d3a5480591bef76652edae | e68a1d73d0c84503702a2bf66b52d7ae2336eb67 | 2023-06-23T14:09:47Z | python | 2023-08-04T03:21:41Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,610 | ["langchain/chat_models/vertexai.py"] | ChatVertexAI Error: _ChatSessionBase.send_message() got an unexpected keyword argument 'context' | ### System Info
langchain version: 0.0.209
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://python.langchain.com/docs/modules/model_io/models/chat/integrations/google_vertex_ai_palm
### Expected behavior
I get an error saying "TypeError: _ChatSessionBase.send_message() got an unexpected keyword argument 'context'" when I run `chat(messages)` command mentioned in https://python.langchain.com/docs/modules/model_io/models/chat/integrations/google_vertex_ai_palm.
This is probably because ChatSession.send_message does not accept a 'context' argument, while ChatVertexAI._generate automatically adds 'context' to params because chat-bison is a non-code model. | https://github.com/langchain-ai/langchain/issues/6610 | https://github.com/langchain-ai/langchain/pull/6652 | c2b25c17c5c8d35a7297f665f2327b9671855898 | 9e52134d30203a9125532621abcd5a102e3f2bfb | 2023-06-22T20:56:38Z | python | 2023-06-23T20:38:21Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,582 | ["langchain/vectorstores/weaviate.py"] | Typo | ### System Info
latest version
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Typo on:
https://github.com/hwchase17/langchain/blob/d50de2728f95df0ffc59c538bd67e116a8e75a53/langchain/vectorstores/weaviate.py#L49
`Instal` -> `install`
### Expected behavior
typo corrected | https://github.com/langchain-ai/langchain/issues/6582 | https://github.com/langchain-ai/langchain/pull/6595 | f6fdabd20b3b14f8728f8c74d9711322400f9369 | ba256b23f241e1669536f7e70c6365ceba7a9cfa | 2023-06-22T09:34:08Z | python | 2023-06-23T21:56:54Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,472 | ["langchain/callbacks/mlflow_callback.py"] | DOC: Incorrect type for tags parameter in MLflow callback | ### Issue with current documentation:
In the documentation the tag type is string, but in the code it's a dictionary.
The proposed fix is to change the following two lines "tags (str):" to "tags (dict):".
https://github.com/hwchase17/langchain/blob/7414e9d19603c962063dd337cdcf3c3168d4b8be/langchain/callbacks/mlflow_callback.py#L120
https://github.com/hwchase17/langchain/blob/7414e9d19603c962063dd337cdcf3c3168d4b8be/langchain/callbacks/mlflow_callback.py#L225
### Idea or request for content:
_No response_ | https://github.com/langchain-ai/langchain/issues/6472 | https://github.com/langchain-ai/langchain/pull/6473 | 9187d2f3a97abc6d89daea9b5abfa652a425e1de | fe941cb54a80976bfc7575ce59a518ae428801ee | 2023-06-20T09:57:57Z | python | 2023-06-26T09:12:23Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,462 | ["libs/langchain/langchain/chat_models/openai.py"] | AzureChatOpenAI Streaming causes IndexError: list index out of range | ### System Info
langchain-0.0.205-py3, macos ventura, python 3.11
### Who can help?
@hwchase17 / @agola11
### Information
- [x] The official example notebooks/scripts
https://python.langchain.com/docs/modules/model_io/models/chat/how_to/streaming
### Related Components
- [X] LLMs/Chat Models
### Reproduction
### Reproduction code
```python
# test.py
from langchain.chat_models import AzureChatOpenAI
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import (
    HumanMessage,
)

chat_1 = ChatOpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    openai_api_key="SOME-KEY",
    model='gpt-3.5-turbo',
    temperature=0.7,
    request_timeout=60,
    max_retries=1,
)

chat_2 = AzureChatOpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    openai_api_base="https://some-org-openai.openai.azure.com/",
    openai_api_version="2023-06-01-preview",
    openai_api_key="SOME-KEY",
    deployment_name='gpt-3_5',
    temperature=0.7,
    request_timeout=60,
    max_retries=1,
)
resp_1 = chat_1([HumanMessage(content="Write me a song about sparkling water.")])
resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")])
```
```shell
python test.py
```
### Output of command 1 (OpenAI)
```shell
Verse 1:
Bubbles dancing in my cup
Refreshing taste, can't get enough
Clear and crisp, it's always there
A drink that's beyond compare
Chorus:
Sparkling water, oh how you shine
You make my taste buds come alive
With every sip, I feel so fine
Sparkling water, you're one of a kind
Verse 2:
A drink that's light and calorie-free
A healthier choice, it's plain to see
A perfect thirst quencher, day or night
With sparkling water, everything's right
Chorus:
Sparkling water, oh how you shine
You make my taste buds come alive
With every sip, I feel so fine
Sparkling water, you're one of a kind
Bridge:
From the fizzy sensation to the bubbles popping
You're the drink I never want to stop sipping
Whether at a party or on my own
Sparkling water, you're always in the zone
Chorus:
Sparkling water, oh how you shine
You make my taste buds come alive
With every sip, I feel so fine
Sparkling water, you're one of a kind
Outro:
Sparkling water, you're my go-to
A drink that always feels brand new
With each sip, I'm left in awe
Sparkling water, you're the perfect beverage
```
### Output of command 2 (Azure OpenAI)
```shell
raw.Traceback (most recent call last):
File "/Users/someone/Development/test.py", line 29, in <module>
resp_2 = chat_2([HumanMessage(content="Write me a song about sparkling water.")])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 208, in __call__
generation = self.generate(
^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 102, in generate
raise e
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 94, in generate
results = [
^
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/base.py", line 95, in <listcomp>
self._generate(m, stop=stop, run_manager=run_manager, **kwargs)
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 334, in _generate
role = stream_resp["choices"][0]["delta"].get("role", role)
~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range
```
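A guard along these lines in `langchain/chat_models/openai.py` would likely avoid the crash, since Azure appears to emit stream chunks whose `choices` list is empty (a patch sketch, not the actual implementation):
```python
for stream_resp in self.completion_with_retry(messages=message_dicts, **params):
    if len(stream_resp["choices"]) == 0:
        continue  # Azure can emit chunks with an empty choices list
    role = stream_resp["choices"][0]["delta"].get("role", role)
    token = stream_resp["choices"][0]["delta"].get("content") or ""
    # ...accumulate the token and fire callbacks as before
```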
### Expected behavior
I can't find anything in existing issues or documentation stating that there is a known bug in the AzureOpenAI Service Streaming. | https://github.com/langchain-ai/langchain/issues/6462 | https://github.com/langchain-ai/langchain/pull/8241 | c1ea8da9bc2986532d6f1db810996ee72d5a6c1c | 0af48b06d00b23be65d0a10ff27aff4db0f6c85f | 2023-06-20T04:57:00Z | python | 2023-07-25T18:30:22Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,431 | ["langchain/prompts/chat.py", "tests/unit_tests/prompts/test_chat.py"] | ChatPromptTemplate with partial variables is giving validation error | ### System Info
langchain-0.0.205, python3.10
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Write this into a notebook cell
2. Run:
```python
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate

chat_prompt = ChatPromptTemplate(
    messages=[
        HumanMessagePromptTemplate.from_template(
            "Do something with {question} using {context} giving it like {formatins}"
        )
    ],
    input_variables=["question", "context"],
    partial_variables={"formatins": "some structure"},
)
```
3. It throws the following error:
```
ValidationError: 1 validation error for ChatPromptTemplate
__root__
Got mismatched input_variables. Expected: {'formatins', 'question', 'context'}. Got: ['question', 'context'] (type=value_error)
```
4. This was working until 24 hours ago. Potentially related to a recent commit to langchain/prompts/chat.py.
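An untested workaround sketch: listing the partial variable in `input_variables` as well matches the 'Expected' set in the error, so it should satisfy the validator, even though it arguably shouldn't be necessary:
```python
chat_prompt = ChatPromptTemplate(
    messages=[
        HumanMessagePromptTemplate.from_template(
            "Do something with {question} using {context} giving it like {formatins}"
        )
    ],
    input_variables=["question", "context", "formatins"],  # include the partial too
    partial_variables={"formatins": "some structure"},
)
```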
### Expected behavior
The chat_prompt should get created with the partial variables injected.
If this is an expected change, can you please suggest what the new way to use partial_variables should be?
Thanks | https://github.com/langchain-ai/langchain/issues/6431 | https://github.com/langchain-ai/langchain/pull/6456 | 02c0a1e77eb9636850c8c29da33885a32b4cc2eb | 6efd5fa2b9d46c7b4db6ad638097f010b745f0cc | 2023-06-19T16:15:49Z | python | 2023-06-20T05:08:15Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,380 | ["langchain/graphs/neo4j_graph.py", "tests/integration_tests/chains/test_graph_database.py"] | Neo4J schema not inferred correctly by Neo4JGraph Object | ### System Info
langchain=0.0.2
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce behaviors:
1. Push the following dataset to neo4J (say in neo4J browser)
```
CREATE (la:LabelA {property_a: 'a'})
CREATE (lb:LabelB {property_b1: 123, property_b2: 'b2'})
CREATE (lc:LabelC)
MERGE (la)-[:REL_TYPE]-> (lb)
MERGE (la)-[:REL_TYPE {rel_prop: 'abc'}]-> (lc)
```
2. Instantiate a Neo4JGraphObject, connect and refresh schema
```
from langchain.graphs import Neo4jGraph
graph = Neo4jGraph(
url=NEO4J_URL,
username=NEO4J_USERNAME,
password=NEO4J_PASSWORD,
)
graph.refresh_schema()
print(graph.get_schema)
```
You will obtain
```
Node properties are the following:
[{'properties': [{'property': 'property_a', 'type': 'STRING'}], 'labels': 'LabelA'}, {'properties': [{'property': 'property_b2', 'type': 'STRING'}, {'property': 'property_b1', 'type': 'INTEGER'}], 'labels': 'LabelB'}]
Relationship properties are the following:
[{'type': 'REL_TYPE', 'properties': [{'property': 'rel_prop', 'type': 'STRING'}]}]
The relationships are the following:
['(:LabelA)-[:REL_TYPE]->(:LabelB)']
```
### Expected behavior
```
Node properties are the following:
[{'properties': [{'property': 'property_a', 'type': 'STRING'}], 'labels': 'LabelA'}, {'properties': [{'property': 'property_b2', 'type': 'STRING'}, {'property': 'property_b1', 'type': 'INTEGER'}], 'labels': 'LabelB'}]
Relationship properties are the following:
[{'type': 'REL_TYPE', 'properties': [{'property': 'rel_prop', 'type': 'STRING'}]}]
The relationships are the following:
['(:LabelA)-[:REL_TYPE]->(:LabelB)', '(:LabelA)-[:REL_TYPE]->(:LabelC)']
``` | https://github.com/langchain-ai/langchain/issues/6380 | https://github.com/langchain-ai/langchain/pull/6381 | b0d80c4b3e128f27bd1b9df48ed4afbe17950fec | 22601b0b6323e6465f78ca9bc16152062a2b65ba | 2023-06-18T19:19:04Z | python | 2023-06-20T05:48:35Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,370 | ["langchain/agents/agent.py"] | Sliding window of intermediate actions for agents | ### Feature request
Allow tweaking the history window of intermediate actions that are sent to the LLM (a sketch follows this list):
* Send a sliding window of the N last actions
* Only send a specific snapshot (can be useful for code generation tasks, for example where the agent needs to perfect the code until it works).
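For illustration, a sketch of what a sliding-window hook could look like on the executor (the `trim_intermediate_steps` parameter name is hypothetical):
```python
from langchain.agents import AgentExecutor

# Keep only the last 5 (action, observation) pairs on each LLM call.
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    trim_intermediate_steps=lambda steps: steps[-5:],  # hypothetical hook
)
```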
### Motivation
Currently, agents use the entire length of intermediate actions whenever they call the LLM.
This means that long-running agents can quickly reach the token limit.
### Your contribution
I'm willing to write a PR for this if the feature makes sense for the community | https://github.com/langchain-ai/langchain/issues/6370 | https://github.com/langchain-ai/langchain/pull/6476 | 92ef77da3523f051cf17a854b2e5c2c767bbf64f | a8bbfb2da3f8c28869b12c8a9bb21209b0d03089 | 2023-06-18T15:56:26Z | python | 2023-07-13T06:09:25Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,365 | ["langchain/chat_models/promptlayer_openai.py"] | PromptLayerChatOpenAI does not support the newest function calling feature | ### System Info
Python Version: 3.11
Langchain Version: 0.0.209
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
```
llm = PromptLayerChatOpenAI(model="gpt-3.5-turbo-0613", pl_tags=tags, return_pl_id=True)
predicted_message = llm.predict_messages(messages, functions=functions, callbacks=callbacks)
```
The `predicted_message.additional_kwargs` attribute ends up as an empty dict, because the `functions` kwarg is never passed through to the parent class.
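A sketch of the likely fix inside `PromptLayerChatOpenAI._generate` (the surrounding code is assumed):
```python
# Forward **kwargs (which carry `functions`) to the parent class:
generated_responses = super()._generate(
    messages, stop=stop, run_manager=run_manager, **kwargs
)
```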
### Expected behavior
Predicted AI Message should have a `function_call` key on `additional_kwargs` attribute. | https://github.com/langchain-ai/langchain/issues/6365 | https://github.com/langchain-ai/langchain/pull/6366 | e0cb3ea90c1f8ec26957ffca65c6e451d444c69d | 09acbb84101bc6df373ca5a1d6c8d212bd3f577f | 2023-06-18T13:00:32Z | python | 2023-07-06T17:16:04Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,282 | ["langchain/chains/llm_requests.py"] | LLMRequestsChain not enforcing headers when making http requests | ### System Info
LangChain version 0.0.201
### Who can help?
@hwchase17 @agola
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Based on the documentation example, run the following script:
```python
from langchain.llms import OpenAI
from langchain.chains import LLMRequestsChain, LLMChain
from langchain.prompts import PromptTemplate
template = """Here is a company website content :
----
{requests_result}
----
We want to learn more about a company's activity and the kind of
clients they target. Perform an analysis and write a short summary.
"""
PROMPT = PromptTemplate(
input_variables=["requests_result"],
template=template,
)
chain = LLMRequestsChain(llm_chain=LLMChain(llm=OpenAI(temperature=0), prompt=PROMPT))
print(chain.requests_wrapper)
```
Gives
```bash
python3 bug-langchain-requests.py
headers=None aiosession=None
```
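For comparison, a sketch of explicitly constructing the wrapper with headers (assuming `TextRequestsWrapper` is honored when passed in):
```python
from langchain.requests import TextRequestsWrapper

wrapper = TextRequestsWrapper(headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"})
chain = LLMRequestsChain(
    llm_chain=LLMChain(llm=OpenAI(temperature=0), prompt=PROMPT),
    requests_wrapper=wrapper,
)
print(chain.requests_wrapper)  # headers should now be set
```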
### Expected behavior
Provided headers should be enforced
```bash
python3 bug-langchain-requests.py
headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36'} aiosession=None
``` | https://github.com/langchain-ai/langchain/issues/6282 | https://github.com/langchain-ai/langchain/pull/6283 | 23cdebddc446d14b22003819fbe66884b600c998 | 9ca11c06b73f225ff431500e174bf21fa8eb9a33 | 2023-06-16T12:44:22Z | python | 2023-06-16T23:21:01Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,234 | ["langchain/tools/gmail/send_message.py"] | Gmail toolkit cannot handle sending email to one person correctly | ### System Info
Gmail toolkit cannot handle sending email to one person correctly - if I want to send email to one person it doesn't consider that `action_input` should look like:
```
{
...
to: ["[email protected]"]
...
}
```
Instead it look like:
```
{
...
to: "[email protected]"
...
}
```
It causes an error with the `To` header - it provides a list of individual characters to the Gmail API - `["e", "m", ...]`.
Error:
```
<HttpError 400 when requesting https://gmail.googleapis.com/gmail/v1/users/me/messages/send?alt=json returned "Invalid To header". Details: "[{'message': 'Invalid To header', 'domain': 'global', 'reason': 'invalidArgument'}]">
```
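A minimal sketch of a possible fix - coercing a bare string into a list before building the header (hypothetical helper and address):
```python
# hypothetical normalization for the send-message tool
def normalize_recipients(to) -> list:
    return [to] if isinstance(to, str) else list(to)

print(normalize_recipients("[email protected]"))                        # ['[email protected]']
print(normalize_recipients(["[email protected]", "[email protected]"]))  # unchanged
```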
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Ask agent to send email to person using GmailToolkit tools.
### Expected behavior
To always use list of emails in `To` header. | https://github.com/langchain-ai/langchain/issues/6234 | https://github.com/langchain-ai/langchain/pull/6242 | 94c789925798053c08ad8cc262b23f2683abd4d2 | 5d149e4d50325d2821263e59bac667f781c48f7a | 2023-06-15T15:30:50Z | python | 2023-06-21T08:25:49Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,231 | ["langchain/experimental/plan_and_execute/schema.py"] | correct the base class | https://github.com/hwchase17/langchain/blob/c7db9febb0edeba1ea108adc4423b789404ce5f2/langchain/experimental/plan_and_execute/schema.py#L31
From
`class ListStepContainer(BaseModel):`
To
`class ListStepContainer(BaseStepContainer):`
| https://github.com/langchain-ai/langchain/issues/6231 | https://github.com/langchain-ai/langchain/pull/6232 | 98e1bbfbbdffca55775e847899d2823f6232ebe7 | af3f4010155a882b8b1021b6e0de130c628dab2c | 2023-06-15T15:16:56Z | python | 2023-07-13T07:03:02Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,225 | ["langchain/chat_models/openai.py"] | OpenAI functions dont work with async streaming... | ### System Info
Version: 0.0.200
### Who can help?
@hwchase17 , @agola11
- I have a PR ready ... creating an issue so I can pair it
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
In `openai.py`, `async def _agenerate(...)` has a different implementation than the synchronous `_generate(...)`.
When running the chain with `acall`:
1. It first fails on `inner_completion += token`, because `token` is `None` for function-call chunks, which raises an error.
2. Even after patching that, the function call is not captured in the resulting message.
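A minimal sketch of the guard that appears to be needed (assumption: function-call stream chunks carry no `content` in their delta):
```python
def accumulate(chunks) -> str:
    inner_completion = ""
    for delta in chunks:
        inner_completion += delta.get("content") or ""  # guard: function-call chunks have content=None
    return inner_completion

# simulated stream: the middle chunk is a function-call delta with no usable "content"
print(accumulate([{"content": "Hi"}, {"function_call": {"name": "f"}}, {"content": "!"}]))  # Hi!
```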
### Expected behavior
the same as `generate` | https://github.com/langchain-ai/langchain/issues/6225 | https://github.com/langchain-ai/langchain/pull/6226 | ea6a5b03e077526896071da80530bebb94eb390b | e2f36ee6082506049419875fa4a374f8fa2a88fe | 2023-06-15T13:22:11Z | python | 2023-06-19T00:05:16Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,198 | ["langchain/vectorstores/elastic_vector_search.py"] | Elasticsearch : ElasticKnnSearch.from_texts throws AttributeError | ### System Info
Langchain version : 0.0.199
Python Version: Python 3.9.16
MacOS
@CodeDevNinja @dev2049
PR https://github.com/hwchase17/langchain/pull/5058 introduced a change to ElasticVectorSearch from_texts which broke, kind of coincidentally, ElasticKnnSearch from_texts
I discovered this issue when running docs/modules/indexes/vectorstores/examples/elasticsearch.ipynb . I got to the following cell:
```python
# Test `add_texts` method
texts = ["Hello, world!", "Machine learning is fun.", "I love Python."]
knn_search.add_texts(texts)
# Test `from_texts` method
new_texts = ["This is a new text.", "Elasticsearch is powerful.", "Python is great for data analysis."]
knn_search.from_texts(new_texts, embeddings, elasticsearch_url=elasticsearch_url)
```
and it said:
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[10], line 7
5 # Test `from_texts` method
6 new_texts = ["This is a new text.", "Elasticsearch is powerful.", "Python is great for data analysis."]
----> 7 knn_search.from_texts(new_texts, embeddings, elasticsearch_url=elasticsearch_url)
File ~/dev/github/langchain/langchain/vectorstores/elastic_vector_search.py:296, in ElasticVectorSearch.from_texts(cls, texts, embedding, metadatas, elasticsearch_url, index_name, refresh_indices, **kwargs)
293 index_name = index_name or uuid.uuid4().hex
294 vectorsearch = cls(
295 elasticsearch_url, index_name, embedding, **kwargs)
--> 296 vectorsearch.add_texts(
297 texts, metadatas=metadatas, refresh_indices=refresh_indices
298 )
299 return vectorsearch
File ~/dev/github/langchain/langchain/vectorstores/elastic_vector_search.py:183, in ElasticVectorSearch.add_texts(self, texts, metadatas, refresh_indices, **kwargs)
181 requests = []
182 ids = []
--> 183 embeddings = self.embedding.embed_documents(list(texts))
184 dim = len(embeddings[0])
185 mapping = _default_text_mapping(dim)
AttributeError: 'str' object has no attribute 'embed_documents'
```
which is a pretty weird error.
This is because https://github.com/cdiddy77/langchain/blob/e74733ab9e5e307fd828ea600ea929a1cb24320f/langchain/vectorstores/elastic_vector_search.py#L294 invokes the `__init__` of the calling class - in this case `ElasticKnnSearch`, whose constructor takes its parameters in a very different order.
This call into the wrong `__init__` was always present, but the PR above added a subsequent call to `add_texts`, which is where the bogus embedding is referenced, causing the exception.
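A stripped-down illustration of the misbinding (hypothetical signatures, loosely modeled on the two classes):
```python
class Parent:
    def __init__(self, elasticsearch_url, index_name, embedding):
        self.embedding = embedding  # expects an Embeddings object

    @classmethod
    def from_texts(cls, texts, embedding, elasticsearch_url, index_name):
        return cls(elasticsearch_url, index_name, embedding)  # positional call

class Child(Parent):
    def __init__(self, index_name, embedding, es_connection=None):  # different order
        self.embedding = embedding

store = Child.from_texts(["foo"], object(), "http://localhost:9200", "idx")
print(store.embedding)  # -> 'idx': a plain str landed in embedding, so embed_documents() later raises AttributeError
```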
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to repro:
1. Open docs/modules/indexes/vectorstores/examples/elasticsearch.ipynb
2. Modify as appropriate with elasticsearch_url, and further down, model_id, dims, cloud_id, username,password of elastic cloud deployment
3. Run until cell below "Test adding vectors"
### Expected behavior
Not throw exception | https://github.com/langchain-ai/langchain/issues/6198 | https://github.com/langchain-ai/langchain/pull/6199 | 854f3fe9b1ca1c3e097cb0ccd55d1406e9c04406 | 574698a5fb2adbc4b6eb20ffe11a949a4f3b0371 | 2023-06-15T04:45:12Z | python | 2023-07-13T23:55:20Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,131 | ["langchain/vectorstores/azuresearch.py"] | Azure Cognitive Search Vector Store doesn't apply search_kwargs when performing queries | ### System Info
Langchain 0.0.199
Python 3.10.11
Windows 11 (but will occur on any platform.
### Who can help?
@hwchase17
@ruoccofabrizio
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
To reproduce this issue create an AzureSearch Vector Store and a RetrievalQA with a search_kwargs, like in this sample code:
```
import os

from langchain.chains import RetrievalQA
from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.azuresearch import AzureSearch

cognitive_search_name = os.environ["AZURE_SEARCH_SERVICE_NAME"]
vector_store_address: str = f"https://{cognitive_search_name}.search.windows.net/"
index_name: str = os.environ["AZURE_SEARCH_SERVICE_INDEX_NAME"]
vector_store_password: str = os.environ["AZURE_SEARCH_SERVICE_ADMIN_KEY"]

embeddings = OpenAIEmbeddings(model="text-embedding-ada-002", chunk_size=1, client=any)
vector_store = AzureSearch(azure_search_endpoint=vector_store_address,
                           azure_search_key=vector_store_password,
                           index_name=index_name,
                           embedding_function=embeddings.embed_query)

llm = AzureChatOpenAI(deployment_name="gpt35", model_name="gpt-3.5-turbo-0301",
                      openai_api_version="2023-03-15-preview", temperature=0, client=None)

retriever = vector_store.as_retriever()
retriever.search_kwargs = {'filters': "metadata eq 'something'"}

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
)
```
When you execute this code via `qa`, the `search_kwargs` appear in the `similarity_search` method in `azuresearch.py`, but they are never passed on to the methods `vector_search`, `hybrid_search`, and `semantic_hybrid`, where they would actually be used.
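A hypothetical sketch of the plumbing that appears to be missing (method names taken from the issue; signatures assumed):
```python
# hypothetical: similarity_search forwarding kwargs such as `filters` downstream
def similarity_search(self, query, k=4, **kwargs):
    if self.search_type == "similarity":
        return self.vector_search(query, k=k, filters=kwargs.get("filters"))
    elif self.search_type == "hybrid":
        return self.hybrid_search(query, k=k, filters=kwargs.get("filters"))
    else:
        return self.semantic_hybrid(query, k=k, filters=kwargs.get("filters"))
```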
### Expected behavior
In my example they should apply a filter to the azure cognitive search index before doing the vector search, but this is not happening because filters will always be empty when it gets to the functions where they are used. (```vector_search```, ```hybrid_search```, and ```semantic_hybrid```) | https://github.com/langchain-ai/langchain/issues/6131 | https://github.com/langchain-ai/langchain/pull/6132 | 395a2a3724507bafc7afe9e04ecbae60a7c66c7e | 22862043543e55fa0467c739714230eae3425512 | 2023-06-14T02:08:49Z | python | 2023-06-19T00:39:06Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | ["langchain/llms/openai.py"] | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
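For illustration, a sketch of the updated mapping (neighboring values shown for context; the 16k limit is an assumption based on the announcement):
```python
model_token_mapping = {
    "gpt-4": 8192,
    "gpt-3.5-turbo": 4096,
    "gpt-3.5-turbo-16k": 16385,  # assumed value for the new 16k variant
}
```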
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | 2023-06-13T21:22:21Z | python | 2023-06-21T08:37:16Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,079 | ["docs/extras/modules/data_connection/document_loaders/integrations/web_base.ipynb"] | Issue: Can't load a public webpage | ### I want to load in the webpage below.
Hi,
Trying to extract some webpage using webbaseloader:
"""
loader = WebBaseLoader("https://researchadmin.asu.edu/)
data = loader.load()
"""
But it gives the following error:
```
SSLError: HTTPSConnectionPool(host='researchadmin.asu.edu', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1002)')))
```
It is a public web page. Can anyone help?
### Suggestion:
_No response_ | https://github.com/langchain-ai/langchain/issues/6079 | https://github.com/langchain-ai/langchain/pull/6248 | 92f05a67a44c5d2a7a60280d7083cb96f3685371 | ba90e3c990d21128c67a0ca07e3261a38e579853 | 2023-06-13T05:40:52Z | python | 2023-06-19T00:47:10Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,039 | ["langchain/llms/openai.py"] | Make modelname_to_contextsize as a staticmethod to use it without create an object | ### Feature request
Make [modelname_to_contextsize](https://github.com/hwchase17/langchain/blob/289e9aeb9d122d689d68b2e77236ce3dfcd606a7/langchain/llms/openai.py#L503) a staticmethod so it can be used without creating an object.
### Motivation
When using `ChatOpenAI` or `AzureChatOpenAI`, calling `modelname_to_contextsize` requires creating an `OpenAI` or `AzureOpenAI` object even though we never otherwise use it.
For example, llama-index uses [modelname_to_contextsize](https://github.com/jerryjliu/llama_index/blob/f614448a045788c9c5c9a774f407a992ae1f7743/llama_index/llm_predictor/base.py#L42) to get the context size, but it raises an error if we use `AzureOpenAI` without setting `OPENAI_API_TOKEN`.
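A minimal sketch of the proposed change (illustrative mapping values):
```python
class BaseOpenAI:
    @staticmethod
    def modelname_to_contextsize(modelname: str) -> int:
        model_token_mapping = {"gpt-3.5-turbo": 4096, "text-davinci-003": 4097}
        return model_token_mapping.get(modelname, 4097)

# callable without an instance, hence without any API key:
print(BaseOpenAI.modelname_to_contextsize("gpt-3.5-turbo"))  # 4096
```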
### Your contribution
#6040 | https://github.com/langchain-ai/langchain/issues/6039 | https://github.com/langchain-ai/langchain/pull/6040 | 427551eabf32e0c9fa4428dcfad5fed86f99bbdf | cdd1d78bf2a383972af15921611a06e7efe53f93 | 2023-06-12T10:23:07Z | python | 2023-06-17T16:13:08Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,027 | ["langchain/utilities/arxiv.py", "tests/integration_tests/utilities/test_arxiv.py"] | ArxivAPIWrapper | The documentation says:
> It limits the Document content by doc_content_chars_max.
> Set doc_content_chars_max=None if you don't want to limit the content size.
But the declared type of `int` prevents this from being set to `None`:
https://github.com/hwchase17/langchain/blob/289e9aeb9d122d689d68b2e77236ce3dfcd606a7/langchain/utilities/arxiv.py#LL41C5-L41C38
> ValidationError: 1 validation error for ArxivAPIWrapper
> doc_content_chars_max
> none is not an allowed value (type=type_error.none.not_allowed)
Can you change that?
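For illustration, a minimal sketch of the type change (assuming `Optional[int]` is the intent):
```python
from typing import Optional
from pydantic import BaseModel

class ArxivAPIWrapper(BaseModel):
    doc_content_chars_max: Optional[int] = 4000  # None would disable truncation

ArxivAPIWrapper(doc_content_chars_max=None)  # would no longer raise a ValidationError
```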
In addition, can you also expose this parameter to the `ArxivLoader`?
Thank you! | https://github.com/langchain-ai/langchain/issues/6027 | https://github.com/langchain-ai/langchain/pull/6063 | a9b97aa6f4f0039804014192345f93612fef93be | b01cf0dd54bf078e348471a038842b82db370d66 | 2023-06-12T05:30:46Z | python | 2023-06-16T05:16:42Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,889 | ["langchain/llms/vertexai.py", "langchain/utilities/vertexai.py"] | When initializing VertexAI(), all passed parameters get ignored | ### System Info
langchain=0.0.194
python=3.11.3
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run:
`VertexAI(project="my_project_name")`
### Expected behavior
The client will connect to the supplied project_id | https://github.com/langchain-ai/langchain/issues/5889 | https://github.com/langchain-ai/langchain/pull/5891 | 63fcf41bea5222f64b1c9a822f08cec9e55aa619 | 0eb1bc1a0245547316fe96ac8f86b0e67acb524f | 2023-06-08T16:06:31Z | python | 2023-06-09T06:15:22Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,861 | ["libs/community/langchain_community/adapters/openai.py"] | KeyError 'content' | ### System Info
Langchain version 165
Python 3.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I call the LLM with `llm.generate(xxx)` in my code.
We are connected to the Azure OpenAI Service, and strangely enough, in a production environment, the following error is occasionally returned:
`File \"/usr/local/lib/python3.9/site-packages/langchain/chat_models/openai.py\", line 75, in _convert_dict_to_message return AIMessage( content=_dict[\"content\"]) KeyError: 'content'`
I checked the LangChain source code; the error occurs in this piece of code when it cannot find the `content` element. When I take the same message and retry locally, the message body is normal:
``` python
def _convert_dict_to_message(_dict: dict) -> BaseMessage:
role = _dict["role"]
if role == "user":
return HumanMessage(content=_dict["content"])
elif role == "assistant":
return AIMessage(content=_dict["content"])
elif role == "system":
return SystemMessage(content=_dict["content"])
else:
return ChatMessage(content=_dict["content"], role=role)
```
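A hedged sketch of a more defensive conversion (assumption: an empty string is an acceptable fallback for assistant messages):
```python
from langchain.schema import AIMessage

def convert_assistant_dict(_dict: dict) -> AIMessage:
    # tolerate responses where "content" is missing (e.g. filtered/partial replies)
    return AIMessage(content=_dict.get("content", ""))
```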
Suggestions for fixing:
1. When there is an error, could the error log be more detailed?
2. Could a method be provided that returns only the raw response, so the caller can handle it themselves?
### Expected behavior
should have no error | https://github.com/langchain-ai/langchain/issues/5861 | https://github.com/langchain-ai/langchain/pull/14765 | b0588774f142e00d24c6852077a57b56e3888022 | 5642132c0c615ecd0984d5e9c45ef6076ccc69d2 | 2023-06-08T03:09:03Z | python | 2023-12-20T06:17:23Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,835 | ["docs/modules/memory/examples/dynamodb_chat_message_history.ipynb", "langchain/memory/chat_message_histories/dynamodb.py"] | Support for the AWS endpoint URL in the DynamoDBChatMessageHistory | ### Feature request
I propose adding the possibility of specifying the AWS endpoint URL in `DynamoDBChatMessageHistory`, so that it is possible to target not only the AWS cloud services but also a local installation.
### Motivation
Specifying the endpoint URL, which is normally not done when addressing the cloud services, is very helpful when targeting a local instance (like [Localstack](https://localstack.cloud/)) for running local tests.
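A sketch of the intended usage (the `endpoint_url` parameter is the proposed addition):
```python
from langchain.memory import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(
    table_name="SessionTable",
    session_id="session-1",
    endpoint_url="http://localhost:4566",  # proposed parameter, e.g. a Localstack endpoint
)
```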
### Your contribution
I am providing this PR for the implementation: https://github.com/hwchase17/langchain/pull/5836/files | https://github.com/langchain-ai/langchain/issues/5835 | https://github.com/langchain-ai/langchain/pull/5836 | 0eb1bc1a0245547316fe96ac8f86b0e67acb524f | db7ef635c0e061fcbab2f608ccc60af15fc5585d | 2023-06-07T14:01:56Z | python | 2023-06-09T06:21:11Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,822 | ["langchain/embeddings/openai.py"] | skip openai params when embedding | ### System Info
[email protected]
I upgraded my langchain lib by executing `pip install -U langchain`; the version is 0.0.192. But I found that `openai.api_base` is not working. I use the Azure OpenAI service as the OpenAI backend, so `openai.api_base` is very important for me. I compared tag/0.0.192 and tag/0.0.191 and figured out that:

the openai params were moved inside the `_invocation_params` function and are used in some OpenAI invocations:


but some cases are still not covered, like:

### Who can help?
@hwchase17 I have debugged langchain and made a PR; please review it: https://github.com/hwchase17/langchain/pull/5821. Thanks!
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. pip install -U langchain
2. exeucte code next:
```python
from langchain.embeddings import OpenAIEmbeddings
def main():
embeddings = OpenAIEmbeddings(
openai_api_key="OPENAI_API_KEY", openai_api_base="OPENAI_API_BASE",
)
text = "This is a test document."
query_result = embeddings.embed_query(text)
print(query_result)
if __name__ == "__main__":
main()
```
### Expected behavior
same effect as [email protected], | https://github.com/langchain-ai/langchain/issues/5822 | https://github.com/langchain-ai/langchain/pull/5821 | b3ae6bcd3f42ec85ee65eb29c922ab22a17a0210 | 5a207cce8f026e32c93bf271f80b73570d4b2844 | 2023-06-07T08:36:23Z | python | 2023-06-07T14:32:57Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,807 | ["tests/integration_tests/vectorstores/test_faiss.py"] | Issue: Integration tests fail for faiss vector store | ### Issue you'd like to raise.
Integration tests for faiss vector store fail when run.
It appears that the tests are not in sync with the module implementation.
Command: `poetry run pytest tests/integration_tests/vectorstores/test_faiss.py`
Results summary:
```
======================================================= short test summary info =======================================================
FAILED tests/integration_tests/vectorstores/test_faiss.py::test_faiss_local_save_load - FileExistsError: [Errno 17] File exists: '/var/folders/nm/q080zph50yz4mcc7_vcvdcy00000gp/T/tmpt6hov952'
FAILED tests/integration_tests/vectorstores/test_faiss.py::test_faiss_similarity_search_with_relevance_scores - TypeError: __init__() got an unexpected keyword argument 'normalize_score_fn'
FAILED tests/integration_tests/vectorstores/test_faiss.py::test_faiss_invalid_normalize_fn - TypeError: __init__() got an unexpected keyword argument 'normalize_score_fn'
FAILED tests/integration_tests/vectorstores/test_faiss.py::test_missing_normalize_score_fn - Failed: DID NOT RAISE <class 'ValueError'>
=============================================== 4 failed, 6 passed, 2 warnings in 0.70s ===============================================
```
### Suggestion:
Correct tests/integration_tests/vectorstores/test_faiss.py to be in sync with langchain.vectorstores.faiss | https://github.com/langchain-ai/langchain/issues/5807 | https://github.com/langchain-ai/langchain/pull/6281 | ddd518a161f85a89f5c2dc0b8f262aba11cb3869 | 6aa7b04f7978e3783e386fd6714d9e1d44b3f5a2 | 2023-06-07T03:49:08Z | python | 2023-06-19T00:25:49Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,720 | ["langchain/llms/gpt4all.py"] | AttributeError: 'GPT4All' object has no attribute 'model_type' (langchain 0.0.190) | ### System Info
Hi, this is related to #5651 but (on my machine ;) ) the issue is still there.
## Versions
* Intel Mac with latest OSX
* Python 3.11.2
* langchain 0.0.190, includes fix for #5651
* ggml-mpt-7b-instruct.bin, downloaded at June 5th from https://gpt4all.io/models/ggml-mpt-7b-instruct.bin
### Who can help?
@pakcheera @bwv988 First of all: thanks for the report and the fix :). Did this issue disappear on your machines?
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
## Error message
```shell
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /Users/christoph/src/sandstorm-labs/development-tools/ai-support-chat/gpt4all/chat.py:30 in │
│ <module> │
│ │
│ 27 │ model_name="all-mpnet-base-v2") │
│ 28 │
│ 29 # see https://gpt4all.io/models/ggml-mpt-7b-instruct.bin │
│ ❱ 30 llm = GPT4All( │
│ 31 │ model="./ggml-mpt-7b-instruct.bin", │
│ 32 │ #backend='gptj', │
│ 33 │ top_p=0.5, │
│ │
│ in pydantic.main.BaseModel.__init__:339 │
│ │
│ in pydantic.main.validate_model:1102 │
│ │
│ /Users/christoph/src/sandstorm-labs/development-tools/ai-support-chat/gpt4all/venv/lib/python3.1 │
│ 1/site-packages/langchain/llms/gpt4all.py:156 in validate_environment │
│ │
│ 153 │ │ if values["n_threads"] is not None: │
│ 154 │ │ │ # set n_threads │
│ 155 │ │ │ values["client"].model.set_thread_count(values["n_threads"]) │
│ ❱ 156 │ │ values["backend"] = values["client"].model_type │
│ 157 │ │ │
│ 158 │ │ return values │
│ 159 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
```
As you can see, _gpt4all.py:156_ contains the change from the fix for #5651.
## Code
```python
from langchain.llms import GPT4All
# see https://gpt4all.io/models/ggml-mpt-7b-instruct.bin
llm = GPT4All(
model="./ggml-mpt-7b-instruct.bin",
#backend='gptj',
top_p=0.5,
top_k=0,
temp=0.1,
repeat_penalty=0.8,
n_threads=12,
n_batch=16,
n_ctx=2048)
```
FYI I am following [this example in a blog post](https://dev.to/akshayballal/beyond-openai-harnessing-open-source-models-to-create-your-personalized-ai-companion-1npb).
### Expected behavior
I expect an instance of _GPT4All_ instead of a stacktrace. | https://github.com/langchain-ai/langchain/issues/5720 | https://github.com/langchain-ai/langchain/pull/5743 | d0d89d39efb5f292f72e70973f3b70c4ca095047 | 74f8e603d942ca22ed07bf0ea23a57ed67b36b2c | 2023-06-05T09:44:08Z | python | 2023-06-05T19:45:29Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,713 | ["langchain/llms/bedrock.py"] | Inference parameters for Bedrock titan models not working | ### System Info
LangChain version 0.0.190
Python 3.9
### Who can help?
@seanpmorgan @3coins
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Tried the following to provide the `temperature` and `maxTokenCount` parameters when using the `Bedrock` class for the `amazon.titan-tg1-large` model.
```
import boto3
import botocore
from langchain.chains import LLMChain
from langchain.llms.bedrock import Bedrock
from langchain.prompts import PromptTemplate
from langchain.embeddings import BedrockEmbeddings
prompt = PromptTemplate(
input_variables=["text"],
template="{text}",
)
llm = Bedrock(model_id="amazon.titan-tg1-large")
llmchain = LLMChain(llm=llm, prompt=prompt)
llm.model_kwargs = {'temperature': 0.3, "maxTokenCount": 512}
text = "Write a blog explaining Generative AI in ELI5 style."
response = llmchain.run(text=text)
print(f"prompt={text}\n\nresponse={response}")
```
This results in the following exception
```
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: The provided inference configurations are invalid
```
This happens because https://github.com/hwchase17/langchain/blob/d0d89d39efb5f292f72e70973f3b70c4ca095047/langchain/llms/bedrock.py#L20 passes these params as top-level key/value pairs rather than putting them in the `textGenerationConfig` structure that the Titan model expects.
The proposed fix is as follows:
```
def prepare_input(
    cls, provider: str, prompt: str, model_kwargs: Dict[str, Any]
) -> Dict[str, Any]:
    input_body = {**model_kwargs}
    if provider == "anthropic" or provider == "ai21":
        input_body["prompt"] = prompt
    elif provider == "amazon":
        input_body = dict()
        input_body["inputText"] = prompt
        input_body["textGenerationConfig"] = {**model_kwargs}
    else:
        input_body["inputText"] = prompt

    if provider == "anthropic" and "max_tokens_to_sample" not in input_body:
        input_body["max_tokens_to_sample"] = 50

    return input_body
```
### Expected behavior
Support the inference config parameters. | https://github.com/langchain-ai/langchain/issues/5713 | https://github.com/langchain-ai/langchain/pull/5896 | 767fa91eae3455050d85a594fededddff3311dbe | a6ebffb69504576a805f3b9f09732ad344751b89 | 2023-06-05T06:48:57Z | python | 2023-06-08T21:16:01Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,699 | ["langchain/document_loaders/sitemap.py"] | Sitemap filters not working due to lack of stripping whitespace and newlines | https://github.com/hwchase17/langchain/blob/8d9e9e013ccfe72d839dcfa37a3f17c340a47a88/langchain/document_loaders/sitemap.py#L83
if
```
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xhtml="http://www.w3.org/1999/xhtml">
<url>
<loc>
https://tatum.com/
</loc>
<xhtml:link rel="alternate" hreflang="x-default" href="https://tatum.com/"/>
</url>
```
then
`re.match(r, loc.text) for r in self.filter_urls`
The comparison against the filters here will be made against a value that still includes those whitespaces and newlines.
What worked for me:
```python
def parse_sitemap(self, soup: Any) -> List[dict]:
    """Parse sitemap xml and load into a list of dicts."""
    els = []
    for url in soup.find_all("url"):
        loc = url.find("loc")
        if not loc:
            continue
        loc_text = loc.text.strip()
        if self.filter_urls and not any(
            re.match(r, loc_text) for r in self.filter_urls
        ):
            continue
```
| https://github.com/langchain-ai/langchain/issues/5699 | https://github.com/langchain-ai/langchain/pull/5728 | 98dd6d068a67c2ac1c14785ea189c2e4c8882bf5 | 2dcda8a8aca4c427ff5716e6ac37ab0c24a7f2e5 | 2023-06-04T22:49:54Z | python | 2023-06-05T23:33:55Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,651 | ["langchain/llms/gpt4all.py"] | AttributeError: 'LLModel' object has no attribute 'model_type' (gpt4all) | ### System Info
run on docker image with python:3.11.3-bullseye in MAC m1
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My docker image
```
FROM python:3.11.3-bullseye
WORKDIR /src
COPY src /src
RUN python -m pip install --upgrade pip
RUN apt-get update -y
RUN apt install cmake -y
RUN git clone --recurse-submodules https://github.com/nomic-ai/gpt4all
RUN cd gpt4all/gpt4all-backend/ && mkdir build && cd build && cmake .. && cmake --build . --parallel
RUN cd gpt4all/gpt4all-bindings/python && pip3 install -e .
RUN pip install -r requirements.txt
RUN chmod +x app/start_app.sh
EXPOSE 8501
ENTRYPOINT ["/bin/bash"]
CMD ["app/start_app.sh"]
```
where `start_app.sh` runs a Python file that has this line:
`llm = GPT4All(model=llm_path, backend='gptj', verbose=True, streaming=True, n_threads=os.cpu_count(),temp=temp)`
`llm_path` is the path of the gpt4all model.
### Expected behavior
Got this error when trying to use gpt4all:
```
AttributeError: 'LLModel' object has no attribute 'model_type'
Traceback:
File "/src/app/utils.py", line 20, in get_chain
llm = GPT4All(model=llm_path, backend='gptj', verbose=True, streaming=True, n_threads=os.cpu_count(),temp=temp)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 1102, in validate_model
values = validator(cls_, values)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/llms/gpt4all.py", line 156, in validate_environment
values["backend"] = values["client"].model.model_type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``` | https://github.com/langchain-ai/langchain/issues/5651 | https://github.com/langchain-ai/langchain/pull/5657 | 6a3ceaa3771a725046af3c02cf4c15a3e18ec54a | 8fea0529c1be9c9f5308a9b5a51f8381067a269a | 2023-06-03T10:37:42Z | python | 2023-06-04T14:21:16Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,638 | ["docs/integrations/bedrock.md", "docs/modules/models/llms/integrations/bedrock.ipynb"] | DOC: "Amazon Bedrock" is not sorted in Integrations section of nav | ### Issue with current documentation:
In the left nav of the docs, "Amazon Bedrock" is alphabetized after "Beam integration for langchain" and before "Cerebrium AI", not with the rest of the A-named integrations.
<img width="254" alt="image" src="https://github.com/hwchase17/langchain/assets/93281816/20836ca0-3946-4614-8b44-4dcf67e27f7e">
### Idea or request for content:
Retitle the page to "Bedrock" so that its URL remains unchanged and the nav is properly sorted. | https://github.com/langchain-ai/langchain/issues/5638 | https://github.com/langchain-ai/langchain/pull/5639 | 6e25e650859fc86365252e0bdf8fd2223e5dec1c | 6c11f940132a26d7dc967d213d23d093ddb90b14 | 2023-06-02T23:41:12Z | python | 2023-06-04T21:39:25Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,627 | ["docs/modules/agents.rst", "docs/modules/chains.rst", "docs/modules/indexes.rst", "docs/modules/memory.rst", "docs/modules/models.rst", "docs/modules/prompts.rst", "docs/modules/prompts/chat_prompt_template.ipynb"] | DOC: repetitive parts in Modules pages | ### Issue with current documentation:
Pages in Modules: Models, Prompts, Memory, ...
They all have repeated parts. See the picture below.

### Idea or request for content:
The whole "Go Deeper" section can be removed and instead, the links from removed items added to the above items. For example "Prompt Templates" link is added to the "LLM Prompt Templates" in the above text. Etc.
This significantly decreases the size of the page and improves user experience. No more repetitive items.
_No response_ | https://github.com/langchain-ai/langchain/issues/5627 | https://github.com/langchain-ai/langchain/pull/5116 | bc875a9df16d17db531f9e363c18ed8b5ebbc047 | 95c6ed0568e808626ffb2ee6490b770a4ac9c508 | 2023-06-02T18:24:00Z | python | 2023-06-03T21:44:32Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,623 | ["langchain/document_loaders/__init__.py"] | cannot import name 'FigmaFileLoader' | ### System Info
langchain==0.0.189
os:windows11
python=3.10.11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import FigmaFileLoader
### Expected behavior
Expected:
the module loads
Error:
`ImportError: cannot import name 'FigmaFileLoader' from 'langchain.document_loaders' (C:\Users\xxx\AppData\Local\miniconda3\envs\xxx\lib\site-packages\langchain\document_loaders\__init__.py)`
Comments:
I checked `langchain\document_loaders\__init__.py` and there is no reference to `FigmaFileLoader` there.
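As a workaround (assuming only the re-export from `__init__.py` is missing), importing from the submodule directly may work:
```python
from langchain.document_loaders.figma import FigmaFileLoader
```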
| https://github.com/langchain-ai/langchain/issues/5623 | https://github.com/langchain-ai/langchain/pull/5636 | 20ec1173f40a13cba73d79cc0efa4653d2489d65 | 9a7488a5ce65aaf727464f02a10811719b517f11 | 2023-06-02T16:39:41Z | python | 2023-06-02T21:58:41Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,614 | ["langchain/text_splitter.py", "tests/unit_tests/test_text_splitter.py"] | MarkdownTextSplitter: multiple repeat at position 4 (line 3, column 2) | ### System Info
langchain 0.0.188
python 3.8.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.docstore.document import Document
from langchain.text_splitter import MarkdownTextSplitter
# of course this is part of a larger markdown document, but this is the minimal string to reproduce
txt = "\n\n***\n\n"
doc = Document(page_content=txt)
markdown_splitter = MarkdownTextSplitter(chunk_size=1000, chunk_overlap=0)
splitted = markdown_splitter.split_documents([doc])
```
```
Traceback (most recent call last):
File "t.py", line 9, in <module>
splitted = markdown_splitter.split_documents([doc])
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 101, in split_documents
return self.create_documents(texts, metadatas=metadatas)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 88, in create_documents
for chunk in self.split_text(text):
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 369, in split_text
return self._split_text(text, self._separators)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 346, in _split_text
splits = _split_text(text, separator, self._keep_separator)
File "/home/richard/.local/lib/python3.8/site-packages/langchain/text_splitter.py", line 37, in _split_text
_splits = re.split(f"({separator})", text)
File "/usr/lib/python3.8/re.py", line 231, in split
return _compile(pattern, flags).split(string, maxsplit)
File "/usr/lib/python3.8/re.py", line 304, in _compile
p = sre_compile.compile(pattern, flags)
File "/usr/lib/python3.8/sre_compile.py", line 764, in compile
p = sre_parse.parse(p, flags)
File "/usr/lib/python3.8/sre_parse.py", line 948, in parse
p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0)
File "/usr/lib/python3.8/sre_parse.py", line 443, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
File "/usr/lib/python3.8/sre_parse.py", line 834, in _parse
p = _parse_sub(source, state, sub_verbose, nested + 1)
File "/usr/lib/python3.8/sre_parse.py", line 443, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
File "/usr/lib/python3.8/sre_parse.py", line 671, in _parse
raise source.error("multiple repeat",
re.error: multiple repeat at position 4 (line 3, column 2)
```
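The `multiple repeat` error suggests the `***` separator is interpolated unescaped into a regex; a sketch of one possible fix using `re.escape` (an assumption about where the fix belongs):
```python
import re

separator = "***"
text = "\n\n***\n\n"
# re.split(f"({separator})", text) fails: "*" is a regex quantifier
print(re.split(f"({re.escape(separator)})", text))  # ['\n\n', '***', '\n\n']
```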
### Expected behavior
splitted contains splitted markdown and no errors occur | https://github.com/langchain-ai/langchain/issues/5614 | https://github.com/langchain-ai/langchain/pull/5625 | 25487fa5ee38710d2f0edd0672fdd83557b3d0da | d5b160821641df77df447e6dfce21b58fbb13d75 | 2023-06-02T12:20:41Z | python | 2023-06-05T23:40:26Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,601 | ["langchain/agents/chat/output_parser.py", "langchain/agents/mrkl/output_parser.py", "tests/unit_tests/agents/test_mrkl.py"] | OutputParsers currently allows model to hallucinate the output of an action | ### System Info
The MRKL and chat output parsers currently will allow an LLM response to generate a valid action, as well as hallucinate a "final answer" based on that response.
[Logic](https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/output_parser.py#L15)
This is because the parser is returning an AgentFinish object immediately if `FINAL_ANSWER_ACTION` is in the text, rather than checking if the text also includes a valid action. I had this appear when using the Python agent, where the LLM returned a code block as the action, but simultaneously hallucinated the output and a final answer in one response. (In this case, it was quite obvious because the code block referred to a database which does not exist)
I'm not sure if there are any situations where it is desired that a response should output an action as well as an answer?
If this is not desired behaviour, it can easily be fixed by raising an exception when a response includes both a valid action and a "final answer", rather than returning immediately from either condition.
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
````py
from langchain.agents.chat.output_parser import ChatOutputParser
parser = ChatOutputParser()
valid_action = """Action:
```
{
"action": "Python REPL",
"action_input": "print(\'Hello world!\')"
}
```
"""
final_answer = """Final Answer: Goodbye world!"""
print(parser.parse(valid_action)) # outputs an AgentAction
print(parser.parse(final_answer)) # outputs an AgentFinish
print(parser.parse(valid_action + final_answer)) # outputs an AgentFinish, should probably raise an Exception
````
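A sketch of the proposed guard (simplified detection; the real parser's matching is more involved):
```python
def check_for_conflict(text: str) -> None:
    # hypothetical: refuse outputs containing both an action and a final answer
    if "Final Answer:" in text and "Action:" in text:
        raise ValueError(
            f"Output contains both a parse-able action and a final answer: {text}"
        )
```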
### Expected behavior
An exception should likely be raised if an LLM returns a response that both includes a final answer, and a parse-able action, rather than skipping the action and returning the final answer, since it probably hallucinated an output/observation from the action. | https://github.com/langchain-ai/langchain/issues/5601 | https://github.com/langchain-ai/langchain/pull/5609 | c112d7334d6cac3296b877250d3f575fbfd46da2 | 26ec845921425d99f222b6d21bd58eda36b2f49b | 2023-06-02T08:01:50Z | python | 2023-06-04T21:40:49Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,582 | ["langchain/vectorstores/chroma.py", "tests/integration_tests/vectorstores/test_chroma.py"] | Chroma.update_document bug | ### System Info
update_document only embeds a single document, but the single page_content string is cast to a list before embedding, resulting in a per-character embedding not a per-document embedding.
https://github.com/hwchase17/langchain/blob/4c572ffe959957b515528a9036b374f56cef027f/langchain/vectorstores/chroma.py#LL359C70-L359C70
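A self-contained illustration of the bug, using a fake embedding function:
```python
def embed_documents(texts):  # stand-in for the real embedding function
    return [[float(len(t))] for t in texts]

text = "updated foo"
print(len(embed_documents(list(text))))  # 11 - one embedding per character (the bug)
print(len(embed_documents([text])))      # 1  - one embedding for the whole document
```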
### Who can help?
Related to @dev2049 vectorstores
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.docstore.document import Document
from langchain.vectorstores import Chroma
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings
# Initial document content and id
initial_content = "foo"
document_id = "doc1"
# Create an instance of Document with initial content and metadata
original_doc = Document(page_content=initial_content, metadata={"page": "0"})
# Initialize a Chroma instance with the original document
docsearch = Chroma.from_documents(
collection_name="test_collection",
documents=[original_doc],
embedding=FakeEmbeddings(),
ids=[document_id],
)
# Define updated content for the document
updated_content = "updated foo"
# Create a new Document instance with the updated content and the same id
updated_doc = Document(page_content=updated_content, metadata={"page": "0"})
# Update the document in the Chroma instance
docsearch.update_document(document_id=document_id, document=updated_doc)
docsearch_peek = docsearch._collection.peek()
new_embedding = docsearch_peek['embeddings'][docsearch_peek['ids'].index(document_id)]
assert new_embedding \
== docsearch._embedding_function.embed_documents([updated_content[0]])[0] \
== docsearch._embedding_function.embed_documents(list(updated_content))[0] \
== docsearch._embedding_function.embed_documents(['u'])[0]
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
```
### Expected behavior
The last assertion should be true
```
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
``` | https://github.com/langchain-ai/langchain/issues/5582 | https://github.com/langchain-ai/langchain/pull/5584 | 3c6fa9126aa6422084e8c064eda06292d40ac517 | c5a7a85a4e6cd307f83b2e455d466722d75940b2 | 2023-06-01T23:13:30Z | python | 2023-06-02T18:12:48Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,545 | ["langchain/graphs/neo4j_graph.py"] | Issue: Improve Error Messaging When APOC Procedures Fail in Neo4jGraph | ### Issue you'd like to raise.
In the current implementation, when an APOC procedure fails, a generic error message is raised stating: "Could not use APOC procedures. Please install the APOC plugin in Neo4j." This message can lead to user confusion as it suggests the APOC plugin is not installed when in reality it may be installed but not correctly configured or permitted to run certain procedures.
This issue is encountered specifically when the refresh_schema function calls apoc.meta.data(). The function apoc.meta.data() isn't allowed to run under default configurations in the Neo4j database, thus leading to the mentioned error message.
Here is the code snippet where the issue arises:
```
# Set schema
try:
self.refresh_schema()
except neo4j.exceptions.ClientError:
raise ValueError(
"Could not use APOC procedures. "
"Please install the APOC plugin in Neo4j."
)
```
### Suggestion:
To improve the user experience, I propose that the error message should be made more specific. Instead of merely advising users to install the APOC plugin, it would be beneficial to indicate that certain procedures may not be configured or whitelisted to run by default and to guide the users to check their configurations.
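Illustrative wording only (mirroring the snippet above; the configuration key named is just an example):
```python
try:
    self.refresh_schema()
except neo4j.exceptions.ClientError:
    raise ValueError(
        "Could not use APOC procedures. Ensure the APOC plugin is installed "
        "and that apoc.meta.data() is allowed to run, e.g. via "
        "dbms.security.procedures.allowlist in neo4j.conf."
    )
```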
I believe this will save users time when troubleshooting and will reduce the potential for confusion. | https://github.com/langchain-ai/langchain/issues/5545 | https://github.com/langchain-ai/langchain/pull/5547 | 33ea606f455f195d74f09ac654e03da8850ecb9b | 3e45b8306555a48b5838ed7dd33b1a4c615bdd18 | 2023-06-01T08:04:16Z | python | 2023-06-03T23:56:39Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,535 | ["docs/modules/indexes/vectorstores/examples/tigris.ipynb", "langchain/vectorstores/__init__.py", "langchain/vectorstores/tigris.py", "poetry.lock", "pyproject.toml"] | Add Tigris vectorstore for vector search | ### Feature request
Support Tigris as a vector search backend
### Motivation
Tigris is a Serverless NoSQL Database and Search Platform, and it has its own [vector search](https://www.tigrisdata.com/docs/concepts/vector-search/python/) product. It will be a great option for users who want an integrated database and search product.
### Your contribution
I can submit a PR. | https://github.com/langchain-ai/langchain/issues/5535 | https://github.com/langchain-ai/langchain/pull/5703 | 38dabdbb3a900ae60e4b503cd48c26903b2d4673 | 233b52735e77121849b0fc9f8eaf6170222f0ac7 | 2023-06-01T03:18:00Z | python | 2023-06-06T03:39:16Z
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,483 | ["langchain/document_loaders/web_base.py"] | [SSL: CERTIFICATE_VERIFY_FAILED] while load from SitemapLoader | ### System Info
langchain: 0.0.181
platform: windows
python: 3.11.3
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
site_loader = SitemapLoader(web_path="https://help.glueup.com/sitemap_index.xml")
docs = site_loader.load()
print(docs[0])
# ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1002)
```
### Expected behavior
print the first doc | https://github.com/langchain-ai/langchain/issues/5483 | https://github.com/langchain-ai/langchain/pull/6256 | 10bff4ecc420317a86043a8f0287363618be77e6 | b2b9ded12facf3ae205eb4b1cbb455eca6af8977 | 2023-05-31T07:52:33Z | python | 2023-06-19T01:34:18Z
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,465 | ["langchain/document_loaders/bigquery.py", "poetry.lock", "pyproject.toml"] | Google BigQuery Loader doesn't take credentials | ### Feature request
I would like to be able to provide credentials to the bigquery.client object
### Motivation
I cannot access protected datasets without use of a service account or other credentials
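A sketch of the intended usage once a credentials parameter exists (the parameter name is an assumption):
```python
from google.oauth2 import service_account
from langchain.document_loaders import BigQueryLoader

credentials = service_account.Credentials.from_service_account_file("service-account.json")
loader = BigQueryLoader(query="SELECT * FROM my_dataset.my_table", credentials=credentials)
docs = loader.load()
```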
### Your contribution
I will submit a PR. | https://github.com/langchain-ai/langchain/issues/5465 | https://github.com/langchain-ai/langchain/pull/5466 | eab4b4ccd7e1ca4dcfdf4c400250494e4503fcb1 | 199cc700a344a2b15dff3a8924746a5ceb1aad7e | 2023-05-30T21:18:13Z | python | 2023-05-30T23:25:22Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,456 | ["langchain/tools/base.py", "tests/unit_tests/tools/test_base.py"] | Tools: Inconsistent callbacks/run_manager parameter | ### System Info
MacOS Ventura 13.3.1 (a)
python = "^3.9"
langchain = "0.0.185"
### Who can help?
@agola11 @vowelparrot
### Related Components
- Agents / Agent Executors
- Tools / Toolkits
- Callbacks/Tracing
### Reproduction
I want to use the CallbackManager to save some info within a tool. So, as per the [`create_schema_from_function`](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L99) that is used to create the tool schema, I define the function as:
```python
def get_list_of_products(
self, profile_description: str, run_manager: CallbackManagerForToolRun
):
```
Nonetheless, once the tool is run, the [expected parameter](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L493) in the function's signature is `callbacks`:
```python
new_argument_supported = signature(self.func).parameters.get("callbacks")
```
So the tool can't run, with the error being:
```bash
TypeError: get_list_of_products() missing 1 required positional argument: 'run_manager'
```
This behavior applies to Structured tool and Tool.
### Expected behavior
Either the expected function parameter is set to `run_manager` to replicate the behavior of the [`run` function](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L256) from the `BaseTool` or a different function is used instead of [`create_schema_from_function`](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L99) to create a tool's schema expecting the `callbacks` parameter. | https://github.com/langchain-ai/langchain/issues/5456 | https://github.com/langchain-ai/langchain/pull/6483 | b4fe7f3a0995cc6a0111a7e71347eddf2d61f132 | 980c8651743b653f994ad6b97a27b0fa31ee92b4 | 2023-05-30T17:09:02Z | python | 2023-06-23T08:48:27Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,433 | ["docs/modules/agents/streaming_stdout_final_only.ipynb", "langchain/callbacks/streaming_stdout_final_only.py"] | FinalStreamingStdOutCallbackHandler not working with ChatOpenAI LLM | ### System Info
Hi :)
I tested the new callback stream handler `FinalStreamingStdOutCallbackHandler` and noticed an issue with it.
I copied the code from the documentation and made just one change - use `ChatOpenAI` instead of `OpenAI`
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks.streaming_stdout_final_only import FinalStreamingStdOutCallbackHandler
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(streaming=True, callbacks=[FinalStreamingStdOutCallbackHandler()], temperature=0)  # here is my only change
tools = load_tools(["wikipedia", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False)
agent.run("It's 2023 now. How many years ago did Konrad Adenauer become Chancellor of Germany.")
```
### Expected behavior
The code above returns the response from the agent but does not stream it. In my project, I must use the `ChatOpenAI` LLM, so I would appreciate it if someone could fix this issue, please. | https://github.com/langchain-ai/langchain/issues/5433 | https://github.com/langchain-ai/langchain/pull/5497 | 1f4abb265a9fd6c520835c3bebe8243b077495b5 | 44ad9628c9828e220540dd77680611741a6ed087 | 2023-05-30T10:51:06Z | python | 2023-06-03T22:05:58Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,423 | ["langchain/agents/mrkl/output_parser.py", "tests/unit_tests/agents/test_mrkl.py"] | SQLDatabaseToolkit doesn't work well with Postgresql, it will truncate the last double quotation marks in the SQL | ### System Info
Langchain: 0.0.184
Python: 3.10.9
Platform: Windows 10 with Jupyter lab
### Who can help?
@vowelparrot
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
SQLDatabaseToolkit works well if the SQL doesn't end with a double quotation mark; if it does, the toolkit truncates the trailing double quotation mark, resulting in an endless loop.
Below is the initial code snapshot.

And when I executed it.

The LLM generates the correct SQL, but the toolkit truncates the trailing double quotation mark.
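A plausible culprit is the quote-stripping in the MRKL output parser; a self-contained illustration (an assumption about the exact stripping logic):
```python
tool_input = 'SELECT AVG("Total") FROM "Invoice"'
print(tool_input.strip(" ").strip('"'))  # SELECT AVG("Total") FROM "Invoice  <- trailing " lost
```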
### Expected behavior
Won't truncate the last double quotation marks for PostgreSql. | https://github.com/langchain-ai/langchain/issues/5423 | https://github.com/langchain-ai/langchain/pull/5432 | c1807d84086c92d1aea2eb7be181204e72ae10d0 | 1d861dc37a63a41ae2e0983f2ee418efde968ce3 | 2023-05-30T04:02:36Z | python | 2023-05-30T19:58:47Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,400 | ["langchain/experimental/plan_and_execute/agent_executor.py", "langchain/experimental/plan_and_execute/executors/agent_executor.py"] | Add the ability to pass the prompt through to Executor Agents for enrichment during PlanAndExecute | ### Feature request
Add the ability to pass the original prompt through to the ExecutorAgent so that the original explicit context is not lost during a PlanAndExecute run.
### Motivation
PlanAndExecute agents can create a plan of steps dependent on context given in the original prompt. However, this context is lost after the plan is created and is being executed.
Often, the plan is formed in a way that refers to that prior context, losing information. For example, I gave the following prompt, and gave the agent access only to the PythonREPL tool:
```py
prompt = (
f"Task: Analyse the customer data available in the database with path '{db_path}'. Tell me the average "
"sales by month."
)
```
In the above example, `db_path` is a fully formed string which can be passed directly to `sqlalchemy.create_engine`.
The first step in the plan formed was: `Connect to the database using the given path`. This would ordinarily be fine, however, the context of the "given path" was lost, as it was not part of the reformed prompt passed to the executor. Optionally including the original prompt in the template should assist with this.
### Your contribution
I will be submitting a PR shortly with a proposed solution :) | https://github.com/langchain-ai/langchain/issues/5400 | https://github.com/langchain-ai/langchain/pull/5401 | ae2cf1f598360e1fc83839fdcd363378d663c936 | 1f4abb265a9fd6c520835c3bebe8243b077495b5 | 2023-05-29T13:19:30Z | python | 2023-06-03T21:59:09Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,361 | ["langchain/agents/agent_toolkits/openapi/planner.py"] | Validation Error importing OpenAPI planner when OpenAI credentials not in environment | ### System Info
Name: langchain, Version: 0.0.180
Name: openai, Version: 0.27.7
macOS Mojave 10.14.6
### Who can help?
@vowelparrot
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps:
1. Do _not_ load the OpenAI API key into the environment, with the intention of passing it as a parameter when instantiating the LLM
```
from dotenv import dotenv_values
openai_api_key = dotenv_values('.env')['OPENAI_API_KEY']
```
2. Load the planner:
```
from langchain.llms.openai import OpenAI
from langchain.agents.agent_toolkits.openapi import planner
```
### Expected behavior
A validation error should not be raised when importing the module.
We should be able to pass the OpenAI API key as an argument.
That is, the following should work:
```
from langchain.llms.openai import OpenAI
from langchain.agents.agent_toolkits.openapi import planner
llm = OpenAI(model_name="gpt-4", temperature=0.0, openai_api_key=openai_api_key)
```
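Until the import-time validation is removed, one workaround sketch (untested) is to populate the environment before importing the planner, since the validation appears to read it:

```python
# Workaround sketch: set the env var first, then import.
import os

os.environ["OPENAI_API_KEY"] = openai_api_key
from langchain.agents.agent_toolkits.openapi import planner
```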
| https://github.com/langchain-ai/langchain/issues/5361 | https://github.com/langchain-ai/langchain/pull/5380 | 6df90ad9fd1ee6d64e112d8d58f9524ca11b0757 | 14099f1b93401a68f531fc1a55c50c5872e720fa | 2023-05-28T08:18:12Z | python | 2023-05-29T13:22:35Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,337 | ["docs/modules/indexes/vectorstores/examples/mongodb_atlas_vector_search.ipynb", "langchain/vectorstores/__init__.py", "langchain/vectorstores/mongodb_atlas.py", "poetry.lock", "pyproject.toml", "tests/integration_tests/.env.example", "tests/integration_tests/vectorstores/test_mongodb_atlas.py"] | Add MongoDBAtlasVectorSearch vectorstore | ### Feature request
MongoDB Atlas is a fully managed DBaaS, powered by the MongoDB database. It also enables Lucene (collocated with the mongod process) for full-text search; this is known as Atlas Search. The PR should allow LangChain users to use the MongoDB Atlas Vector Search feature, where you can store your embeddings in MongoDB documents and create a Lucene vector index to perform a KNN search.
### Motivation
There is currently no way in Langchain to connect to MongoDB Atlas and perform a KNN search.
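A hypothetical usage sketch of what such an integration could look like; every LangChain name and parameter here is illustrative, since nothing exists yet:

```python
# Purely illustrative sketch: MongoDBAtlasVectorSearch does not exist yet.
from pymongo import MongoClient
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MongoDBAtlasVectorSearch

client = MongoClient("<ATLAS_CONNECTION_STRING>")
collection = client["my_db"]["my_collection"]
vectorstore = MongoDBAtlasVectorSearch.from_documents(
    docs, OpenAIEmbeddings(), collection=collection, index_name="default"
)
results = vectorstore.similarity_search("query text", k=4)
```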
### Your contribution
I am submitting a PR for this issue soon. | https://github.com/langchain-ai/langchain/issues/5337 | https://github.com/langchain-ai/langchain/pull/5338 | c4b502a47051f50c6e24b824d3db622748458d13 | a61b7f7e7c76ae8667e40cd29cfe30a3868d7dd8 | 2023-05-27T11:41:39Z | python | 2023-05-30T14:59:01Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,316 | ["langchain/embeddings/vertexai.py", "tests/integration_tests/embeddings/test_vertexai.py"] | VertexAIEmbeddings error when passing a list of length greater than 5. | ### System Info
google-cloud-aiplatform==1.25.0
langchain==0.0.181
python 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Any list with len > 5 will cause an error.
```python
from langchain.vectorstores import FAISS
from langchain.embeddings import VertexAIEmbeddings
text = ['text_1', 'text_2', 'text_3', 'text_4', 'text_5', 'text_6']
embeddings = VertexAIEmbeddings()
vectorstore = FAISS.from_texts(text, embeddings)
```
```python
InvalidArgument Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/google/api_core/grpc_helpers.py](https://localhost:8080/#) in error_remapped_callable(*args, **kwargs)
72 return callable_(*args, **kwargs)
73 except grpc.RpcError as exc:
---> 74 raise exceptions.from_grpc_error(exc) from exc
75
76 return error_remapped_callable
InvalidArgument: 400 5 instance(s) is allowed per prediction. Actual: 6
```
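A possible workaround sketch until batching is handled internally: chunk the texts into groups of at most five (the limit named in the error) before embedding:

```python
# Workaround sketch: respect the 5-instances-per-prediction limit manually.
embeddings = VertexAIEmbeddings()

def embed_in_batches(texts, batch_size=5):
    vectors = []
    for i in range(0, len(texts), batch_size):
        vectors.extend(embeddings.embed_documents(texts[i : i + batch_size]))
    return vectors

vectors = embed_in_batches(text)  # `text` is the list from the snippet above
```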
### Expected behavior
Expected to be able to successfully vectorize a larger list of items. Maybe implement a step to batch the inputs into groups of at most five before calling the API, as sketched above. | https://github.com/langchain-ai/langchain/issues/5316 | https://github.com/langchain-ai/langchain/pull/5325 | 3e164684234d3a51032b737dce2c25ba6cd3ec2d | c09f8e4ddc3be791bd0e8c8385ed1871bdd5d681 | 2023-05-26T20:31:56Z | python | 2023-05-29T13:57:41Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,304 | ["langchain/retrievers/document_compressors/cohere_rerank.py"] | CohereAPIError thrown when base retriever returns empty documents in ContextualCompressionRetriever using Cohere Rank | ### System Info
- 5.19.0-42-generic # 43~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Apr 21 16:51:08 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
- langchain==0.0.180
- Python 3.10.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Set up a retriever using any type of retriever (for example, I used Pinecone).
2. Pass it into the ContextualCompressionRetriever.
3. If the base retriever returns an empty list of documents,
4. it throws an error: **cohere.error.CohereAPIError: invalid request: list of documents must not be empty**
> File "/workspaces/example/.venv/lib/python3.10/site-packages/langchain/retrievers/contextual_compression.py", line 37, in get_relevant_documents
> compressed_docs = self.base_compressor.compress_documents(docs, query)
> File "/workspaces/example/.venv/lib/python3.10/site-packages/langchain/retrievers/document_compressors/cohere_rerank.py", line 57, in compress_documents
> results = self.client.rerank(
> File "/workspaces/example/.venv/lib/python3.10/site-packages/cohere/client.py", line 633, in rerank
> reranking = Reranking(self._request(cohere.RERANK_URL, json=json_body))
> File "/workspaces/example/.venv/lib/python3.10/site-packages/cohere/client.py", line 692, in _request
> self._check_response(json_response, response.headers, response.status_code)
> File "/workspaces/example/.venv/lib/python3.10/site-packages/cohere/client.py", line 642, in _check_response
> raise CohereAPIError(
> **cohere.error.CohereAPIError: invalid request: list of documents must not be empty**
The code looks like:
```python
retriever = vectorstore.as_retriever()
compressor = CohereRerank()
compression_retriever = ContextualCompressionRetriever(
base_compressor=compressor, base_retriever=retriever
)
return compression_retriever
```
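Until this is guarded upstream, a small wrapper sketch avoids the error by short-circuiting when the base retriever returns nothing (illustrative, not the library's implementation):

```python
from typing import List, Sequence
from langchain.schema import Document

def rerank_safe(compressor, documents: Sequence[Document], query: str) -> List[Document]:
    """Skip the Cohere rerank call entirely when there is nothing to rerank."""
    if len(documents) == 0:
        return []
    return list(compressor.compress_documents(documents, query))
```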
### Expected behavior
**No error should be thrown**, and an empty list should be returned. | https://github.com/langchain-ai/langchain/issues/5304 | https://github.com/langchain-ai/langchain/pull/5306 | 1366d070fc656813c0c33cb5733290ade0fddf7c | 99a1e3f3a309852da989af080ba47288dcb9a348 | 2023-05-26T16:10:47Z | python | 2023-05-28T20:19:34Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,279 | ["langchain/llms/vertexai.py"] | Issue Passing in Credential to VertexAI model | ### System Info
langchain==0.0.180
google-cloud-aiplatform==1.25.0
Have the Google Cloud CLI installed and logged in using `gcloud auth login`
Running locally and online in Google Colab
### Who can help?
@hwchase17 @hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://colab.research.google.com/drive/19QGMptiCn49fu4i5ZQ0ygfR74ktQFQlb?usp=sharing
The unexpected behavior `field "credentials" not yet prepared so type is still a ForwardRef, you might need to call VertexAI.update_forward_refs().` seems to appear only if you pass in any credential, valid or invalid, to the VertexAI wrapper from LangChain.
### The error
This code should not throw `field "credentials" not yet prepared so type is still a ForwardRef, you might need to call VertexAI.update_forward_refs().`. It should either throw no errors at all, if the credentials, project_id, and location are correct, or, if one of these params is wrong, throw a specific error from the `vertexai.init` call below; however, execution doesn't seem to reach that call when a credential is passed in.
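One untested workaround sketch, following the hint in the error message itself: resolve the pydantic forward reference before constructing the model (the key-file path is a placeholder, and whether this actually resolves the ForwardRef is an assumption):

```python
# Untested workaround sketch.
from google.oauth2 import service_account
from langchain.llms import VertexAI

VertexAI.update_forward_refs(Credentials=service_account.Credentials)
creds = service_account.Credentials.from_service_account_file("key.json")
llm = VertexAI(project=project_id, location=location, credentials=creds)
```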
```
vertexai.init(project=project_id,location=location,credentials=credentials,)
``` | https://github.com/langchain-ai/langchain/issues/5279 | https://github.com/langchain-ai/langchain/pull/5297 | a669abf16b3ac3dcf10629936d3c58411469bb3c | aa3c7b32715ee22b29aebae763f6183c4609be22 | 2023-05-26T04:34:54Z | python | 2023-05-26T15:31:02Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,257 | ["docs/modules/indexes/document_loaders/examples/github.ipynb", "langchain/document_loaders/__init__.py", "langchain/document_loaders/github.py", "tests/integration_tests/document_loaders/test_github.py", "tests/unit_tests/document_loaders/test_github.py"] | Github integration | ### Feature request
It would be amazing to scan and retrieve all the content from the GitHub API, such as PRs, issues, and discussions.
### Motivation
This would allow asking questions about the history of the project, issues that other users might have found, and much more!
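A hypothetical sketch of how this might surface as a document loader; the class and parameters are purely illustrative, since no such loader exists yet:

```python
# Purely illustrative: GitHubIssuesLoader is a hypothetical name.
loader = GitHubIssuesLoader(repo="hwchase17/langchain", include_prs=True)
docs = loader.load()  # one Document per issue/PR, ready for indexing
```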
### Your contribution
Not really a python developer here, would take me a while to figure out all the changes required. | https://github.com/langchain-ai/langchain/issues/5257 | https://github.com/langchain-ai/langchain/pull/5408 | 0b3e0dd1d2fb81eeca76b960bb2376bd666608cd | 8259f9b7facae95236dd5156e2a14d87a0e1f90c | 2023-05-25T16:27:21Z | python | 2023-05-30T03:11:21Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,243 | ["docs/modules/models/llms/integrations/openai.ipynb", "docs/modules/models/text_embedding/examples/openai.ipynb", "langchain/chat_models/azure_openai.py", "langchain/chat_models/openai.py", "langchain/embeddings/openai.py", "langchain/llms/openai.py"] | Add possibility to set a proxy for openai API access | ### Feature request
For a deployment behind a corporate proxy, it's useful to be able to access the API by specifying an explicit proxy.
### Motivation
Currently it's possible to do this by setting the environment variables http_proxy / https_proxy, which sets a proxy for the whole Python interpreter. However, this then prevents access to other internal servers: accessing other network resources (e.g. a vector database on a different server, corporate S3 storage, etc.) should not go through the proxy. So it's important to be able to proxy only the requests for externally hosted APIs. We are working with the OpenAI API, and currently we cannot access both it and our Qdrant database on another server.
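For reference, the mechanism in the openai package itself is a module-level setting; a sketch, with a placeholder proxy URL:

```python
import openai

# The openai client routes its HTTP requests through this proxy when set.
openai.proxy = "http://proxy.corp.example:8080"
```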
### Your contribution
Since the openai python package supports the proxy parameter, this is relatively easy to implement for the OpenAI API. I'll submit a PR. | https://github.com/langchain-ai/langchain/issues/5243 | https://github.com/langchain-ai/langchain/pull/5246 | 9c0cb90997db9eb2e2a736df458d39fd7bec8ffb | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 2023-05-25T13:00:09Z | python | 2023-05-25T16:50:25Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,224 | ["langchain/chains/loading.py"] | PALChain loading fails | ### System Info
langchain==0.0.176
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains import PALChain
from langchain import OpenAI
llm = OpenAI(temperature=0, max_tokens=512)
pal_chain = PALChain.from_math_prompt(llm, verbose=True)
question = "Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?"
pal_chain.save("/Users/liang.zhang/pal_chain.yaml")
loaded_chain = load_chain("/Users/liang.zhang/pal_chain.yaml")
```
Error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [17], in <cell line: 1>()
----> 1 loaded_chain = load_chain("/Users/liang.zhang/pal_chain.yaml")
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:449, in load_chain(path, **kwargs)
447 return hub_result
448 else:
--> 449 return _load_chain_from_file(path, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:476, in _load_chain_from_file(file, **kwargs)
473 config["memory"] = kwargs.pop("memory")
475 # Load the chain from the config now.
--> 476 return load_chain_from_config(config, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:439, in load_chain_from_config(config, **kwargs)
436 raise ValueError(f"Loading {config_type} chain not supported")
438 chain_loader = type_to_loader_dict[config_type]
--> 439 return chain_loader(config, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:234, in _load_pal_chain(config, **kwargs)
232 if "llm" in config:
233 llm_config = config.pop("llm")
--> 234 llm = load_llm_from_config(llm_config)
235 elif "llm_path" in config:
236 llm = load_llm(config.pop("llm_path"))
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/llms/loading.py:14, in load_llm_from_config(config)
12 def load_llm_from_config(config: dict) -> BaseLLM:
13 """Load LLM from Config Dict."""
---> 14 if "_type" not in config:
15 raise ValueError("Must specify an LLM Type in config")
16 config_type = config.pop("_type")
TypeError: argument of type 'NoneType' is not iterable
```
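Until loading is fixed, a workaround sketch is to rebuild the chain from the prompt instead of deserializing it, reusing the names from the reproduction above:

```python
# Workaround sketch: reconstruct rather than load, avoiding the failing
# llm deserialization path shown in the traceback.
llm = OpenAI(temperature=0, max_tokens=512)
loaded_chain = PALChain.from_math_prompt(llm, verbose=True)
```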
### Expected behavior
No errors should occur. | https://github.com/langchain-ai/langchain/issues/5224 | https://github.com/langchain-ai/langchain/pull/5343 | f6615cac41453a9bb3a061a3ffb29327f5e04fb2 | 642ae83d86b28b37605c9a20ca25c667ed461595 | 2023-05-25T00:58:09Z | python | 2023-05-29T13:44:47Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,212 | ["docs/modules/indexes/vectorstores/examples/opensearch.ipynb", "langchain/vectorstores/opensearch_vector_search.py"] | OpenSearch VectorStore cannot return more than 4 retrieved results. | Using the following script, I can only get back a maximum of 4 documents. With k = 1, k = 2, k = 3, k = 4, k = 5, k = 6, ..., similarity_search_with_score returns 1, 2, 3, 4, 4, 4, ... docs.
```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

opensearch_url = "xxxxxxxxx.com"
docsearch = OpenSearchVectorSearch.from_documents(
    docs,
    embedding=HuggingFaceEmbeddings(),
    opensearch_url=opensearch_url,
    index_name="my_index_name",
)
retrieved_docs = docsearch.similarity_search_with_score(query, k=10)
```
This only returns 4 documents even though len(docs) is 90+. I tried various indexes and various queries and confirmed the issue is persistent.
I found a [related issue](https://github.com/hwchase17/langchain/issues/1946) (also maxing out at 4 regardless of k) for Chroma.
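A diagnostic sketch (assuming `opensearch-py` is installed) to confirm directly that more than four documents are indexed and retrievable, bypassing the LangChain wrapper:

```python
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[opensearch_url])
resp = client.search(index="my_index_name", body={"size": 10, "query": {"match_all": {}}})
print(resp["hits"]["total"], len(resp["hits"]["hits"]))
```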
| https://github.com/langchain-ai/langchain/issues/5212 | https://github.com/langchain-ai/langchain/pull/5216 | 88ed8e1cd6c7f1b679efe9e80cf6f8c33e3e6217 | 3be9ba14f319bc5b92c1e516b352f9cafdf51936 | 2023-05-24T20:49:47Z | python | 2023-05-25T16:51:23Z |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | ["docs/modules/indexes/document_loaders/examples/confluence.ipynb", "langchain/document_loaders/confluence.py"] | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports simply passing a PAT (as `token=<PAT>`) to authenticate as a user; unfortunately, the LangChain abstraction doesn't.
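For reference, the underlying client already accepts the token directly; a sketch, with a placeholder URL:

```python
# atlassian-python-api authenticates with a PAT via the `token` kwarg.
from atlassian import Confluence

confluence = Confluence(url="https://confluence.example.com", token="<PAT>")
```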
### Suggestion:
Add an optional `token` parameter to ConfluenceLoader (e.g. `ConfluenceLoader(url=..., token=...)`) and use it to authenticate, as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | 2023-05-24T11:15:54Z | python | 2023-06-03T21:57:49Z |