issue_owner_repo (sequence, length 2) | issue_body (string, 0–261k chars, nullable ⌀) | issue_title (string, 1–925 chars) | issue_comments_url (string, 56–81 chars) | issue_comments_count (int64, 0–2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37–62 chars) | issue_github_id (int64, 387k–2.46B) | issue_number (int64, 1–127k)
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
There are cases where a user needs to pass variables through more than one chain step for later use, but the current implementation doesn't support this.
Here is a reproducible example, building on the RAG LCEL example from https://python.langchain.com/docs/expression_language/cookbook/retrieval:
```python
from operator import itemgetter

from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

vectorstore = FAISS.from_texts(
    ["harrison worked at kensho"], embedding=OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()

template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI()

chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    # Only line added to the original example
    | {"context": itemgetter("context"), "question": itemgetter("question")}
    | prompt
    | model
    | StrOutputParser()
)
chain.invoke("where did harrison work?")
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/zhengisamazing/1.python_dir/vigyan-llm-api/dev/langchain_playground.py", line 110, in <module>
chain.invoke("where did harrison work?")
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2056, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2693, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2693, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/concurrent/futures/_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3504, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1243, in _call_with_config
context.run(
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3378, in _invoke
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/zhengisamazing/opt/anaconda3/envs/vigyan-llm-api/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
TypeError: string indices must be integers, not 'str'
### Description
There are cases where a user needs to pass variables through more than one chain step for later use, but the current implementation doesn't support this.
A reproducible example, building on the RAG LCEL example from https://python.langchain.com/docs/expression_language/cookbook/retrieval, is provided above.
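For reference, one pattern often suggested for carrying variables across several steps is `RunnablePassthrough.assign`, which merges new keys into the input dict instead of replacing it. A minimal sketch based on the example above (a workaround under the assumption that the chain is invoked with a dict, not a fix for the reported behavior); `retriever`, `prompt`, and `model` are the objects defined in the example:

```python
from operator import itemgetter

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# Workaround sketch: start from a dict input and let .assign() add "context"
# while preserving "question" for all later steps.
chain = (
    RunnablePassthrough.assign(context=itemgetter("question") | retriever)
    | prompt
    | model
    | StrOutputParser()
)
chain.invoke({"question": "where did harrison work?"})
```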
### System Info
langchain==0.1.7
langchain-cli==0.0.21
langchain-community==0.0.20
langchain-core==0.1.27
langchain-google-genai==0.0.9
langchain-openai==0.0.6
platform: mac
python version: 3.11.7 | Langchain Expression Language (LCEL) pass-through does not work with two consecutive chains | https://api.github.com/repos/langchain-ai/langchain/issues/18173/comments | 2 | 2024-02-27T06:57:56Z | 2024-07-26T16:06:17Z | https://github.com/langchain-ai/langchain/issues/18173 | 2,155,814,048 | 18,173 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from fastapi import FastAPI
from langchain_core.runnables import RunnableLambda
from langchain_openai import ChatOpenAI
from langserve import add_routes

app = FastAPI()

def process(request: str):
    raise Exception("not implemented")

model = ChatOpenAI()

add_routes(
    app,
    RunnableLambda(process) | model,
    path="/openai",
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=9000)
```
### Error Message and Stack Trace (if applicable)
the client side always gets:

### Description
I'd like a way to customize the error message returned to the caller.
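For reference, a minimal sketch of one possible workaround, assuming `app` is a plain FastAPI instance: raise a custom exception type from the chain step and register an exception handler for it, so callers receive a structured error instead of a bare 500 (`ProcessError` and the 422 status are illustrative choices, not a LangServe API):

```python
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()

class ProcessError(Exception):
    """Hypothetical exception raised by chain steps for user-facing errors."""

@app.exception_handler(ProcessError)
async def process_error_handler(request: Request, exc: ProcessError) -> JSONResponse:
    # Surface the exception's message with a non-500 status code.
    return JSONResponse(status_code=422, content={"detail": str(exc)})
```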
### System Info
NONE. | Exceptions are all treated as 500 Internal Server Error on the caller side | https://api.github.com/repos/langchain-ai/langchain/issues/18168/comments | 0 | 2024-02-27T03:36:29Z | 2024-06-08T16:12:50Z | https://github.com/langchain-ai/langchain/issues/18168 | 2,155,581,294 | 18,168 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import os
os.environ["PREDIBASE_API_TOKEN"] = "{PREDIBASE_API_TOKEN}"
from langchain_community.llms import Predibase
model = Predibase(model = 'vicuna-13b', predibase_api_key=os.environ.get('PREDIBASE_API_TOKEN'))
response = model("Can you recommend me a nice dry wine?")
print(response)
```
### Error Message and Stack Trace (if applicable)
It says "pc.prompt is deprecated".
### Description
I think it should be something like:
```python
# load model and version
llm = pc.LLM(self.model)

# Attach the adapter to the (client-side) deployment object
if self.adapter is not None:
    # add `adapter: str` as a field on the class
    adapter = pc.get_model(self.adapter)
    ft_llm = llm.with_adapter(adapter)
else:
    ft_llm = llm

results = ft_llm.prompt(prompt)
return results.response
```
### System Info
NA | Predibase LLM uses deprecated code | https://api.github.com/repos/langchain-ai/langchain/issues/18167/comments | 0 | 2024-02-27T03:30:04Z | 2024-06-08T16:12:45Z | https://github.com/langchain-ai/langchain/issues/18167 | 2,155,576,070 | 18,167 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.memory import ChatMessageHistory
from pydantic import BaseModel
class Model(BaseModel):
h: ChatMessageHistory
print(Model.model_json_schema())
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
PydanticInvalidForJsonSchema              Traceback (most recent call last)
Cell In[10], line 5
      3 class Model(BaseModel):
      4     h: ChatMessageHistory
----> 5 print(Model.model_json_schema())

File ~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/main.py:385, in BaseModel.model_json_schema(cls, by_alias, ref_template, schema_generator, mode)
    365 @classmethod
    366 def model_json_schema(
    367     cls,
   (...)
    384     """
--> 385     return model_json_schema(
    386         cls, by_alias=by_alias, ref_template=ref_template, schema_generator=schema_generator, mode=mode
    387     )

File ~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:2158, in model_json_schema(cls, by_alias, ref_template, schema_generator, mode)
   2156 cls.__pydantic_validator__.rebuild()
   2157 assert '__pydantic_core_schema__' in cls.__dict__, 'this is a bug! please report it'
-> 2158 return schema_generator_instance.generate(cls.__pydantic_core_schema__, mode=mode)

File ~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:413, in GenerateJsonSchema.generate(self, schema, mode)
    406 if self._used:
    407     raise PydanticUserError(
   (...)
    411     )
--> 413 json_schema: JsonSchemaValue = self.generate_inner(schema)
    414 json_ref_counts = self.get_json_ref_counts(json_schema)

File ~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:552, in GenerateJsonSchema.generate_inner(self, schema)
    550 current_handler = _schema_generation_shared.GenerateJsonSchemaHandler(self, new_handler_func)
--> 552 json_schema = current_handler(schema)
    553 if _core_utils.is_core_schema(schema):
    554     json_schema = populate_defs(schema, json_schema)

[... the generator then recurses through GenerateJsonSchemaHandler.__call__
(_schema_generation_shared.py:36), handler_func / new_handler_func
(json_schema.py:509/526/544), BaseModel.__get_pydantic_json_schema__ (main.py:603),
modify_model_json_schema (_generate_schema.py:212), GenerateJsonSchema.model_schema
(json_schema.py:1323), model_fields_schema (json_schema.py:1415),
_named_required_fields_schema (json_schema.py:1226), json_schema_update_func
(_generate_schema.py:2012), and model_field_schema (json_schema.py:1294); the
generate_inner/handler frames repeat identically several times ...]

File ~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:1143, in GenerateJsonSchema.chain_schema(self, schema)
   1131 """Generates a JSON schema that matches a core_schema.ChainSchema.
   1132
   1133 When generating a schema for validation, we return the validation JSON schema for the first step in the chain.
   (...)
   1141 """
   1142 step_index = 0 if self.mode == 'validation' else -1  # use first step for validation, last for serialization
-> 1143 return self.generate_inner(schema['steps'][step_index])

File ~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:956, in GenerateJsonSchema.function_plain_schema(self, schema)
    947 def function_plain_schema(self, schema: core_schema.PlainValidatorFunctionSchema) -> JsonSchemaValue:
    948     """Generates a JSON schema that matches a function-plain schema.
   (...)
    954     The generated JSON schema.
    955     """
--> [956](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:956) return self._function_schema(schema)
File [~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:921](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:921), in GenerateJsonSchema._function_schema(self, schema)
[918](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:918) return self.generate_inner(schema['schema'])
[920](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:920) # function-plain
--> [921](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:921) return self.handle_invalid_for_json_schema(
[922](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:922) schema, f'core_schema.PlainValidatorFunctionSchema ({schema["function"]})'
[923](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:923) )
File [~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:2074](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:2074), in GenerateJsonSchema.handle_invalid_for_json_schema(self, schema, error_info)
[2073](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:2073) def handle_invalid_for_json_schema(self, schema: CoreSchemaOrField, error_info: str) -> JsonSchemaValue:
-> [2074](https://file+.vscode-resource.vscode-cdn.net/Users/haolin.chen/Repos/agent_planner/~/Repos/agent_planner/.venv/lib/python3.11/site-packages/pydantic/json_schema.py:2074) raise PydanticInvalidForJsonSchema(f'Cannot generate a JsonSchema for {error_info}')
PydanticInvalidForJsonSchema: Cannot generate a JsonSchema for core_schema.PlainValidatorFunctionSchema ({'type': 'with-info', 'function': <bound method BaseModel.validate of <class 'langchain_community.chat_message_histories.in_memory.ChatMessageHistory'>>})
For further information visit https://errors.pydantic.dev/2.5/u/invalid-for-json-schema
### Description
I cannot generate an API doc (an OpenAPI/JSON schema) for a pydantic model that has `ChatMessageHistory` as a field.
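For reference, a minimal sketch that should reproduce the same failure (the model and field names below are illustrative): embedding the pydantic-v1 `ChatMessageHistory` inside a pydantic-v2 model forces a plain-validator core schema, which JSON-schema generation cannot express.

```python
from pydantic import BaseModel  # pydantic v2
from langchain_community.chat_message_histories import ChatMessageHistory

class ChatState(BaseModel):  # illustrative model
    history: ChatMessageHistory

# An API framework builds its OpenAPI doc from this schema, so the same error surfaces there
ChatState.model_json_schema()  # raises PydanticInvalidForJsonSchema, as in the traceback above
```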
### System Info
OS: macOS 14.3.1
python: 3.11.7
langchain: 0.1.9
pydantic: 2.5.3 | Cannot generate a JSON schema for ChatMessageHistory | https://api.github.com/repos/langchain-ai/langchain/issues/18141/comments | 0 | 2024-02-26T18:13:55Z | 2024-06-08T16:12:40Z | https://github.com/langchain-ai/langchain/issues/18141 | 2,154,808,002 | 18,141 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from flask import Response, request, stream_with_context
from langchain_core.callbacks import CallbackManager
from langchain_openai import AzureChatOpenAI

# StreamHandler, logger, app, and the AZURE_*/OPENAI_* settings are defined elsewhere in the app

@app.route('/stream2', methods=['GET'])
def stream2():
    try:
        user_query = request.json.get('user_query')
        if not user_query:
            return "No user query provided", 400

        callback_handler = StreamHandler()
        callback_manager = CallbackManager([callback_handler])

        llm = AzureChatOpenAI(
            azure_endpoint=AZURE_OPENAI_ENDPOINT,
            openai_api_version=OPENAI_API_VERSION,
            deployment_name=OPENAI_DEPLOYMENT_NAME,
            openai_api_key=OPENAI_API_KEY,
            openai_api_type=OPENAI_API_TYPE,
            model_name=OPENAI_MODEL_NAME,
            streaming=True,
            model_kwargs={
                "logprobs": None,
                "best_of": None,
                "echo": None
            },
            # callback_manager=callback_manager,
            temperature=0)

        @stream_with_context
        async def generate():
            async for chunk in llm.stream(user_query):
                yield chunk

        return Response(generate(), mimetype='text/event-stream')
    except Exception as e:
        logger.error(f"An error occurred: {e}")
        return "An error occurred", 500
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "c:\Users\x\repos\y-chatbot\backend-flask-appservice\.venv\lib\site-packages\werkzeug\serving.py", line 362, in run_wsgi
execute(self.server.app)
File "c:\Users\x\repos\y-chatbot\backend-flask-appservice\.venv\lib\site-packages\werkzeug\serving.py", line 325, in execute
for data in application_iter:
File "c:\Users\x\repos\y-chatbot\backend-flask-appservice\.venv\lib\site-packages\werkzeug\wsgi.py", line 256, in __next__
return self._next()
File "c:\Users\x\repos\y-chatbot\backend-flask-appservice\.venv\lib\site-packages\werkzeug\wrappers\response.py", line 32, in _iter_encoded
for item in iterable:
TypeError: 'function' object is not iterable
### Description
I am creating a REST API with Flask, using LangChain and AzureOpenAI, but `llm.stream` does not seem to work correctly with the code above.
How can I stream AzureOpenAI responses using Flask?
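My current suspicion: `generate` is an `async` generator function, and plain Flask/Werkzeug can only iterate a synchronous generator in a `Response`, which would explain the `'function' object is not iterable` error. A minimal synchronous sketch (assuming the same `app` and `llm` as above; the route name is arbitrary) that should stream the tokens:

```python
from flask import Response, request, stream_with_context

@app.route('/stream2_sync', methods=['GET'])
def stream2_sync():
    user_query = request.json.get('user_query')
    if not user_query:
        return "No user query provided", 400

    def generate():
        # llm.stream() is a synchronous iterator of AIMessageChunk objects
        for chunk in llm.stream(user_query):
            yield chunk.content

    return Response(stream_with_context(generate()), mimetype='text/event-stream')
```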
### System Info
langchain==0.1.0
langchain-community==0.0.12
langchain-core==0.1.12
langchainhub==0.1.14 | AzureOpenAI Streaming with langchain and flask, error TypeError: 'function' object is not iterable | https://api.github.com/repos/langchain-ai/langchain/issues/18138/comments | 0 | 2024-02-26T17:26:59Z | 2024-06-08T16:12:35Z | https://github.com/langchain-ai/langchain/issues/18138 | 2,154,723,649 | 18,138 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from operator import itemgetter
import os
import urllib.parse
from sqlalchemy import create_engine
import warnings
from dotenv import load_dotenv
from langchain_community.utilities.sql_database import SQLDatabase
from langchain_community.agent_toolkits import create_sql_agent
from langchain.chains.openai_tools import create_extraction_chain_pydantic
from langchain.chains import create_sql_query_chain
from langchain_community.chat_models import AzureChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_community.tools.sql_database.tool import QuerySQLDataBaseTool
from langchain_core.pydantic_v1 import BaseModel, Field


# Define Table class to be hashable
class Table(BaseModel):
    """Table in SQL database."""

    name: str = Field(description="Name of table in SQL database.")

    def __hash__(self):
        return hash(self.name)

    def __eq__(self, other):
        if not isinstance(other, Table):
            return NotImplemented
        return self.name == other.name


# Ignore all warnings
warnings.filterwarnings("ignore")

# Load environment variables
basedir = os.path.abspath(os.path.dirname(__file__))
load_dotenv(os.path.join(basedir, '.env'))
API_KEY = os.environ.get("OPENAI_API_KEY")

# Database setup
username = "admin"
password = urllib.parse.quote_plus('t34!12!')
servername = "12.28.40.85"
database = "test"
uri = f"mssql+pyodbc://{username}:{password}@{servername}/{database}?driver=ODBC+Driver+17+for+SQL+Server"

# Create an engine to connect to the SQL database
engine = create_engine(uri)
db = SQLDatabase(engine, schema="aodb")
print("Connection to the SQL Server database successful")

# Initialize AzureChatOpenAI
llm = AzureChatOpenAI(
    model_name=os.environ.get("AZURE_MODEL_NAME", 'gpt-4'),
    deployment_name=os.environ.get("AZURE_DEPLOYMENT_NAME", 'sita-lab-gpt4'),
    azure_endpoint=os.environ.get("AZURE_ENDPOINT"),
    verbose=True
)


# Function to test agent execution
def test_agent(db, llm, my_question):
    agent_executor = create_sql_agent(llm, db=db, agent_type="openai-tools", verbose=True)
    response = agent_executor.invoke(my_question)
    return response


# Function to get relevant tables for a question
def test_agent_get_relevant_tables(db, llm, my_question):
    table_names = "\n".join(db.get_usable_table_names())
    system = f"""Return the names of ALL the SQL tables that MIGHT be relevant to the user question. \
The tables are:
{table_names}
Remember to exclude ALL POTENTIALLY RELEVANT tables, even if you're not sure that they're needed."""
    query_chain = create_sql_query_chain(llm, db)
    table_chain = create_extraction_chain_pydantic(Table, llm, system_message=system)
    returned_tables = table_chain.invoke({"input": my_question})
    print(f".......................")
    print(f"returned_tables : {returned_tables}")
    table_name = returned_tables[0].name  # Assuming that the table name is the first element in the list
    print(f"table_name : {table_name}")
    print(f".......................")
    # Assign the "input" field to the table_chain
    table_chain = RunnablePassthrough.assign(input=itemgetter("question")) | table_chain
    print(f"table_chain : {table_chain}")
    full_chain = RunnablePassthrough.assign(table_names_to_use=table_chain) | query_chain
    print(f"full_chain : {full_chain}")
    query = full_chain.invoke({"question": my_question, "table_name": table_name, "schema_name": "aodb"})
    print(f"query : {query}")
    # query = full_chain.invoke({"question": my_question})
    response = db.run(query)
    return response


# Example usage
final_response = test_agent_get_relevant_tables(db, llm, "list me airports")
print(final_response)
```
### Error Message and Stack Trace (if applicable)
PS C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot> & "c:/Users/Savita.Raghuvanshi/OneDrive - SITA/Desktop/llm/ML-LLM-OperationsCoPilot/ops_env/Scripts/python.exe" "c:/Users/Savita.Raghuvanshi/OneDrive - SITA/Desktop/llm/ML-LLM-OperationsCoPilot/test/QnA_with_sql_db/test_agent_get_relavent_tables.py"
Connection to the SQL Server database successful
.......................
returned_tables : [Table(name='Airport')]
table_name : Airport
.......................
table_chain : first=RunnableAssign(mapper={
input: RunnableLambda(itemgetter('question'))
}) middle=[ChatPromptTemplate(input_variables=['input'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template="Return the names of ALL the SQL tables that MIGHT be relevant to the user question. The tables are:\n\nActivityFlowActivityAssignment\nActivityFlowActivityAssignmentItem\nActivityFlowActivityAvailabilityRule\nActivityFlowActivityDefinition\nActivityFlowActivityDurationRule\nActivityFlowActivityTemplate\nActivityFlowDependencyTemplate\nActivityFlowEventAvailabilityRule\nActivityFlowEventDefinition\nActivityFlowEventTemplate\nActivityFlowGroupTemplate\nActivityFlowTemplate\nAircraft\nAircraftType\nAirline\nAirport\nAirportContext\nAirportContext2UserRole\nAllocationGroup\nAllocationGroup2ResourceAllocation\nAllocationGroupShapeAllocationRequirement\nAllocationGroupShapeRequirement\nAllocationPercentageToColor\nArea\nArrivalCodeShare\nArrivalFlight\nArrivalFlightActivity\nArrivalFlightEvent\nCascadingDowngradeRule\nCascadingDowngradeRuleAssignment\nCombinationRuleContribution\nCustomsType\nDepartureCodeShare\nDepartureFlight\nDepartureFlightActivity\nDepartureFlightEvent\nMovement\nMovementActivityFlowRequirement\nMovementCommentLog\nMovementCommentLogReason\nMovementLinkRule\nMovementMatchRequirement\nMovementPaxFlowDefinition\nMovementPaxFlowGroupLoad\nMovementPerformanceIndicatorChart\nMovementPerformanceIndicatorPaxFlow\nMovementResourceRequirement\nMovementSplitRule\nMovementStatusDefinition\nMovementStatusRule\nOverlapOnResourceContribution\nOverlapPermissionRuleContribution\nPaxFlowGroup\nPaxFlowProfileDefinition\nPaxFlowProfilePercentage\nPaxFlowSlot\nPerformanceIndicatorCell\nPerformanceIndicatorColumn\nPerformanceIndicatorGrid\nPerformanceIndicatorRow\nRecurringDowngradeRule\nRecurringDowngradeRule2Area\nRecurringDowngradeRule2Resource\nRecurringDowngradeRule2ResourceGroup\nResource\nResource2ResourceGroup\nResourceAllocation\nResourceAllocationAutoMappingRule\nResourceAllocationAutoMappingRuleAssignment\nResourceAllocationBufferTime\nResourceAllocationCombinationRule\nResourceAllocationCombinationRuleMatch\nResourceAllocationCombinationRuleMatch2Area\nResourceAllocationCombinationRuleMatch2Resource\nResourceAllocationCombinationRuleMatch2ResourceGroup\nResourceAllocationElementConfig\nResourceAllocationElementConfig2UserRole\nResourceAllocationElementConfigColor\nResourceAllocationElementConfigIcon\nResourceAllocationElementConfigText\nResourceAllocationElementConfigToolTip\nResourceAllocationMovementMatchPreference\nResourceAllocationMovementMatchPreference2Area\nResourceAllocationMovementMatchPreference2Resource\nResourceAllocationMovementMatchPreference2ResourceGroup\nResourceAllocationMovementMatchPreference2ResourceMatch\nResourceAllocationMovementMatchRule\nResourceAllocationMovementMatchRule2Area\nResourceAllocationMovementMatchRule2Resource\nResourceAllocationMovementMatchRule2ResourceGroup\nResourceAllocationMovementMatchRuleGroup\nResourceAllocationOverlapPermissionRule\nResourceAllocationOverlapPermissionRule2Area\nResourceAllocationOverlapPermissionRule2Resource\nResourceAllocationOverlapPermissionRule2ResourceGroup\nResourceDowngradeType\nResourceGroup\nResourceSerieVisualRule\nResourceSerieVisualRule2UserRole\nResourceSerieVisualRuleColor\nResourceSerieVisualRuleDescription\nResourceSerieVisualRuleHeaderText\nResourceSerieVisualRuleIcon\nResourceSerieVisualRuleToolTip\nResourceTypeSettings\nResourceTypeSettings2Details\nResourceUnavailabilityRule\nResourceUnavailabilityRule2Area\nResourceUnavailabilityRule2Resource\nResource
UnavailabilityRule2ResourceGroup\nRoute\nRouteViaPoint\nSeason\nSlotRequest\nSlotRequestComplianceRule\nSlotRequestCompliantSlot\nSlotRequestOperatedFlight\nSlotRequestPropertyMapping\nSlotRequestReservedSlot\nSlotRequestStatusDefinition\nSlotRequestStatusHistoryLog\nSlotRequestStatusTransition\nTowing\n\nRemember to exclude ALL POTENTIALLY RELEVANT tables, even if you're not sure that they're needed.")), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}'))]), RunnableBinding(bound=AzureChatOpenAI(verbose=True, client=<openai.resources.chat.completions.Completions object at 0x0000016AB8252AE0>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x0000016AB721ABD0>, model_name='gpt-4', openai_api_key='197d3afbceeb47dda56932803db5a3a4', openai_api_base='https://sitalabopenai2.openai.azure.com/openai/deployments/sita-lab-gpt4', openai_proxy='', openai_api_version='2023-10-01-preview', openai_api_type='azure'), kwargs={'tools': [{'type': 'function', 'function': {'name': 'Table', 'description': 'Table in SQL database.', 'parameters': {'type': 'object', 'properties': {'name': {'description': 'Name of table in SQL database.', 'type': 'string'}}, 'required': ['name']}}}]})] last=PydanticToolsParser(tools=[<class '__main__.Table'>])
full_chain : first=RunnableAssign(mapper={
table_names_to_use: RunnableAssign(mapper={
input: RunnableLambda(itemgetter('question'))
})
| ChatPromptTemplate(input_variables=['input'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template="Return the names of ALL the SQL tables that MIGHT be relevant to the user question. The tables are:\n\nActivityFlowActivityAssignment\nActivityFlowActivityAssignmentItem\nActivityFlowActivityAvailabilityRule\nActivityFlowActivityDefinition\nActivityFlowActivityDurationRule\nActivityFlowActivityTemplate\nActivityFlowDependencyTemplate\nActivityFlowEventAvailabilityRule\nActivityFlowEventDefinition\nActivityFlowEventTemplate\nActivityFlowGroupTemplate\nActivityFlowTemplate\nAircraft\nAircraftType\nAirline\nAirport\nAirportContext\nAirportContext2UserRole\nAllocationGroup\nAllocationGroup2ResourceAllocation\nAllocationGroupShapeAllocationRequirement\nAllocationGroupShapeRequirement\nAllocationPercentageToColor\nArea\nArrivalCodeShare\nArrivalFlight\nArrivalFlightActivity\nArrivalFlightEvent\nCascadingDowngradeRule\nCascadingDowngradeRuleAssignment\nCombinationRuleContribution\nCustomsType\nDepartureCodeShare\nDepartureFlight\nDepartureFlightActivity\nDepartureFlightEvent\nMovement\nMovementActivityFlowRequirement\nMovementCommentLog\nMovementCommentLogReason\nMovementLinkRule\nMovementMatchRequirement\nMovementPaxFlowDefinition\nMovementPaxFlowGroupLoad\nMovementPerformanceIndicatorChart\nMovementPerformanceIndicatorPaxFlow\nMovementResourceRequirement\nMovementSplitRule\nMovementStatusDefinition\nMovementStatusRule\nOverlapOnResourceContribution\nOverlapPermissionRuleContribution\nPaxFlowGroup\nPaxFlowProfileDefinition\nPaxFlowProfilePercentage\nPaxFlowSlot\nPerformanceIndicatorCell\nPerformanceIndicatorColumn\nPerformanceIndicatorGrid\nPerformanceIndicatorRow\nRecurringDowngradeRule\nRecurringDowngradeRule2Area\nRecurringDowngradeRule2Resource\nRecurringDowngradeRule2ResourceGroup\nResource\nResource2ResourceGroup\nResourceAllocation\nResourceAllocationAutoMappingRule\nResourceAllocationAutoMappingRuleAssignment\nResourceAllocationBufferTime\nResourceAllocationCombinationRule\nResourceAllocationCombinationRuleMatch\nResourceAllocationCombinationRuleMatch2Area\nResourceAllocationCombinationRuleMatch2Resource\nResourceAllocationCombinationRuleMatch2ResourceGroup\nResourceAllocationElementConfig\nResourceAllocationElementConfig2UserRole\nResourceAllocationElementConfigColor\nResourceAllocationElementConfigIcon\nResourceAllocationElementConfigText\nResourceAllocationElementConfigToolTip\nResourceAllocationMovementMatchPreference\nResourceAllocationMovementMatchPreference2Area\nResourceAllocationMovementMatchPreference2Resource\nResourceAllocationMovementMatchPreference2ResourceGroup\nResourceAllocationMovementMatchPreference2ResourceMatch\nResourceAllocationMovementMatchRule\nResourceAllocationMovementMatchRule2Area\nResourceAllocationMovementMatchRule2Resource\nResourceAllocationMovementMatchRule2ResourceGroup\nResourceAllocationMovementMatchRuleGroup\nResourceAllocationOverlapPermissionRule\nResourceAllocationOverlapPermissionRule2Area\nResourceAllocationOverlapPermissionRule2Resource\nResourceAllocationOverlapPermissionRule2ResourceGroup\nResourceDowngradeType\nResourceGroup\nResourceSerieVisualRule\nResourceSerieVisualRule2UserRole\nResourceSerieVisualRuleColor\nResourceSerieVisualRuleDescription\nResourceSerieVisualRuleHeaderText\nResourceSerieVisualRuleIcon\nResourceSerieVisualRuleToolTip\nResourceTypeSettings\nResourceTypeSettings2Details\nResourceUnavailabilityRule\nResourceUnavailabilityRule2Area\nResourceUnavailabilityRule2Resource\nResourceUnavailab
ilityRule2ResourceGroup\nRoute\nRouteViaPoint\nSeason\nSlotRequest\nSlotRequestComplianceRule\nSlotRequestCompliantSlot\nSlotRequestOperatedFlight\nSlotRequestPropertyMapping\nSlotRequestReservedSlot\nSlotRequestStatusDefinition\nSlotRequestStatusHistoryLog\nSlotRequestStatusTransition\nTowing\n\nRemember to exclude ALL POTENTIALLY RELEVANT tables, even if you're not sure that they're needed.")), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}'))])
| RunnableBinding(bound=AzureChatOpenAI(verbose=True, client=<openai.resources.chat.completions.Completions object at 0x0000016AB8252AE0>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x0000016AB721ABD0>, model_name='gpt-4', openai_api_key='197d3afbceeb47dda56932803db5a3a4', openai_api_base='https://sitalabopenai2.openai.azure.com/openai/deployments/sita-lab-gpt4', openai_proxy='', openai_api_version='2023-10-01-preview', openai_api_type='azure'), kwargs={'tools': [{'type': 'function', 'function': {'name': 'Table', 'description': 'Table in SQL database.', 'parameters': {'type': 'object', 'properties': {'name': {'description': 'Name of table in SQL database.', 'type': 'string'}}, 'required': ['name']}}}]})
| PydanticToolsParser(tools=[<class '__main__.Table'>])
}) middle=[RunnableAssign(mapper={
input: RunnableLambda(...),
table_info: RunnableLambda(...)
}), RunnableLambda(lambda x: {k: v for k, v in x.items() if k not in ('question', 'table_names_to_use')}), PromptTemplate(input_variables=['input', 'table_info'], partial_variables={'top_k': '5'}, template='You are an MS SQL expert. Given an input question, first create a syntactically correct MS SQL query to run, then look at the results of the query and return the answer to the input question.\nUnless the user specifies in the question a specific number of examples to obtain, query for at most {top_k} results using the TOP clause as per MS SQL. You can order the results to return the most informative data in the database.\nNever query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in square brackets ([]) to denote them as delimited identifiers.\nPay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\nPay attention to use CAST(GETDATE() as date) function to get the current date, if the question involves "today".\n\nUse the following format:\n\nQuestion: Question here\nSQLQuery: SQL Query to run\nSQLResult: Result of the SQLQuery\nAnswer: Final answer here\n\nOnly use the following tables:\n{table_info}\n\nQuestion: {input}'), RunnableBinding(bound=AzureChatOpenAI(verbose=True, client=<openai.resources.chat.completions.Completions object at 0x0000016AB8252AE0>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x0000016AB721ABD0>, model_name='gpt-4', openai_api_key='197d3afbceeb47dda56932803db5a3a4', openai_api_base='https://sitalabopenai2.openai.azure.com/openai/deployments/sita-lab-gpt4', openai_proxy='', openai_api_version='2023-10-01-preview', openai_api_type='azure'), kwargs={'stop': ['\nSQLResult:']}), StrOutputParser()] last=RunnableLambda(_strip)
Traceback (most recent call last):
File "c:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\test\QnA_with_sql_db\test_agent_get_relavent_tables.py", line 102, in <module>
final_response = test_agent_get_relevant_tables(db, llm, "list me airports")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\test\QnA_with_sql_db\test_agent_get_relavent_tables.py", line 92, in test_agent_get_relevant_tables
query = full_chain.invoke({"question": my_question, "table_name": table_name,"schema_name": "aodb"})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\base.py", line 2056, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\passthrough.py", line 419, in invoke
return self._call_with_config(self._invoke, input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\base.py", line 1243, in _call_with_config
context.run(
File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\passthrough.py", line 406, in _invoke
**self.mapper.invoke(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\base.py", line 2693, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^
File "C:\Python312\Lib\concurrent\futures\_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Python312\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Python312\Lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\base.py", line 3507, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\base.py", line 1243, in _call_with_config
context.run(
File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\base.py", line 3381, in _invoke
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_core\runnables\config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain\chains\sql_database\query.py", line 126, in <lambda>
"table_info": lambda x: db.get_table_info(
^^^^^^^^^^^^^^^^^^
File "C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot\ops_env\Lib\site-packages\langchain_community\utilities\sql_database.py", line 307, in get_table_info
raise ValueError(f"table_names {missing_tables} not found in database")
ValueError: table_names {Table(name='Airport')} not found in database
### Description
As you can see from the stack trace, the database does contain the `Airport` table, yet the error reports that the table is not found. The `ValueError` shows why: the set passed as `table_names` contains `Table(name='Airport')` pydantic objects rather than the plain string `'Airport'`.
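For reference, a hedged sketch of the change I would expect to fix this (it reuses `table_chain`, `query_chain`, `RunnablePassthrough`, and `itemgetter` from the example above; the helper name is mine): `db.get_table_info` compares plain table-name strings, so the pydantic `Table` objects need to be unwrapped first.

```python
from typing import List

def get_table_names(tables: List[Table]) -> List[str]:
    # unwrap the pydantic Table objects into the plain strings
    # that db.get_table_info() compares against
    return [table.name for table in tables]

table_name_chain = (
    RunnablePassthrough.assign(input=itemgetter("question"))
    | table_chain
    | get_table_names
)
full_chain = RunnablePassthrough.assign(table_names_to_use=table_name_chain) | query_chain
```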
### System Info
PS C:\Users\Savita.Raghuvanshi\OneDrive - SITA\Desktop\llm\ML-LLM-OperationsCoPilot> pip freeze
aiohttp==3.9.3
aiosignal==1.3.1
aniso8601==9.0.1
annotated-types==0.6.0
anyio==3.7.1
async-timeout==4.0.3
attrs==23.1.0
azure-common==1.1.28
azure-core==1.29.5
azure-identity==1.13.0
azure-search==1.0.0b2
azure-search-documents==11.4.0b8
backoff==2.2.1
beautifulsoup4==4.12.3
blinker==1.6.3
cachetools==5.3.2
certifi==2023.7.22
cffi==1.16.0
chardet==5.2.0
charset-normalizer==3.3.1
click==8.1.7
colorama==0.4.6
cryptography==41.0.5
dataclasses-json==0.6.1
dataclasses-json-speakeasy==0.5.11
distro==1.9.0
dnspython==2.4.2
docx2txt==0.8
emoji==2.10.1
exceptiongroup==1.1.3
filetype==1.2.0
flasgger==0.9.7.1
Flask==2.3.3
Flask-Cors==3.0.10
Flask-RESTful==0.3.9
flask-restx==1.2.0
frozenlist==1.4.0
gevent==24.2.1
greenlet==3.0.1
h11==0.14.0
httpcore==1.0.3
httpx==0.26.0
idna==3.4
importlib-metadata==6.8.0
importlib-resources==6.1.0
iniconfig==2.0.0
isodate==0.6.1
itsdangerous==2.1.2
Jinja2==3.1.2
joblib==1.3.2
jsonpatch==1.33
jsonpath-python==1.0.6
jsonpointer==2.4
jsonschema==4.17.3
jsonschema-specifications==2023.7.1
**langchain==0.1.9
langchain-community==0.0.23
langchain-core==0.1.26
langchain-experimental==0.0.52
langchain-openai==0.0.6**
langdetect==1.0.9
langsmith==0.1.6
loguru==0.7.2
lxml==5.1.0
MarkupSafe==2.1.3
marshmallow==3.20.1
mistune==3.0.2
msal==1.24.1
msal-extensions==1.0.0
msrest==0.7.1
multidict==6.0.4
mypy-extensions==1.0.0
nltk==3.8.1
nose==1.3.7
numpy==1.26.4
oauthlib==3.2.2
openai==1.12.0
orjson==3.9.14
packaging==23.2
pkgutil_resolve_name==1.3.10
pluggy==1.4.0
portalocker==2.8.2
psycopg2==2.9.9
psycopg2-binary==2.9.9
pycparser==2.21
pydantic==2.4.2
pydantic_core==2.10.1
PyJWT==2.8.0
pymongo==4.5.0
PyMySQL==1.1.0
pyodbc==5.1.0
pypdf==3.17.0
pyrsistent==0.20.0
pytest==8.0.0
pytest-mock==3.12.0
python-dateutil==2.8.2
python-dotenv==1.0.0
python-environ==0.4.54
python-iso639==2024.2.7
python-magic==0.4.27
pytz==2023.3.post1
pywin32==306
PyYAML==6.0.1
rapidfuzz==3.6.1
referencing==0.30.2
regex==2023.10.3
requests==2.31.0
requests-oauthlib==1.3.1
rpds-py==0.10.6
Scaffold==0.1.3
setuptools==69.1.0
six==1.16.0
sniffio==1.3.0
soupsieve==2.5
SQLAlchemy==2.0.22
tabulate==0.9.0
tenacity==8.2.3
tiktoken==0.6.0
tqdm==4.66.1
typing-inspect==0.9.0
typing_extensions==4.8.0
unstructured==0.11.8
unstructured-client==0.18.0
urllib3==2.0.7
websocket==0.2.1
websockets==12.0
Werkzeug==2.3.7
wheel==0.42.0
win32-setctime==1.1.0
wrapt==1.16.0
yarl==1.9.2
zipp==3.17.0
zope.event==5.0
zope.interface==6.1 | Reporting missing table issue on msSql although table present | https://api.github.com/repos/langchain-ai/langchain/issues/18137/comments | 0 | 2024-02-26T17:08:01Z | 2024-06-08T16:12:30Z | https://github.com/langchain-ai/langchain/issues/18137 | 2,154,680,784 | 18,137 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain import hub
from langchain.agents import create_openai_tools_agent, AgentExecutor
from langchain.tools import Tool
from langchain_community.callbacks import OpenAICallbackHandler
from langchain_openai import AzureChatOpenAI

# current_settings and VectorStoreManager are project-specific helpers
llm = AzureChatOpenAI(**current_settings.azureopenai_llm.dict(), temperature=0, callbacks=[OpenAICallbackHandler()])


def get_context_from_vector_store(query):
    results = VectorStoreManager(collection_name=collection_name).store.similarity_search_with_score(query, k=k)
    return results


add_db_context = Tool(
    name="add_context_documents_from_vector_store",
    func=get_context_from_vector_store,
    description="Useful when you need to answer questions about the contents of the files in the vector store. Use it if you are uncertain about your answer or you don't have any hard data to support your answer",
    return_direct=False,
)
tools = [add_db_context]

agent = create_openai_tools_agent(llm=llm, tools=tools, prompt=hub.pull("hwchase17/openai-tools-agent"))
agent_executor = AgentExecutor(agent=agent, tools=tools, callbacks=[OpenAICallbackHandler()])
agent_executor.invoke({"input": "test"})
print(agent_executor.callbacks)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am writing a simple RAG application with LangChain tools, function calling, and LangChain agents, and I want to monitor the agent's token usage through LangChain callbacks. `OpenAICallbackHandler` tracks token usage correctly when attached directly to a chat model, but it does not record any usage statistics for agents, even though the `AzureChatOpenAI` instance is passed to the agent. I tried defining the callback both on the agent and on the chat model; after invoking the agent, no usage statistics are saved in either callback.
I think this may be because `OpenAICallbackHandler` implements the `on_llm_end` method but not the `on_chain_end` method, which seems to be the method that agent callbacks interact with ([source](https://api.python.langchain.com/en/latest/agents/langchain.agents.agent.AgentExecutor.html)). I wanted to define a custom callback handler that extends `OpenAICallbackHandler` and maps `on_llm_end` onto `on_chain_end`, but this is not straightforward, and possibly not doable: the `LLMResult` instance used in `on_llm_end` is lost inside the interaction between the agent and the chat model, which cuts off access to the `token_usage` property.
Can token usage somehow be monitored when working with Langchain Agents?
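A possible alternative worth noting: the `get_openai_callback` context manager scopes a handler to everything invoked inside the `with` block, which should capture the agent's LLM calls (sketch reuses `agent_executor` from above):

```python
from langchain_community.callbacks import get_openai_callback

with get_openai_callback() as cb:
    agent_executor.invoke({"input": "test"})

# cb accumulates usage across every LLM call the agent made inside the block
print(cb.total_tokens, cb.prompt_tokens, cb.completion_tokens, cb.total_cost)
```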
### System Info
langchain==0.1.9
langchain-openai==0.0.7
langchainhub==0.1.14
pydantic==1.10.13 | OpenAICallbackHandler not counting token usage for Agents | https://api.github.com/repos/langchain-ai/langchain/issues/18130/comments | 10 | 2024-02-26T14:20:12Z | 2024-06-04T21:51:39Z | https://github.com/langchain-ai/langchain/issues/18130 | 2,154,300,761 | 18,130 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I tried to configure `MongoDBChatMessageHistory` using the code from the official documentation, expecting messages to be stored under the `session_id` passed in at runtime. However, that configuration did not take effect: the session id in the database remained `'test_session'`. To resolve this, the history factory must be configured with `session_id=session_id` (the variable) instead of the hard-coded `session_id="test_session"`.
pr: https://github.com/langchain-ai/langchain/pull/18128
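For reference, a sketch of the corrected pattern (connection details are placeholders, and `chain` stands for whatever runnable is being wrapped):

```python
from langchain_community.chat_message_histories import MongoDBChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: MongoDBChatMessageHistory(
        session_id=session_id,  # forward the caller's id, not a hard-coded "test_session"
        connection_string="mongodb://localhost:27017",
        database_name="my_db",
        collection_name="chat_histories",
    ),
    input_messages_key="question",
    history_messages_key="history",
)
```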
_No response_ | DOC: Ineffective Configuration of MongoDBChatMessageHistory for Custom session_id Storage | https://api.github.com/repos/langchain-ai/langchain/issues/18127/comments | 0 | 2024-02-26T14:04:47Z | 2024-02-28T15:57:20Z | https://github.com/langchain-ai/langchain/issues/18127 | 2,154,265,433 | 18,127 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
index_name = f"vector_{index_suffix}"
keyword_index_name = f"keyword_{index_suffix}"
print(f"Setup with indices: {index_name} and {keyword_index_name} ")
hybrid_db = Neo4jVector.from_documents(
docs,
embeddings,
url=url,
username=username,
password=password,
search_type="hybrid",
pre_delete_collection=True,
index_name=index_name,
keyword_index_name=keyword_index_name,
)
print(f"\nLoaded hybrid_db {hybrid_db.search_type} with indices: {hybrid_db.index_name} and {hybrid_db.keyword_index_name} ")
print(f"Embedded {index_suffix}\nsize of docs: {len(docs)}\n")
```
### Error Message and Stack Trace (if applicable)
Setup with indices: vector_data and keyword_data
Loaded hybrid_db hybrid with indices: vector and keyword
Embedded data
size of docs: 543
### Description
* `Neo4jVector.from_documents` does not seem to apply the `index_name` and `keyword_index_name` values that are passed to it
* They should be set, as I set them explicitly in the code above (a workaround sketch follows)
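Until this is fixed, a hedged workaround sketch (reusing `embeddings`, `url`, `username`, `password`, and the index-name variables from the example above): reload the store from the indexes that `from_documents` just created, so the wrapper's attributes match what is in the database.

```python
hybrid_db = Neo4jVector.from_existing_index(
    embeddings,
    url=url,
    username=username,
    password=password,
    search_type="hybrid",
    index_name=index_name,
    keyword_index_name=keyword_index_name,
)
```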
### System Info
$ python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.12.2 (main, Feb 7 2024, 21:49:26) [GCC 10.2.1 20210110]
Package Information
-------------------
> langchain_core: 0.1.26
> langchain: 0.1.9
> langchain_community: 0.0.24
> langsmith: 0.1.7
> langchain_openai: 0.0.7
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Neo4jVector.from_documents doesn't set the index_name and keyword_index_name | https://api.github.com/repos/langchain-ai/langchain/issues/18126/comments | 3 | 2024-02-26T13:51:37Z | 2024-06-08T16:12:26Z | https://github.com/langchain-ai/langchain/issues/18126 | 2,154,237,278 | 18,126 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code is from https://github.com/langchain-ai/langgraph/blob/main/examples/web-navigation/web_voyager.ipynb:
```py
from langchain import hub
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
# mark_page is a chain defined earlier in the notebook (omitted here)
async def annotate(state):
    marked_page = await mark_page.with_retry().ainvoke(state["page"])
    return {**state, **marked_page}


def format_descriptions(state):
    labels = []
    for i, bbox in enumerate(state["bboxes"]):
        text = bbox.get("ariaLabel") or ""
        if not text.strip():
            text = bbox["text"]
        el_type = bbox.get("type")
        labels.append(f'{i} (<{el_type}/>): "{text}"')
    bbox_descriptions = "\nValid Bounding Boxes:\n" + "\n".join(labels)
    return {**state, "bbox_descriptions": bbox_descriptions}


def parse(text: str) -> dict:
    action_prefix = "Action: "
    if not text.strip().split("\n")[-1].startswith(action_prefix):
        return {"action": "retry", "args": f"Could not parse LLM Output: {text}"}
    action_block = text.strip().split("\n")[-1]
    action_str = action_block[len(action_prefix):]
    split_output = action_str.split(" ", 1)
    if len(split_output) == 1:
        action, action_input = split_output[0], None
    else:
        action, action_input = split_output
    action = action.strip()
    if action_input is not None:
        action_input = [
            inp.strip().strip("[]") for inp in action_input.strip().split(";")
        ]
    return {"action": action, "args": action_input}
# Will need a later version of langchain to pull
# this image prompt template
prompt = hub.pull("wfh/web-voyager")
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[49], line 1
----> 1 prompt_new = hub.pull("mrpolymath/web-voyager_test")

File /usr/local/lib/python3.11/site-packages/langchain/hub.py:81, in pull(owner_repo_commit, api_url, api_key)
     79 client = _get_client(api_url=api_url, api_key=api_key)
     80 resp: str = client.pull(owner_repo_commit)
---> 81 return loads(resp)

File /usr/local/lib/python3.11/site-packages/langchain_core/_api/beta_decorator.py:109, in beta.<locals>.beta.<locals>.warning_emitting_wrapper(*args, **kwargs)
    107     warned = True
    108     emit_warning()
--> 109 return wrapped(*args, **kwargs)

File /usr/local/lib/python3.11/site-packages/langchain_core/load/load.py:130, in loads(text, secrets_map, valid_namespaces)
    111 @beta()
    112 def loads(
    113     text: str,
   (...)
    116     valid_namespaces: Optional[List[str]] = None,
    117 ) -> Any:
    118     """Revive a LangChain class from a JSON string.
    119     Equivalent to `load(json.loads(text))`.
    120
   (...)
    128         Revived LangChain objects.
    129     """
--> 130 return json.loads(text, object_hook=Reviver(secrets_map, valid_namespaces))

File /usr/local/Cellar/python@3.11/3.11.7_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/__init__.py:359, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    357 if parse_constant is not None:
    358     kw['parse_constant'] = parse_constant
--> 359 return cls(**kw).decode(s)

File /usr/local/Cellar/python@3.11/3.11.7_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
    332 def decode(self, s, _w=WHITESPACE.match):
    333     """Return the Python representation of ``s`` (a ``str`` instance
    334     containing a JSON document).
    335
    336     """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    338 end = _w(s, end).end()
    339 if end != len(s):

File /usr/local/Cellar/python@3.11/3.11.7_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py:353, in JSONDecoder.raw_decode(self, s, idx)
    344 """Decode a JSON document from ``s`` (a ``str`` beginning with
    345 a JSON document) and return a 2-tuple of the Python
    346 representation and the index in ``s`` where the document ended.
   (...)
    350
    351 """
    352 try:
--> 353     obj, end = self.scan_once(s, idx)
    354 except StopIteration as err:
    355     raise JSONDecodeError("Expecting value", s, err.value) from None

File /usr/local/lib/python3.11/site-packages/langchain_core/load/load.py:82, in Reviver.__call__(self, value)
     80 key = tuple(namespace + [name])
     81 if key not in ALL_SERIALIZABLE_MAPPINGS:
---> 82     raise ValueError(
     83         "Trying to deserialize something that cannot "
     84         "be deserialized in current version of langchain-core: "
     85         f"{key}"
     86     )
     87 import_path = ALL_SERIALIZABLE_MAPPINGS[key]
     88 # Split into module and name

ValueError: Trying to deserialize something that cannot be deserialized in current version of langchain-core: ('langchain_core', 'prompts', 'image', 'ImagePromptTemplate')
### Description
I was trying to fork a prompt template from "wfh/web-voyager" at LangSmith Hub and modify it to fit my own use-case.
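A hedged probe, grounded in the traceback above (`Reviver` checks the key against `ALL_SERIALIZABLE_MAPPINGS` in `langchain_core/load/load.py`), which I would expect to print `False` on this install; upgrading `langchain`/`langchain-core` to a release that registers `ImagePromptTemplate` should let the pull succeed.

```python
from langchain_core.load.load import ALL_SERIALIZABLE_MAPPINGS

key = ("langchain_core", "prompts", "image", "ImagePromptTemplate")
# expected: False on langchain-core 0.1.26, matching the ValueError above
print(key in ALL_SERIALIZABLE_MAPPINGS)
```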
### System Info
langchain==0.1.9
langchain-community==0.0.22
langchain-core==0.1.26
langchain-openai==0.0.7
langchainhub==0.1.14
platform (mac)
python version (Python 3.11.7) | Deserialization error when modifying an existing prompt template in LangSmith Hub | https://api.github.com/repos/langchain-ai/langchain/issues/21295/comments | 1 | 2024-02-26T12:42:11Z | 2024-08-10T16:07:10Z | https://github.com/langchain-ai/langchain/issues/21295 | 2,279,169,456 | 21,295 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import os
from urllib.parse import quote_plus

import openai
from sqlalchemy import create_engine, exc
from langchain.memory import ConversationBufferMemory
from langchain_core.prompts import PromptTemplate
from langchain_community.utilities import SQLDatabase
# SQLDatabaseChain lives in langchain-experimental in recent releases
from langchain_experimental.sql import SQLDatabaseChain
from langchain_openai import ChatOpenAI

import constants  # project-local settings module

os.environ['OPENAI_API_KEY'] = openapi_key  # openapi_key is assumed to come from local config

# Define connection parameters using constants
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"

# Create an engine to connect to the SQL database
engine = create_engine(connection_uri)

model_name = "gpt-3.5-turbo-16k"
# db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_rnd360_overall', 'combined_leave_details', 'egv_attendancequery_chatgpt', 'egv_compoff_chatgpt', 'egv_leavedetails_chatgpt', 'egv_location_chatgpt', 'egv_education_chatgpt', 'egv_memo_chatgpt'])
# db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_rnd360_overall', 'egv_leavedetails_chatgpt'])
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)

PROMPT_SUFFIX = """Only use the following tables:
{table_info}
Previous Conversation:
{history}
Question: {input}"""

_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run,
then look at the results of the query and return the answer.
Create the MSSQL query by considering only the column names that match the question.
Check each view, and if the question spans different views, perform a join on those.
If a column name is not present, refrain from writing the SQL query. Columns like UAN number and PF number are not present; do not consider such columns.
Write the query only for the column names which are present in the view.
Execute the query and analyze the results to formulate a response.
Return the answer in sentence form.
Use the following format:
Question: "Question here"
SQLQuery: "SQL Query to run"
SQLResult: "Result of the SQLQuery"
Answer: "Final answer here"
"""

PROMPT = PromptTemplate.from_template(_DEFAULT_TEMPLATE + PROMPT_SUFFIX)

memory = None
# cache = InMemoryCache()


# Define a function named chat1 that takes a question as input
def chat1(question):
    # global db_chain
    global memory
    # prompt = """
    # Given an input question, create a syntactically correct MSSQL query by considering only the matching column names from the question,
    # then look at the results of the query and return the answer.
    # If a column name is not present, refrain from writing the SQL query. Columns like UAN number and PF number are not present; do not consider such columns.
    # Write the query only for the column names which are present in the view.
    # Execute the query and analyze the results to formulate a response.
    # Return the answer in sentence form.
    # The question: {question}
    # """
    try:
        if memory is None:
            memory = ConversationBufferMemory()
        db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, memory=memory)
        greetings = ["hi", "hello", "hey"]
        if any(greeting == question.lower() for greeting in greetings):
            print(question)
            print("Hello! How can I assist you today?")
            return "Hello! How can I assist you today?"
        # elif question in cache:
        #     return cache[question]
        else:
            answer = db_chain.run(question)
            # answer = db_chain.run(prompt.format(question=question))
            # print(memory.load_memory_variables()["history"])
            print(memory.load_memory_variables({}))
            # history = memory.load_memory_variables()["history"]
            # print(history)
            return answer
    except exc.ProgrammingError as e:
        # Check for a specific SQL error related to an invalid column name
        if "Invalid column name" in str(e):
            print("Answer: Error occurred while processing the question")
            print(str(e))
            return "Invalid question. Please check your column names."
        else:
            print("Error occurred while processing")
            print(str(e))
            # return "Unknown ProgrammingError occurred"
            return "Invalid Question"
    except openai.RateLimitError as e:
        print("Error occurred while fetching the answer")
        print(str(e))
        return "Rate limit exceeded. Please mention the specific columns you need!"
    except openai.BadRequestError as e:
        print("Error occurred while fetching the answer")
        print(str(e))
        return "Context length exceeded: This model's maximum context length is 16385 tokens. Please reduce the length of the messages."
    except Exception as e:
        print("Error occurred while processing")
        print(str(e))
        return "Unknown Error Occurred"
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
While using a single view, or two, the model answers accurately, but when I include multiple views and the question requires joining two or more of them, it does not give the correct result: it fails to perform the join and considers either the wrong view or the wrong column from a view.
How can I resolve this issue?
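One mitigation I would try (a sketch only: the join rule, example question, query, and column names such as `EmpID` are invented for illustration, not taken from the real schema) is to extend the prompt with an explicit join rule plus a few-shot example, reusing `_DEFAULT_TEMPLATE` and `PROMPT_SUFFIX` from above:

```python
JOIN_HINT = """When the requested columns live in different views, JOIN the views
on their shared key column before filtering.

Example:
Question: "How many leave days did employees in the Finance department take?"
SQLQuery: SELECT SUM(l.[LeaveDays])
          FROM combined_leave_details l
          JOIN EGV_emp_departments_rnd360_overall e ON l.[EmpID] = e.[EmpID]
          WHERE e.[Department] = 'Finance'
"""

PROMPT = PromptTemplate.from_template(_DEFAULT_TEMPLATE + JOIN_HINT + PROMPT_SUFFIX)
```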
### System Info
python: 3.11
langchain: latest | when connecting multiple views in SQLDatabaseChain model its not performing the join properly | https://api.github.com/repos/langchain-ai/langchain/issues/18120/comments | 6 | 2024-02-26T09:54:23Z | 2024-06-08T16:12:20Z | https://github.com/langchain-ai/langchain/issues/18120 | 2,153,734,246 | 18,120 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from elasticsearch import Elasticsearch
from langchain import hub
from langchain.agents import create_openai_functions_agent, AgentExecutor
from langchain.chains.elasticsearch_database import ElasticsearchDatabaseChain
from langchain_core.tools import Tool
from langchain_openai import ChatOpenAI

# Connect to Elasticsearch
conn = Elasticsearch(
    "http://xx.xxx.xxx:9200",
    ca_certs="certs/http_ca.crt",
    http_auth=("xx", "xxxx"),
    verify_certs=False
)
if conn.ping():
    print("Successfully connected to Elasticsearch!")
else:
    print("Could not connect to Elasticsearch!")
conn.search()

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
db_chain = ElasticsearchDatabaseChain.from_llm(llm, conn, verbose=True)

tools = [
    Tool.from_function(
        func=db_chain.invoke,
        name="es_db_Search",
        description="Search the Elasticsearch database based on the input keywords",
    ),
]

prompt = hub.pull("hwchase17/openai-functions-agent")
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True)

executor_invoke = agent_executor.invoke({"input": "Search the index for solutions matching the keyword [reduce overall weight while keeping strength within requirements]"})
print(executor_invoke)

# invoke = db_chain.invoke({"question": "Search the index for solutions matching the keyword [reduce overall weight while keeping strength within requirements]"})
# print(invoke)
```
### Error Message and Stack Trace (if applicable)
File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\chains\base.py", line 168, in invoke
raise e
File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\chains\base.py", line 158, in invoke
self._call(inputs, run_manager=run_manager)
File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\agents\agent.py", line 1391, in _call
next_step_output = self._take_next_step(
File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\agents\agent.py", line 1097, in _take_next_step
[
File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\agents\agent.py", line 1097, in <listcomp>
[
File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\agents\agent.py", line 1182, in _iter_next_step
yield self._perform_agent_action(
File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\agents\agent.py", line 1204, in _perform_agent_action
observation = tool.run(
File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain_core\tools.py", line 401, in run
raise e
File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain_core\tools.py", line 358, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain_core\tools.py", line 566, in _run
else self.func(*args, **kwargs)
File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\chains\base.py", line 168, in invoke
raise e
File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\chains\base.py", line 158, in invoke
self._call(inputs, run_manager=run_manager)
File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\chains\elasticsearch_database\base.py", line 129, in _call
indices_info = self._get_indices_infos(indices)
File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\langchain\chains\elasticsearch_database\base.py", line 102, in _get_indices_infos
hits = self.database.search(
File "D:\soft\Anaconda3\envs\ptthon310\lib\site-packages\elasticsearch\client\utils.py", line 168, in _wrapped
return func(*args, params=params, headers=headers, **kwargs)
TypeError: Elasticsearch.search() got an unexpected keyword argument 'query'
### Description
When LangChain's `elasticsearch_database` chain calls Elasticsearch inside its `_get_indices_infos` method:

```python
hits = self.database.search(
    index=k,
    query={"match_all": {}},
    size=self.sample_documents_in_index_info,
)["hits"]["hits"],
```

it fails with `Elasticsearch.search() got an unexpected keyword argument 'query'`. On the 7.x Python client, the query must be passed inside `body` rather than as a top-level `query` argument.
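For reference, the two call shapes side by side (the index name is a placeholder); since the chain issues the 8.x-style call, upgrading the `elasticsearch` client to a version that accepts it is the straightforward fix:

```python
# 8.x-style call that ElasticsearchDatabaseChain makes; fails on the 7.13 client
conn.search(index="my_index", query={"match_all": {}}, size=3)

# 7.x-compatible equivalent of the same request
conn.search(index="my_index", body={"query": {"match_all": {}}}, size=3)
```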
### System Info
elasticsearch 7.13.3
langchain 0.1.8
platform: Windows
python 3.10 | langchain integrated with Elasticsearch, search syntax error,Elasticsearch.search() got an unexpected keyword argument 'query' | https://api.github.com/repos/langchain-ai/langchain/issues/18119/comments | 1 | 2024-02-26T09:39:49Z | 2024-06-18T16:09:20Z | https://github.com/langchain-ai/langchain/issues/18119 | 2,153,704,709 | 18,119 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Chroma


def load_documents_v2(file_path,
                      file_filter_regex,
                      vectorstore_persist_path,
                      vectorstore_collection_name,
                      vectorstore_embeddings,
                      parent_chunk_size,
                      child_chunk_size,
                      ):
    """
    It is *not* possible to replace files with updated versions.

    Parameters
    ----------
    file_path : str
        file path that points to the documents
    file_filter_regex
        Regex expression for filtering documents.
        Must be a full match, that is, partial matches won't work
        Ex: r".*(.ts|.tsx)"
    vectorstore_persist_path : str
        where to save the vector store
    vectorstore_collection_name : str
        the name of the vector store
    vectorstore_embeddings
        the previously loaded embeddings
    parent_chunk_size
        how many tokens each document should contain (1000 is the minimum size that gives decent results)
    child_chunk_size
        the smaller the better
    """
    # if os.path.exists(vectorstore_persist_path):
    #     vectorstore = Chroma(
    #         persist_directory=vectorstore_persist_path,
    #         collection_name=vectorstore_collection_name,
    #         embedding_function=vectorstore_embeddings)
    #     return vectorstore

    # load_documents_as_files is a project-local helper (not shown here)
    documents = load_documents_as_files(file_path, file_filter_regex)

    print("chunking")
    parent_splitter = RecursiveCharacterTextSplitter(chunk_size=parent_chunk_size)
    child_splitter = RecursiveCharacterTextSplitter(chunk_size=child_chunk_size)
    store = InMemoryStore()
    vectorstore = Chroma(
        collection_name=vectorstore_collection_name,
        embedding_function=vectorstore_embeddings
    )
    retriever = ParentDocumentRetriever(
        vectorstore=vectorstore,
        docstore=store,
        child_splitter=child_splitter,
        parent_splitter=parent_splitter,
    )

    print("adding to ParentDocumentRetriever")
    retriever.add_documents(documents)
    return [vectorstore, retriever]


result = load_documents_v2(
    file_path="/Users/uavalos/Documents/manage/private/react/pages",
    file_filter_regex="**/*",
    vectorstore_persist_path="/Users/uavalos/Documents/llm/manage-react-pages",
    vectorstore_collection_name="manage",
    vectorstore_embeddings=embeddings,
    parent_chunk_size=1000,
    child_chunk_size=200)
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[9], line 1
----> 1 result = load_documents_v2(
2 file_path="/Users/uavalos/Documents/manage/private/react/pages",
3 file_filter_regex = "**/*",
4 vectorstore_persist_path="/Users/uavalos/Documents/llm/manage-react-pages",
5 vectorstore_collection_name="manage",
6 vectorstore_embeddings=embeddings,
7 parent_chunk_size=1000,
8 child_chunk_size=400)
10 vectorstore = result[0]
11 retriever = result[1]
Cell In[7], line 126, in load_documents_v2(file_path, file_filter_regex, vectorstore_persist_path, vectorstore_collection_name, vectorstore_embeddings, parent_chunk_size, child_chunk_size)
122 print("adding to ParentDocumentRetriever");
124 vectorstore.max_batch_size = 150945
--> 126 retriever.add_documents(documents)
128 # ids = []
129
130 # for doc in documents:
(...)
139
140 # vectorstore.persist()
142 return [vectorstore, retriever]
File ~/miniconda3/envs/cisco3/lib/python3.11/site-packages/langchain/retrievers/parent_document_retriever.py:122, in ParentDocumentRetriever.add_documents(self, documents, ids, add_to_docstore)
120 docs.extend(sub_docs)
121 full_docs.append((_id, doc))
--> 122 self.vectorstore.add_documents(docs)
123 if add_to_docstore:
124 self.docstore.mset(full_docs)
File ~/miniconda3/envs/cisco3/lib/python3.11/site-packages/langchain_core/vectorstores.py:119, in VectorStore.add_documents(self, documents, **kwargs)
117 texts = [doc.page_content for doc in documents]
118 metadatas = [doc.metadata for doc in documents]
--> 119 return self.add_texts(texts, metadatas, **kwargs)
File ~/miniconda3/envs/cisco3/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py:311, in Chroma.add_texts(self, texts, metadatas, ids, **kwargs)
309 raise ValueError(e.args[0] + "\n\n" + msg)
310 else:
--> 311 raise e
312 if empty_ids:
313 texts_without_metadatas = [texts[j] for j in empty_ids]
File ~/miniconda3/envs/cisco3/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py:297, in Chroma.add_texts(self, texts, metadatas, ids, **kwargs)
295 ids_with_metadata = [ids[idx] for idx in non_empty_ids]
296 try:
--> 297 self._collection.upsert(
298 metadatas=metadatas,
299 embeddings=embeddings_with_metadatas,
300 documents=texts_with_metadatas,
301 ids=ids_with_metadata,
302 )
303 except ValueError as e:
304 if "Expected metadata value to be" in str(e):
File ~/miniconda3/envs/cisco3/lib/python3.11/site-packages/chromadb/api/models/Collection.py:487, in Collection.upsert(self, ids, embeddings, metadatas, documents, images, uris)
484 else:
485 embeddings = self._embed(input=images)
--> 487 self._client._upsert(
488 collection_id=self.id,
489 ids=ids,
490 embeddings=embeddings,
491 metadatas=metadatas,
492 documents=documents,
493 uris=uris,
494 )
File ~/miniconda3/envs/cisco3/lib/python3.11/site-packages/chromadb/telemetry/opentelemetry/__init__.py:127, in trace_method.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
125 global tracer, granularity
126 if trace_granularity < granularity:
--> 127 return f(*args, **kwargs)
128 if not tracer:
129 return f(*args, **kwargs)
File ~/miniconda3/envs/cisco3/lib/python3.11/site-packages/chromadb/api/segment.py:447, in SegmentAPI._upsert(self, collection_id, ids, embeddings, metadatas, documents, uris)
445 coll = self._get_collection(collection_id)
446 self._manager.hint_use_collection(collection_id, t.Operation.UPSERT)
--> 447 validate_batch(
448 (ids, embeddings, metadatas, documents, uris),
449 {"max_batch_size": self.max_batch_size},
450 )
451 records_to_submit = []
452 for r in _records(
453 t.Operation.UPSERT,
454 ids=ids,
(...)
459 uris=uris,
460 ):
File ~/miniconda3/envs/cisco3/lib/python3.11/site-packages/chromadb/api/types.py:488, in validate_batch(batch, limits)
477 def validate_batch(
478 batch: Tuple[
479 IDs,
(...)
485 limits: Dict[str, Any],
486 ) -> None:
487 if len(batch[0]) > limits["max_batch_size"]:
--> 488 raise ValueError(
489 f"Batch size {len(batch[0])} exceeds maximum batch size {limits['max_batch_size']}"
490 )
ValueError: Batch size 155434 exceeds maximum batch size 41666
### Description
* I'm basically following the default ParentDocumentRetriever example with both a child and a parent splitter. See here: https://python.langchain.com/docs/modules/data_connection/retrievers/parent_document_retriever
* However, the difference is that I have many (around 10k) docs.
* Also, I'm using a local embedder, e.g. sentence-transformers/all-mpnet-base-v2.
* I hit the error above when trying to add the documents to the ParentDocumentRetriever (a batching workaround I am considering is sketched below).
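As a stopgap I am considering feeding the documents to the retriever in smaller batches so the underlying Chroma upsert stays below its limit. A rough sketch of that workaround (my own code, not a LangChain API; `batch_size` is a number I picked, not a documented value):
```python
# Hypothetical workaround: add parent documents in batches so the number of
# child chunks upserted into Chroma per call stays below max_batch_size.
batch_size = 1000  # arbitrary; tune so the resulting child chunks stay under 41666

for start in range(0, len(documents), batch_size):
    retriever.add_documents(documents[start:start + batch_size])
```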
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 22.6.0: Sun Dec 17 22:13:25 PST 2023; root:xnu-8796.141.3.703.2~2/RELEASE_ARM64_T6020
> Python Version: 3.11.7 (main, Dec 15 2023, 12:09:56) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.0.87
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | "Batch size 155434 exceeds maximum batch size 41666" error with ParentDocumentRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/18105/comments | 1 | 2024-02-26T03:26:21Z | 2024-06-10T16:07:38Z | https://github.com/langchain-ai/langchain/issues/18105 | 2,153,173,912 | 18,105 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
code
```python
from langchain_openai import AzureOpenAIEmbeddings
from dotenv import load_dotenv

load_dotenv()

azure_embeddings = AzureOpenAIEmbeddings(
    azure_deployment="<model_deployment>",
    openai_api_version="2023-05-15",
)
```
and these environment variables:
```
AZURE_OPENAI_ENDPOINT=https://<MY PROJECT>.openai.azure.com/
AZURE_OPENAI_API_KEY=...
```
While debugging I can see that the azure_deployment is set properly, yet the error still makes it impossible to run.
I have tried setting `validate_base_url = False`, but then it throws an error that base_url and azure_deployment are mutually exclusive.
### Error Message and Stack Trace (if applicable)
```
1 validation error for AzureOpenAIEmbeddings
__root__
As of openai>=1.0.0, Azure endpoints should be specified via the `azure_endpoint` param not `openai_api_base` (or alias `base_url`). (type=value_error)
File "/<REDACTED>, line 12, in <module>
azure_embeddings = AzureOpenAIEmbeddings(
pydantic.v1.error_wrappers.ValidationError: 1 validation error for AzureOpenAIEmbeddings
__root__
As of openai>=1.0.0, Azure endpoints should be specified via the `azure_endpoint` param not `openai_api_base` (or alias `base_url`). (type=value_error)
```
### Description
Pydantic should not raise this validation error here: no `openai_api_base` / `base_url` is passed explicitly, only `AZURE_OPENAI_ENDPOINT` is set in the environment (a check I am running is sketched below).
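One thing worth checking, in my view: whether a stray `OPENAI_API_BASE` variable in the environment (picked up by `load_dotenv()`) is what trips the validator. A minimal sketch of that check, assuming that is the cause:
```python
import os

from dotenv import load_dotenv
from langchain_openai import AzureOpenAIEmbeddings

load_dotenv()
# The validator complains about `openai_api_base`, so make sure no stray
# OPENAI_API_BASE is set, and pass the Azure endpoint explicitly instead.
os.environ.pop("OPENAI_API_BASE", None)

azure_embeddings = AzureOpenAIEmbeddings(
    azure_deployment="<model_deployment>",
    openai_api_version="2023-05-15",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)
```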
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:28:58 PST 2023; root:xnu-10002.81.5~7/RELEASE_X86_64
> Python Version: 3.8.18 | packaged by conda-forge | (default, Dec 23 2023, 17:23:49)
[Clang 15.0.7 ]
Package Information
-------------------
> langchain_core: 0.1.26
> langchain: 0.1.9
> langchain_community: 0.0.24
> langsmith: 0.1.8
> langchain_openai: 0.0.7
``` | AzureOpenAIEmbedding raises errors due to deprecation of openai_api_base | https://api.github.com/repos/langchain-ai/langchain/issues/18099/comments | 3 | 2024-02-25T19:25:41Z | 2024-06-08T16:12:11Z | https://github.com/langchain-ai/langchain/issues/18099 | 2,152,905,624 | 18,099
[
"hwchase17",
"langchain"
] | I am getting this error while using
embeddings = HuggingFaceEmbeddings(model_name='sentence-transformers/all-MiniLM-L6-v2')
to_vectorize = [" ".join(example.values()) for example in few_shots]
vectorstore = Chroma.from_texts(
    to_vectorize,
    embedding=embeddings,
    metadatas=few_shots
)
ValueError: Expected EmbeddingFunction.__call__ to have the following signature: odict_keys(['self', 'input']), got odict_keys(['self', 'args', 'kwargs']) Please see https://docs.trychroma.com/embeddings for details of the EmbeddingFunction interface. Please note the recent change to the EmbeddingFunction interface: https://docs.trychroma.com/migration#migration-to-0416---november-7-2023
I'm using chromadb==0.4.15.
_Originally posted by @varayush007 in https://github.com/langchain-ai/langchain/issues/13051#issuecomment-1963036031_
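For what it's worth, the migration note in the error message points at the `EmbeddingFunction` interface change in chromadb 0.4.16, so my working assumption is a version mismatch between the pinned chromadb and the LangChain wrapper. One thing to try (an assumption on my part, not a confirmed fix):
```bash
# Align the wrapper and the EmbeddingFunction interface it expects
pip install -U langchain langchain-community chromadb
```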
| Chromadb error | https://api.github.com/repos/langchain-ai/langchain/issues/18098/comments | 4 | 2024-02-25T19:25:05Z | 2024-05-31T10:24:05Z | https://github.com/langchain-ai/langchain/issues/18098 | 2,152,905,421 | 18,098 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following code raised the error below:
```python
from typing import Any

from pydantic import BaseModel
from unstructured.partition.pdf import partition_pdf

# Get elements
raw_pdf_elements = partition_pdf(
    filename=path + "2304.08485.pdf",
    # Using pdf format to find embedded image blocks
    extract_images_in_pdf=True,
    # Use layout model (YOLOX) to get bounding boxes (for tables) and find titles
    # Titles are any sub-section of the document
    infer_table_structure=True,
    # Post processing to aggregate text once we have the title
    chunking_strategy="by_title",
    # Chunking params to aggregate text blocks
    # Attempt to create a new chunk 3800 chars
    # Attempt to keep chunks > 2000 chars
    # Hard max on chunks
    max_characters=4000,
    new_after_n_chars=3800,
    combine_text_under_n_chars=2000,
    image_output_dir_path=path,
)
```
### Error Message and Stack Trace (if applicable)
WARNING:unstructured:This function will be deprecated in a future release and `unstructured` will simply use the DEFAULT_MODEL from `unstructured_inference.model.base` to set default model name
---------------------------------------------------------------------------
UnidentifiedImageError Traceback (most recent call last)
[<ipython-input-9-99d863c83b7a>](https://localhost:8080/#) in <cell line: 7>()
5
6 # Get elements
----> 7 raw_pdf_elements = partition_pdf(
8 filename=path + "2304.08485.pdf",
9 # Using pdf format to find embedded image blocks
10 frames
[/usr/local/lib/python3.10/dist-packages/PIL/Image.py](https://localhost:8080/#) in open(fp, mode, formats)
3281 continue
3282 except BaseException:
-> 3283 if exclusive_fp:
3284 fp.close()
3285 raise
UnidentifiedImageError: cannot identify image file '/tmp/tmpg2qlx8jd/69fab29b-6c14-4bd4-888b-85c763aa1b31-01.ppm'
### Description
I'm trying to run this notebook in Colab: https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb?ref=blog.langchain.dev
and got the error above.
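A note on reproduction: `partition_pdf` with `extract_images_in_pdf=True` renders pages via poppler, and the failing `.ppm` file looks like an intermediate of that step, so my guess is missing or outdated system packages in the Colab runtime. The setup I am testing (package names are my assumption about the notebook's prerequisites, not a confirmed fix):
```bash
# Assumed system prerequisites for unstructured's PDF partitioning in Colab
apt-get install -y poppler-utils tesseract-ocr
pip install -U "unstructured[all-docs]" pdf2image pillow
```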
### System Info
Google Colab | Error in Semi_structured_and_multi_modal_RAG.ipynb | https://api.github.com/repos/langchain-ai/langchain/issues/18095/comments | 0 | 2024-02-25T18:11:13Z | 2024-06-08T16:12:05Z | https://github.com/langchain-ai/langchain/issues/18095 | 2,152,877,808 | 18,095 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain_community.graphs import NeptuneRdfGraph
### Error Message and Stack Trace (if applicable)
_No response_
### Description
* I was trying to use the neptune_sparql chain on my local system and it raises an ImportError for NeptuneRdfGraph. I noticed that there were some file renames involving NeptuneRdfGraph, but the import in the neptune_sparql.py file was not updated to match (a local patch sketch follows the screenshots).


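As a temporary local workaround I patched the import in neptune_sparql.py to point at the renamed module directly; a sketch, assuming the class now lives in the `neptune_rdf_graph` module:
```python
# Hypothetical one-line patch in langchain/chains/graph_qa/neptune_sparql.py:
# import from the renamed module instead of the package root.
from langchain_community.graphs.neptune_rdf_graph import NeptuneRdfGraph
```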
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.26
> langchain: 0.1.9
> langchain_community: 0.0.23
> langsmith: 0.1.6
> langchain_experimental: 0.0.52
> langchain_google_genai: 0.0.5
> langchain_mistralai: 0.0.4
> langchain_openai: 0.0.6
> langchainhub: 0.1.14
> langgraph: 0.0.24 | Importerror: Getting import error in the neptune_sparql.py file regarding the NeptuneRdfGraph | https://api.github.com/repos/langchain-ai/langchain/issues/18094/comments | 0 | 2024-02-25T16:41:31Z | 2024-06-08T16:12:00Z | https://github.com/langchain-ai/langchain/issues/18094 | 2,152,845,374 | 18,094 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings
from chromadb.config import Settings

db = Chroma.from_documents(texts, OpenAIEmbeddings(disallowed_special=()), client_settings=Settings(anonymized_telemetry=False))
retriever = db.as_retriever(
    search_type="mmr",  # Also test "similarity"
    search_kwargs={"k": 8},
)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "lchain.py", line 26, in <module>
from chromadb.config import Settings
File "/home/duarte/.local/lib/python3.8/site-packages/chromadb/__init__.py", line 5, in <module>
from chromadb.auth.token import TokenTransportHeader
File "/home/duarte/.local/lib/python3.8/site-packages/chromadb/auth/token/__init__.py", line 26, in <module>
from chromadb.telemetry.opentelemetry import (
File "/home/duarte/.local/lib/python3.8/site-packages/chromadb/telemetry/opentelemetry/__init__.py", line 11, in <module>
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
File "/home/duarte/.local/lib/python3.8/site-packages/opentelemetry/exporter/otlp/proto/grpc/trace_exporter/__init__.py", line 24, in <module>
from opentelemetry.exporter.otlp.proto.common.trace_encoder import (
File "/home/duarte/.local/lib/python3.8/site-packages/opentelemetry/exporter/otlp/proto/common/trace_encoder.py", line 16, in <module>
from opentelemetry.exporter.otlp.proto.common._internal.trace_encoder import (
File "/home/duarte/.local/lib/python3.8/site-packages/opentelemetry/exporter/otlp/proto/common/_internal/trace_encoder/__init__.py", line 44, in <module>
SpanKind.INTERNAL: PB2SPan.SpanKind.SPAN_KIND_INTERNAL,
AttributeError: 'EnumTypeWrapper' object has no attribute 'SPAN_KIND_INTERNAL'
### Description
While following the Code understanding tutorial (https://python.langchain.com/docs/use_cases/code_understanding#use-case), I find myself with an AttributeError related to opentelemetry. I tried reinstalling the relevant packages and turning off telemetry when initializing Chroma, but the issue persists (my next attempt is sketched below).
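The failing line sits in the OTLP exporter's protobuf-generated bindings, so my current guess is a protobuf build that is too old for the installed opentelemetry packages. What I plan to try next (an assumption, not a verified fix):
```bash
# Re-align protobuf with the opentelemetry exporter's generated code
pip install -U protobuf opentelemetry-proto opentelemetry-exporter-otlp-proto-grpc
```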
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Apr 2 22:23:49 UTC 2021
> Python Version: 3.8.10 (default, Nov 22 2023, 10:22:35)
[GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.1.26
> langchain: 0.1.9
> langchain_community: 0.0.24
> langsmith: 0.1.7
> langchain_openai: 0.0.7
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | AttributeError on opentelemetry when initializing Chroma from documents | https://api.github.com/repos/langchain-ai/langchain/issues/18093/comments | 4 | 2024-02-25T15:17:14Z | 2024-07-21T16:05:50Z | https://github.com/langchain-ai/langchain/issues/18093 | 2,152,810,537 | 18,093 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Before I updated my LangChain version, this code worked without any issue:

    llm = ChatOpenAI(temperature=0.0, model="gpt-4")
    sql_toolkit = SQLDatabaseToolkit(db=db, llm=llm)
    llm_with_tool = llm.bind_tools(analyze_tool)
    sql_toolkit.get_tools()

But after updating LangChain I get this error:

'ChatOpenAI' object has no attribute 'bind_tools'
### Idea or request for content:
Can anyone guide me on how to solve this issue? (A sketch of what I expected to work follows.)
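For context, a minimal sketch of what I expected to work, assuming `bind_tools` is exposed on the `ChatOpenAI` class shipped by a recent `langchain-openai` release (rather than the legacy `langchain.chat_models` import):
```python
# Sketch only: assumes a recent langchain-openai where ChatOpenAI has bind_tools.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0.0, model="gpt-4")
llm_with_tool = llm.bind_tools(analyze_tool)  # analyze_tool is defined elsewhere in my code
```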
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Reading the `ConversationBufferWindowMemory` class, it seems that the property `buffer` returns `self.buffer_as_messages` if `self.return_messages` is `True`, otherwise it returns `self.buffer_as_str`.
https://github.com/langchain-ai/langchain/blob/7fc903464a753ac10e9b671906c2e9889e4d598e/libs/langchain/langchain/memory/buffer_window.py#L17-L21
However, the docstrings of the properties `buffer_as_str` and `buffer_as_messages` seem inverted where they mention `return_messages` being `True` and `False` respectively:
https://github.com/langchain-ai/langchain/blob/7fc903464a753ac10e9b671906c2e9889e4d598e/libs/langchain/langchain/memory/buffer_window.py#L22-L30
https://github.com/langchain-ai/langchain/blob/7fc903464a753ac10e9b671906c2e9889e4d598e/libs/langchain/langchain/memory/buffer_window.py#L32-L35
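For clarity, a sketch of the corrected docstring wording I would expect, mirroring the logic of the `buffer` property above (this is my proposed wording, not the current source):
```python
from typing import List

from langchain_core.messages import BaseMessage


class ConversationBufferWindowMemory:  # illustration of the docstring proposal only
    return_messages: bool = False

    @property
    def buffer_as_str(self) -> str:
        """Exposes the buffer as a string in case return_messages is False."""
        ...

    @property
    def buffer_as_messages(self) -> List[BaseMessage]:
        """Exposes the buffer as a list of messages in case return_messages is True."""
        ...
```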
### Idea or request for content:
_No response_ | DOC: Possible mistake in property's docstring of ConversationBufferWindowMemory | https://api.github.com/repos/langchain-ai/langchain/issues/18080/comments | 3 | 2024-02-25T01:56:23Z | 2024-02-26T17:08:04Z | https://github.com/langchain-ai/langchain/issues/18080 | 2,152,559,260 | 18,080 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain_community.document_loaders import UnstructuredWordDocumentLoader
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/home/imxsys/flask-ui/prototypes/LangChainProto/./deeplake_vector_docxi.py", line 2, in <module>
from langchain_community.document_loaders import UnstructuredWordDocumentLoader
File "/home/imxsys/.local/lib/python3.11/site-packages/langchain_community/document_loaders/__init__.py", line 53, in <module>
from langchain_community.document_loaders.blackboard import BlackboardLoader
File "/home/imxsys/.local/lib/python3.11/site-packages/langchain_community/document_loaders/blackboard.py", line 10, in <module>
from langchain_community.document_loaders.pdf import PyPDFLoader
File "/home/imxsys/.local/lib/python3.11/site-packages/langchain_community/document_loaders/pdf.py", line 18, in <module>
from langchain_community.document_loaders.parsers.pdf import (
File "/home/imxsys/.local/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/__init__.py", line 8, in <module>
from langchain_community.document_loaders.parsers.language import LanguageParser
File "/home/imxsys/.local/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/language/__init__.py", line 1, in <module>
from langchain_community.document_loaders.parsers.language.language_parser import (
File "/home/imxsys/.local/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/language/language_parser.py", line 39, in <module>
"c": Language.C,
^^^^^^^^^^
File "/usr/lib/python3.11/enum.py", line 784, in __getattr__
raise AttributeError(name) from None
AttributeError: C
### Description
I can't include UnstructuredWordDocumentLoader from langchain_community.document_loaders
### System Info
pip3.11 list | grep langch
langchain 0.1.7
langchain-community 0.0.21
langchain-core 0.1.26
langchain-google-vertexai 0.0.5
langchain-openai 0.0.6 | raise AttributeError(name) from None AttributeError: C | https://api.github.com/repos/langchain-ai/langchain/issues/18076/comments | 1 | 2024-02-24T23:34:27Z | 2024-02-24T23:37:32Z | https://github.com/langchain-ai/langchain/issues/18076 | 2,152,521,908 | 18,076 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
import os

from dotenv import load_dotenv
from langchain.document_loaders.pdf import PyMuPDFLoader
from langchain_community.document_loaders import DirectoryLoader, TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Qdrant
from langchain_community.embeddings import GPT4AllEmbeddings
from langchain.indexes import SQLRecordManager, index
from qdrant_client import QdrantClient

load_dotenv()

loaders = {
    '.pdf': PyMuPDFLoader,
    '.txt': TextLoader
}

def create_directory_loader(file_type, directory_path):
    '''Define a function to create a DirectoryLoader for a specific file type'''
    return DirectoryLoader(
        path=directory_path,
        glob=f"**/*{file_type}",
        loader_cls=loaders[file_type],
        show_progress=True,
        use_multithreading=True
    )

dirpath = os.environ.get('TEMP_DOCS_DIR')
txt_loader = create_directory_loader('.txt', dirpath)
texts = txt_loader.load()

full_text = ''
for paper in texts:
    full_text = full_text + paper.page_content
full_text = " ".join(l for l in full_text.splitlines() if l)

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=2048,
    chunk_overlap=512
)
document_chunks = text_splitter.create_documents(
    [full_text], [{'source': 'education'}])

embeddings = GPT4AllEmbeddings()

collection_name = "testing_v1"
namespace = f"mydata/{collection_name}"
record_manager = SQLRecordManager(
    namespace, db_url="sqlite:///record_manager_cache.sql"
)
record_manager.create_schema()

url = 'http://0.0.0.0:6333'
client = QdrantClient(url)
qdrant = Qdrant(
    client=client,
    embeddings=embeddings,
    collection_name='testing_v1',
)

index_stats = index(
    document_chunks,
    record_manager,
    qdrant,
    cleanup="full",
    source_id_key="source"
)
print(index_stats)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/home/khophi/Development/myApp/llm/api/embeddings.py", line 79, in <module>
index_stats = index(
File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/langchain/indexes/_api.py", line 326, in index
vector_store.add_documents(docs_to_index, ids=uids)
File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 119, in add_documents
return self.add_texts(texts, metadatas, **kwargs)
File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/langchain_community/vectorstores/qdrant.py", line 181, in add_texts
self.client.upsert(
File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/qdrant_client/qdrant_client.py", line 987, in upsert
return self._client.upsert(
File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/qdrant_client/qdrant_remote.py", line 1300, in upsert
http_result = self.openapi_client.points_api.upsert_points(
File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/qdrant_client/http/api/points_api.py", line 1439, in upsert_points
return self._build_for_upsert_points(
File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/qdrant_client/http/api/points_api.py", line 738, in _build_for_upsert_points
return self.api_client.request(
File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/qdrant_client/http/api_client.py", line 74, in request
return self.send(request, type_)
File "/home/khophi/Development/myApp/llm/venv/lib/python3.10/site-packages/qdrant_client/http/api_client.py", line 97, in send
raise UnexpectedResponse.for_response(response)
qdrant_client.http.exceptions.UnexpectedResponse: Unexpected Response: 404 (Not Found)
Raw response content:
b'{"status":{"error":"Not found: Collection `testing_v1` doesn\'t exist!"},"time":0.0000653}'
(venv) khophi@KhoPhi:~/Development/myApp/llm/api$
```
### Description
I'm following the tutorial here trying to use Qdrant as the vectorstore
https://python.langchain.com/docs/modules/data_connection/indexing
According to the docs:
> Do not use with a store that has been pre-populated with content independently of the indexing API, as the record manager will not know that records have been inserted previously.
Since this should preferably work with a brand-new collection from the start, there is a good chance that collection does not exist yet. In that case I'd expect the index to offer something similar to the `force_create` flag in the `Qdrant.from_documents(...)` function to create the collection if it doesn't exist before proceeding. That way the index DB and the collection start from the same point.
As it stands now, there isn't a way to create an empty collection through the LangChain Qdrant wrapper (a workaround using the qdrant_client directly is sketched below).
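The workaround I am using: create the collection with qdrant_client directly before calling index(). A sketch; the vector size of 384 is my assumption for the default GPT4All/MiniLM embedding model, so verify it against your embeddings:
```python
from qdrant_client import QdrantClient, models

client = QdrantClient('http://0.0.0.0:6333')
client.create_collection(
    collection_name="testing_v1",
    vectors_config=models.VectorParams(
        size=384,  # assumed embedding dimension; check len(embeddings.embed_query("x"))
        distance=models.Distance.COSINE,
    ),
)
```
If the collection already exists, `create_collection` raises; `client.recreate_collection(...)` can be used instead when starting fresh is acceptable.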
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.24
> langchain: 0.0.350
> langchain_community: 0.0.3
> langsmith: 0.1.3
> langchain_cli: 0.0.19
> langchain_experimental: 0.0.47
> langchain_mistralai: 0.0.4
> langchain_openai: 0.0.6
> langchainhub: 0.1.14
> langgraph: 0.0.24
> langserve: 0.0.36 | When indexing vectorstore, if no collection, create one - 404 collection not found in Qdrant when Indexing | https://api.github.com/repos/langchain-ai/langchain/issues/18068/comments | 1 | 2024-02-24T14:25:30Z | 2024-06-25T16:13:27Z | https://github.com/langchain-ai/langchain/issues/18068 | 2,152,338,733 | 18,068 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain_community.document_loaders import WebBaseLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
# Load the Wikipedia page
loader = WebBaseLoader("https://en.wikipedia.org/wiki/New_York_City")
documents = loader.load()
# Split the text into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
# Create embeddings
embeddings = OpenAIEmbeddings()
# Create a vector store
db = Chroma.from_documents(texts, embeddings, collection_name="wiki-nyc")
# Create a retriever
retriever = db.as_retriever()
# Create a QA chain
llm = OpenAI(temperature=0)
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)
```
### Error Message and Stack Trace (if applicable)
```
from langchain_community.document_loaders import WikipediaLoader
File "/home/runner/Langchain/.pythonlibs/lib/python3.10/site-packages/langchain_community/document_loaders/__init__.py", line 53, in <module>
from langchain_community.document_loaders.blackboard import BlackboardLoader
File "/home/runner/Langchain/.pythonlibs/lib/python3.10/site-packages/langchain_community/document_loaders/blackboard.py", line 10, in <module>
from langchain_community.document_loaders.pdf import PyPDFLoader
File "/home/runner/Langchain/.pythonlibs/lib/python3.10/site-packages/langchain_community/document_loaders/pdf.py", line 18, in <module>
from langchain_community.document_loaders.parsers.pdf import (
File "/home/runner/Langchain/.pythonlibs/lib/python3.10/site-packages/langchain_community/document_loaders/parsers/__init__.py", line 8, in <module>
from langchain_community.document_loaders.parsers.language import LanguageParser
File "/home/runner/Langchain/.pythonlibs/lib/python3.10/site-packages/langchain_community/document_loaders/parsers/language/__init__.py", line 1, in <module>
from langchain_community.document_loaders.parsers.language.language_parser import (
File "/home/runner/Langchain/.pythonlibs/lib/python3.10/site-packages/langchain_community/document_loaders/parsers/language/language_parser.py", line 39, in <module>
"c": Language.C,
File "/nix/store/xf54733x4chbawkh1qvy9i1i4mlscy1c-python3-3.10.11/lib/python3.10/enum.py", line 437, in __getattr__
raise AttributeError(name) from None
AttributeError: C
```
### Description
When trying a basic script that calls the Wikipedia loader or WebBaseLoader (maybe any loader?), I get the error above. Here's another example script that throws the same error (a reinstall check I plan to try is sketched after it).
```
from langchain_community.document_loaders import WikipediaLoader
docs = WikipediaLoader(query="Genesis of the Daleks", load_max_docs=2).load()
len(docs)
docs[0].metadata # meta-information of the Document
docs[0].page_content[:400] # a content of the Document
```
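The failing line is `"c": Language.C` inside langchain_community's language parser, which suggests the interpreter is picking up a langchain-core without the `Language.C` enum member even though pip reports 0.1.26, so I suspect a stale or duplicated install. What I would try first (my assumption, not a confirmed fix):
```bash
# Rule out a stale or shadowed install of langchain-core in this environment
pip install --force-reinstall -U langchain-core langchain-community langchain
```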
### System Info
System Information
------------------
> OS: Linux
> OS Version: #13~22.04.1-Ubuntu SMP Wed Jan 24 23:39:40 UTC 2024
> Python Version: 3.10.11 (main, Apr 4 2023, 22:10:32) [GCC 12.2.0]
Package Information
-------------------
> langchain_core: 0.1.26
> langchain: 0.1.6
> langchain_community: 0.0.24
> langsmith: 0.1.7
> langchain_openai: 0.0.5
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | AttributeError: C when importing WikipediaLoader / WebBaseLoader | https://api.github.com/repos/langchain-ai/langchain/issues/18067/comments | 2 | 2024-02-24T14:22:59Z | 2024-02-25T11:06:43Z | https://github.com/langchain-ai/langchain/issues/18067 | 2,152,337,867 | 18,067 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
### Problem
I spent a lot of time troubleshooting how to pass `search_kwargs` to the `vectorstore.as_retriever()` method. I believe there are a number of nesting/traceability issues that could be improved if the optional search_kwargs parameters were defined as named parameters.
[as_retriever method](https://github.com/langchain-ai/langchain/blob/9ebbca369560e6f8eca42bf27ed5215807695f8b/libs/core/langchain_core/vectorstores.py#L573)
Although the [examples](https://python.langchain.com/docs/modules/data_connection/retrievers/vectorstore#specifying-top-k) explain it well for `k`, I think there would still be a benefit here, given that the examples aren't always easy to find for every method used. E.g., I wanted to specify `namespace` too.
Example:
```python
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    # retriever=vector_db.as_retriever(**search_kwargs),
    # retriever=vector_db.as_retriever(search_kwargs=search_kwargs),
    retriever=vector_db.as_retriever(k=20, namespace=uid),
    chain_type_kwargs={"prompt": prompt_template},
    return_source_documents=True,
    verbose=True,
)
```
### Expected Usage
From reading the documentation on as_retriever(), I tried these ways of passing my keyword arguments, since the as_retriever() method accepts **kwargs.
```python
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    chain_type_kwargs={"prompt": prompt_template},
    return_source_documents=True,
    verbose=True,
    ######### THIS LINE ##############
    retriever=vector_db.as_retriever(k=20, namespace=uid),
    ######## THIS DOESN'T WORK EITHER #####
    # retriever=vector_db.as_retriever(**search_kwargs),
)
```
### Proper Usage
```python
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    chain_type_kwargs={"prompt": prompt_template},
    return_source_documents=True,
    verbose=True,
    ### THIS WORKS ###
    retriever=vector_db.as_retriever(search_kwargs={"namespace": uid, "k": 20}),
)
```
I'm confused about why search_type and search_kwargs are not named parameters. From going through this exercise, it is clearer to me now that I should read the function docstring, but for the sake of readability, traceability, and type checking, wouldn't it be better to just add those search_kwargs to the function definition?
E.g., my search_kwargs contained `namespace` and `k`, but it's not really clear in the documentation for as_retriever how to pass them.
### Idea or request for content:
## Suggested Enhancement
I propose revising the `as_retriever()` method to include named parameters for common search options, such as `namespace`, `k`, and `search_type`. This would not only clarify usage but also enhance developer experience by providing code completion hints and reducing reliance on external documentation.
For instance, the method signature could be enhanced as follows:
```python
def as_retriever(self, namespace=None, k=4, search_type="similarity", **kwargs):
    ...
```
**Benefits**
- This reduces the need for relying on examples in the docstring of this method to understand its proper usage.
- Allows for auto-completion in IDE.
What do you think? | DOC: Clarification on as_retriever Method Parameters in RetrievalQA Chain | https://api.github.com/repos/langchain-ai/langchain/issues/18045/comments | 0 | 2024-02-23T19:27:56Z | 2024-06-11T00:33:24Z | https://github.com/langchain-ai/langchain/issues/18045 | 2,151,676,931 | 18,045 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.agents import create_sql_agent

db_chain = SQLDatabaseToolkit(
    db=db, llm=bedrock_llm)

agent_executor = create_sql_agent(llm=bedrock_llm, toolkit=db_chain, verbose=True, prompt=few_shot_prompt)
agent_executor.invoke("Question?")
```
### Error Message and Stack Trace (if applicable)
```
ValueError Traceback (most recent call last)
Cell In[11], line 6
3 from langchain.schema.cache import BaseCache
4 db_chain = SQLDatabaseToolkit(
5 db=db, llm=bedrock_llm)
----> 6 agent_executor = create_sql_agent(llm=bedrock_llm, toolkit=db_chain, verbose=True, prompt=few_shot_prompt)
7 #agent_executor.run("What are the number of catapult impressions for new to alexa customers in US for last 2 weeks")
8 agent_executor.invoke("What are the number of catapult impressions for new to alexa customers in US for last 2 weeks?")
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain_community/agent_toolkits/sql/base.py:182, in create_sql_agent(llm, toolkit, agent_type, callback_manager, prefix, suffix, format_instructions, input_variables, top_k, max_iterations, max_execution_time, early_stopping_method, verbose, agent_executor_kwargs, extra_tools, db, prompt, **kwargs)
172 template = "\n\n".join(
173 [
174 react_prompt.PREFIX,
(...)
178 ]
179 )
180 prompt = PromptTemplate.from_template(template)
181 agent = RunnableAgent(
--> 182 runnable=create_react_agent(llm, tools, prompt),
183 input_keys_arg=["input"],
184 return_keys_arg=["output"],
185 )
187 elif agent_type == AgentType.OPENAI_FUNCTIONS:
188 if prompt is None:
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/agents/react/agent.py:97, in create_react_agent(llm, tools, prompt)
93 missing_vars = {"tools", "tool_names", "agent_scratchpad"}.difference(
94 prompt.input_variables
95 )
96 if missing_vars:
---> 97 raise ValueError(f"Prompt missing required variables: {missing_vars}")
99 prompt = prompt.partial(
100 tools=render_text_description(list(tools)),
101 tool_names=", ".join([t.name for t in tools]),
102 )
103 llm_with_stop = llm.bind(stop=["\nObservation"])
ValueError: Prompt missing required variables: {'tool_names', 'tools', 'agent_scratchpad'}
```
### Description
I am trying to use the SQL agent from LangChain and am running into this error with my custom few-shot prompt (a sketch of a prompt that satisfies the validator follows).
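Based on the validation in create_react_agent, the custom prompt apparently has to declare the three variables named in the error (plus `{input}`). A sketch of a template that would pass that check; the wording around the placeholders is my own filler, not the official SQL agent prompt:
```python
from langchain_core.prompts import PromptTemplate

# Minimal template carrying the variables that create_react_agent validates for.
few_shot_prompt = PromptTemplate.from_template(
    """Answer the following questions about the database.

You have access to the following tools:
{tools}

Use one of [{tool_names}] when needed.

Question: {input}
{agent_scratchpad}"""
)
```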
### System Info
langchain 0.1.9
langchain-experimental 0.0.49 | SQL Agent got error : Prompt missing required variables: {'tool_names', 'tools', 'agent_scratchpad'} | https://api.github.com/repos/langchain-ai/langchain/issues/18035/comments | 2 | 2024-02-23T18:29:38Z | 2024-06-09T16:07:27Z | https://github.com/langchain-ai/langchain/issues/18035 | 2,151,595,129 | 18,035 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
install the following dependencies
```bash
pip install openai
pip install google-search-results
pip install langchain # version 0.1.9
pip install numexpr
```
Run the following python code (and add the openai api key):
```python
from langchain import hub
from langchain import LLMMathChain, SerpAPIWrapper
from langchain.agents import Tool
from langchain.chat_models import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_functions_agent
import os

os.environ['OPENAI_API_KEY'] = str("xxx")

llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)
tools = [
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math"
    )
]

hub_prompt: object = hub.pull("hwchase17/openai-tools-agent")
agent = create_openai_functions_agent(llm, tools, hub_prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, handle_parsing_errors=True)

input_prompt = "What is the square root of the year of birth of the founder of Space X?"
agent_executor.invoke({"input": input_prompt})
```
### Error Message and Stack Trace (if applicable)
AttributeError: 'AIMessageChunk' object has no attribute 'text'
Trace
Traceback (most recent call last):
File "/Users/fteutsch/Desktop/PythonProjects/private/digiprod-gen/bug_report.py", line 31, in <module>
agent_executor.invoke({"input": input_prompt})
File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 1391, in _call
next_step_output = self._take_next_step(
File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in _take_next_step
[
File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in <listcomp>
[
File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 1125, in _iter_next_step
output = self.agent.plan(
File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain/agents/agent.py", line 387, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2427, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2414, in transform
yield from self._transform_stream_with_config(
File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1494, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2378, in _transform
for output in final_pipeline:
File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1032, in transform
for chunk in input:
File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4167, in transform
yield from self.bound.transform(
File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1042, in transform
yield from self.stream(final, config, **kwargs)
File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 250, in stream
raise e
File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 234, in stream
for chunk in self._stream(
File "/Users/fteutsch/Library/Caches/pypoetry/virtualenvs/digiprod-gen-qSAcCc-Q-py3.10/lib/python3.10/site-packages/langchain_community/chat_models/openai.py", line 418, in _stream
run_manager.on_llm_new_token(chunk.text, chunk=cg_chunk)
AttributeError: 'AIMessageChunk' object has no attribute 'text'
### Description
Since I updated langchain from 0.1.7 to the latest version 0.1.9, I get the exception mentioned above.
I have already found the cause and possibly the fix as well.
libs/community/langchain_community/chat_models/openai.py
lines 414-418
```python
cg_chunk = ChatGenerationChunk(
    message=chunk, generation_info=generation_info
)
if run_manager:
    run_manager.on_llm_new_token(chunk.text, chunk=cg_chunk)  # BUG: `chunk` is an AIMessageChunk and has no `.text`
```
`chunk` has the type `AIMessageChunk`, which does not have a `text` attribute, whereas `cg_chunk` has the type `ChatGenerationChunk`, which does have `text` as an attribute (and in version 0.1.7 the `ChatGenerationChunk` was the one used here).
The fix probably would be:
```python
cg_chunk = ChatGenerationChunk(
message=chunk, generation_info=generation_info
)
if run_manager:
run_manager.on_llm_new_token(**cg_chunk**.text, chunk=cg_chunk)
```
line 510 in the same file contains the same issue.
### System Info
langchain==0.1.9
langchain-community==0.0.22
langchain-core==0.1.26
langchainhub==0.1.14
macOS
python 3.10 | AttributeError: 'AIMessageChunk' object has no attribute 'text' | https://api.github.com/repos/langchain-ai/langchain/issues/18024/comments | 4 | 2024-02-23T16:03:25Z | 2024-02-25T22:41:11Z | https://github.com/langchain-ai/langchain/issues/18024 | 2,151,376,945 | 18,024 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
print("?")
```
### Error Message and Stack Trace (if applicable)
```
File "/home/martin/dev/mlki/langchain/llmaa/env/lib/python3.10/site-packages/langchain_community/utils/math.py", line 29, in cosine_similarity
Z = 1 - simd.cdist(X, Y, metric="cosine")
TypeError: unsupported operand type(s) for -: 'int' and 'simsimd.OutputDistances'
```
### Description
Error came up after installing langchain [v0.1.9](https://github.com/langchain-ai/langchain/releases/tag/v0.1.9) and the python bindings for [simsimd v3.8.1](https://pypi.org/project/simsimd/3.8.1/).
After switching the python package to [simsimd v3.7.7](https://github.com/ashvardanian/SimSIMD/releases/tag/v3.7.7), the error disappeared.
Seems to be related to this change https://github.com/ashvardanian/SimSIMD/commit/819a40666faf038613b2368d1810e2563fb9d422
### System Info
langchain==0.1.9
langchain-community==0.0.22
langchain-core==0.1.26
langchain-openai==0.0.7
langchainhub==0.1.14
| TypeError: unsupported operand type(s) for -: 'int' and 'simsimd.OutputDistances' | https://api.github.com/repos/langchain-ai/langchain/issues/18022/comments | 4 | 2024-02-23T15:41:24Z | 2024-07-17T16:05:03Z | https://github.com/langchain-ai/langchain/issues/18022 | 2,151,340,319 | 18,022 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The code sample: https://python.langchain.com/docs/integrations/vectorstores/bigquery_vector_search
produces an error message.
The exception is thrown when running the following code:
store.similarity_search(query)
### Error Message and Stack Trace (if applicable)
`the JSON object must be str, bytes or bytearray, not dict`
### Description
Once you have debugged the code, the root cause is this block of source code:
Source file: `lib/langchain_community/vectorstores/bigquery_vector_search.py`, around line 548 (langchain_community 0.0.22):
```python
metadata = row[self.metadata_field]
if metadata:
    metadata = json.loads(metadata)
else:
    metadata = {}
```
This can't work, because row[self.metadata_field] is a dictionary.
To make it work, i suggest to replace this part by :
```python
metadata = row[self.metadata_field]
if metadata is None:
    metadata = {}
```
But maybe there is a smarter fix; in my case, this does the job.
### System Info
pip freeze | grep langchain
langchain==0.1.6
langchain-community==0.0.22
langchain-core==0.1.26
langchain-google-genai==0.0.9
langchain-google-vertexai==0.0.5
langchain-openai==0.0.5 | When using BigQueryVectorSearch store and similarity_search(query) : I got the message : `the JSON object must be str, bytes or bytearray, not dict` | https://api.github.com/repos/langchain-ai/langchain/issues/18020/comments | 0 | 2024-02-23T14:10:23Z | 2024-06-08T16:11:45Z | https://github.com/langchain-ai/langchain/issues/18020 | 2,151,174,694 | 18,020 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from getpass import getpass
NOTION_TOKEN = getpass()
DATABASE_ID = getpass()
········
········
from langchain_community.document_loaders import NotionDBLoader
loader = NotionDBLoader(
integration_token=NOTION_TOKEN,
database_id=DATABASE_ID,
request_timeout_sec=30, # optional, defaults to 10
)
docs = loader.load()
print(docs)
```
### Error Message and Stack Trace (if applicable)
b'{"object":"error","status":400,"code":"validation_error","message":"body failed validation. Fix one:\\nbody.filter.or should be defined, instead was `undefined`.\\nbody.filter.and should be defined, instead was `undefined`.\\nbody.filter.title should be defined, instead was `undefined`.\\nbody.filter.rich_text should be defined, instead was `undefined`.\\nbody.filter.number should be defined, instead was `undefined`.\\nbody.filter.checkbox should be defined, instead was `undefined`.\\nbody.filter.select should be defined, instead was `undefined`.\\nbody.filter.multi_select should be defined, instead was `undefined`.\\nbody.filter.status should be defined, instead was `undefined`.\\nbody.filter.date should be defined, instead was `undefined`.\\nbody.filter.people should be defined, instead was `undefined`.\\nbody.filter.files should be defined, instead was `undefined`.\\nbody.filter.url should be defined, instead was `undefined`.\\nbody.filter.email should be defined, instead was `undefined`.\\nbody.filter.phone_number should be defined, instead was `undefined`.\\nbody.filter.relation should be defined, instead was `undefined`.\\nbody.filter.created_by should be defined, instead was `undefined`.\\nbody.filter.created_time should be defined, instead was `undefined`.\\nbody.filter.last_edited_by should be defined, instead was `undefined`.\\nbody.filter.last_edited_time should be defined, instead was `undefined`.\\nbody.filter.formula should be defined, instead was `undefined`.\\nbody.filter.unique_id should be defined, instead was `undefined`.\\nbody.filter.rollup should be defined, instead was `undefined`.","request_id":"a251ecce-5757-44bf-a5f1-c4d7582d72dd"}'
### Description
getting 400, Bad Request
### System Info
langchain==0.1.9
langchain-community==0.0.22
langchain-core==0.1.26
| BadRequest Error on Simple Query to NotionDB Database Using notion_db_secret and database_id, due to passing empty dict in json for filter by default | https://api.github.com/repos/langchain-ai/langchain/issues/18009/comments | 5 | 2024-02-23T09:53:43Z | 2024-03-14T13:56:59Z | https://github.com/langchain-ai/langchain/issues/18009 | 2,150,741,437 | 18,009 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
In langchain_core/prompts/loading.py, load_template, line 46:
    with open(template_path) as f:
### Error Message and Stack Trace (if applicable)
UnicodeDecodeError: 'gbk' codec can't decode byte 0xaa in position 55: illegal multibyte sequence
### Description
The default text encoding on Windows is 'gbk', so load_template has to open prompt files with an explicit 'utf-8' encoding.
The fix is a one-line change in langchain_core/prompts/loading.py, load_template, line 46 (sketched below):
    with open(template_path) as f:  ->  with open(template_path, encoding="utf-8") as f:
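The corresponding one-line patch, as a sketch:
```python
# langchain_core/prompts/loading.py, in load_template (around line 46)

# before: uses the platform default encoding ('gbk' on Windows)
with open(template_path) as f:
    ...

# after: force UTF-8 regardless of the platform default
with open(template_path, encoding="utf-8") as f:
    ...
```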
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.9.13 (main, Aug 25 2022, 23:51:50) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.25
> langchain: 0.1.8
> langchain_community: 0.0.21
> langsmith: 0.1.5
> langchain_experimental: 0.0.51
> langchain_openai: 0.0.6
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | UnicodeDecodeError: 'gbk' codec can't decode byte 0xaa in position 55: illegal multibyte sequence | https://api.github.com/repos/langchain-ai/langchain/issues/17995/comments | 1 | 2024-02-23T03:07:18Z | 2024-07-20T02:28:43Z | https://github.com/langchain-ai/langchain/issues/17995 | 2,150,289,392 | 17,995 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import os

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser

os.environ['OPENAI_API_KEY'] = str("xxx")

prompt = ChatPromptTemplate.from_template("tell me a joke about {foo}")
model = ChatOpenAI(openai_api_key=os.environ["OPEN_AI_KEY"], model="gpt-4")

functions = [
    {
        "name": "joke",
        "description": "A joke",
        "parameters": {
            "type": "object",
            "properties": {
                "setup": {"type": "string", "description": "The setup for the joke"},
                "punchline": {
                    "type": "string",
                    "description": "The punchline for the joke",
                },
            },
            "required": ["setup", "punchline"],
        },
    }
]

chain = (
    prompt
    | model.bind(function_call={"name": "joke"}, functions=functions)
    | JsonOutputFunctionsParser()
)
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[7], line 21
      1 functions = [
      2     {
      3         "name": "joke",
   (...)
     16     }
     17 ]
     18 from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser
     20 chain = (
---> 21     prompt
     22     | model.bind(function_call={"name": "joke"}, functions=functions)
     23     | JsonOutputFunctionsParser()
     24 )

File ~/Library/Python/3.9/lib/python/site-packages/langchain_core/runnables/base.py:2010, in RunnableSequence.__or__(self, other)
   1996     return RunnableSequence(
   1997         self.first,
   1998         *self.middle,
   (...)
   2003         name=self.name or other.name,
   2004     )
   2005 else:
   2006     return RunnableSequence(
   2007         self.first,
   2008         *self.middle,
   2009         self.last,
-> 2010         coerce_to_runnable(other),
   2011         name=self.name,
   2012     )

File ~/Library/Python/3.9/lib/python/site-packages/langchain_core/runnables/base.py:4366, in coerce_to_runnable(thing)
   4364     return cast(Runnable[Input, Output], RunnableParallel(thing))
   4365 else:
-> 4366     raise TypeError(
   4367         f"Expected a Runnable, callable or dict."
   4368         f"Instead got an unsupported type: {type(thing)}"
   4369     )

TypeError: Expected a Runnable, callable or dict.Instead got an unsupported type: <class 'langchain.output_parsers.openai_functions.JsonOutputFunctionsParser'>
### Description
I am running the example given in the cookbook at https://python.langchain.com/docs/expression_language/cookbook/prompt_llm_parser#prompttemplate-llm-outputparser and I am getting the error shown above.
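A hedged sanity check, based only on the versions pinned below: langchain 0.0.229 long predates the split-out langchain-core 0.1.25, and such a mismatch could plausibly make the `|` coercion reject an otherwise valid parser. This is an assumption about the cause, not a confirmed diagnosis, but printing both versions is a cheap first step:

```python
# Hedged check: print the installed versions; the mismatch is an assumed cause, not confirmed
from importlib.metadata import version

print(version("langchain"))       # 0.0.229 per the system info below
print(version("langchain-core"))  # 0.1.25; aligning these releases may resolve the TypeError
```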
### System Info
```bash
langchain==0.0.229
langchain-core==0.1.25
langchain-google-genai==0.0.9
langchain-openai==0.0.6
langchainplus-sdk==0.0.20
``` | Error running the example https://python.langchain.com/docs/expression_language/cookbook/prompt_llm_parser | https://api.github.com/repos/langchain-ai/langchain/issues/17975/comments | 1 | 2024-02-22T21:25:30Z | 2024-06-08T16:11:35Z | https://github.com/langchain-ai/langchain/issues/17975 | 2,149,965,621 | 17,975 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import os
import pinecone
import streamlit as st
from dotenv import load_dotenv, find_dotenv
from langchain.chains import RetrievalQA
from langchain_openai import ChatOpenAI
from langchain_openai import OpenAIEmbeddings
from langchain_community.document_loaders import PyPDFLoader, Docx2txtLoader, TextLoader
from langchain_core.vectorstores import VectorStore
from langchain_community.vectorstores import Pinecone
from langchain_pinecone import PineconeVectorStore
from langchain.text_splitter import RecursiveCharacterTextSplitter
```
### Error Message and Stack Trace (if applicable)
cannot import name 'PineconeVectorStore' from 'langchain_pinecone'
### Description
While importing all of the libraries listed above, I unfortunately get an error, and I can't tell why. I am using the import that is defined in libs/partners/pinecone/langchain_pinecone/vectorstores.py, and the example says to use this import directly. If anyone has had this issue, I would be glad to hear the solution.
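A hedged workaround sketch: the pinned langchain_pinecone 0.0.2 may simply predate the `PineconeVectorStore` name (this is an assumption; the class could have been added in a later release). A version-tolerant import would look like this:

```python
# Hedged sketch: assumes older langchain_pinecone releases only export `Pinecone`
try:
    from langchain_pinecone import PineconeVectorStore
except ImportError:
    # fall back to the older name if the newer alias is not available in this release
    from langchain_pinecone import Pinecone as PineconeVectorStore
```

Upgrading the package (`pip install -U langchain-pinecone`) is the other obvious thing to try.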
### System Info
System Information
------------------
> OS: Linux
> OS Version: #18~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Feb 7 11:40:03 UTC 2
> Python Version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.1.22
> langchain: 0.1.6
> langchain_community: 0.0.19
> langsmith: 0.0.87
> langchain_openai: 0.0.5
> langchain_pinecone: 0.0.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| Issue while importing PineconeVectorStore | https://api.github.com/repos/langchain-ai/langchain/issues/17965/comments | 1 | 2024-02-22T18:54:53Z | 2024-02-22T18:57:55Z | https://github.com/langchain-ai/langchain/issues/17965 | 2,149,739,963 | 17,965 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
# test.py
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores.mongodb_atlas import MongoDBAtlasVectorSearch
embeddings = OpenAIEmbeddings()
MONGO_CONNECTION_URI = "mongo uri"
VECTORS_DB_NAME = "db name"
VECTOR_SEARCH_INDEX_NAME = "index name"
db = MongoDBAtlasVectorSearch.from_connection_string(
MONGO_CONNECTION_URI,
VECTORS_DB_NAME,
embeddings,
index_name=VECTOR_SEARCH_INDEX_NAME,
)
db_query = "Find info related to ..."
num_of_db_results = 10
raw_contexts = db.max_marginal_relevance_search(query=db_query, k=num_of_db_results, lambda_mult=0)
```
### Error Message and Stack Trace (if applicable)
```cmd
Traceback (most recent call last):
File ".../test.py", line 39, in <module>
raw_contexts = db.max_marginal_relevance_search(query=db_query, k=num_of_db_results, lambda_mult=0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../lib/python3.12/site-packages/langchain_community/vectorstores/mongodb_atlas.py", line 325, in max_marginal_relevance_search
[doc.metadata[self._embedding_key] for doc, _ in docs],
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^
KeyError: 'embedding'
```
### Description
* Error when using MMR (max marginal relevance) search. This includes:
  - calling `max_marginal_relevance_search` directly
  - using `db.as_retriever(search_type="mmr")`
* `self._embedding_key` is deleted from `docs` in `_similarity_search_with_score`:
https://github.com/langchain-ai/langchain/blob/f6e3aa9770e32216954c3e0f2fa6825e5d89bd75/libs/community/langchain_community/vectorstores/mongodb_atlas.py#L212
The following method, `maximal_marginal_relevance`, then tries to access the deleted field, resulting in the error (a minimal illustration follows below):
https://github.com/langchain-ai/langchain/blob/f6e3aa9770e32216954c3e0f2fa6825e5d89bd75/libs/community/langchain_community/vectorstores/mongodb_atlas.py#L325
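A minimal sketch of the mechanism described above, using made-up values. This only illustrates the key deletion, not the library's actual code path:

```python
# Illustration only: hypothetical metadata standing in for the real search results
docs = [({"embedding": [0.1, 0.2], "text": "..."}, 0.9)]

for meta, _ in docs:
    del meta["embedding"]  # what _similarity_search_with_score does at line 212

[meta["embedding"] for meta, _ in docs]  # what line 325 then attempts -> KeyError: 'embedding'
```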
Expected behavior:
* the MMR option works with no error
### System Info
Python
`v3.12.0`
requirements.txt
```
langchain==0.1.8
langchain-community==0.0.21
langchain-core==0.1.25
langchain-openai==0.0.5
langsmith==0.1.5
openai==1.11.1
``` | community: [MongoDBAtlasVectorSearch] Fix KeyError 'embedding' when using MMR | https://api.github.com/repos/langchain-ai/langchain/issues/17963/comments | 1 | 2024-02-22T18:40:16Z | 2024-06-08T16:11:30Z | https://github.com/langchain-ai/langchain/issues/17963 | 2,149,716,421 | 17,963 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Hi. I am trying to reproduce the exact example used [here](https://python.langchain.com/docs/modules/agents/agent_types/react), but I use DuckDuckGo to search. The following error occurs:
```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain.tools import DuckDuckGoSearchRun, DuckDuckGoSearchResults
from langchain_community.llms import VertexAI  # import added; the traceback shows the VertexAI LLM was used

# setting the tools
tools = [DuckDuckGoSearchResults()]

# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/react")

# Choose the LLM to use
# llm = OpenAI()
llm = VertexAI()  # uncommented so the snippet runs; this is the model from the traceback

# Construct the ReAct agent
agent = create_react_agent(llm, tools, prompt)

# Create an agent executor by passing in the agent and tools
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "what is LangChain?"})
```
### Error Message and Stack Trace (if applicable)
```
> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[20], line 21
18 # Create an agent executor by passing in the agent and tools
19 agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
---> 21 agent_executor.invoke({"input": "what is LangChain?"})
File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:168, in Chain.invoke(self, input, config, **kwargs)
166 except BaseException as e:
167 run_manager.on_chain_error(e)
--> 168 raise e
169 run_manager.on_chain_end(outputs)
171 if include_run_info:
File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:158, in Chain.invoke(self, input, config, **kwargs)
155 try:
156 self._validate_inputs(inputs)
157 outputs = (
--> 158 self._call(inputs, run_manager=run_manager)
159 if new_arg_supported
160 else self._call(inputs)
161 )
163 final_outputs: Dict[str, Any] = self.prep_outputs(
164 inputs, outputs, return_only_outputs
165 )
166 except BaseException as e:
File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:1391, in AgentExecutor._call(self, inputs, run_manager)
1389 # We now enter the agent loop (until it returns something).
1390 while self._should_continue(iterations, time_elapsed):
-> 1391 next_step_output = self._take_next_step(
1392 name_to_tool_map,
1393 color_mapping,
1394 inputs,
1395 intermediate_steps,
1396 run_manager=run_manager,
1397 )
1398 if isinstance(next_step_output, AgentFinish):
1399 return self._return(
1400 next_step_output, intermediate_steps, run_manager=run_manager
1401 )
File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:1097, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1088 def _take_next_step(
1089 self,
1090 name_to_tool_map: Dict[str, BaseTool],
(...)
1094 run_manager: Optional[CallbackManagerForChainRun] = None,
1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1096 return self._consume_next_step(
-> 1097 [
1098 a
1099 for a in self._iter_next_step(
1100 name_to_tool_map,
1101 color_mapping,
1102 inputs,
1103 intermediate_steps,
1104 run_manager,
1105 )
1106 ]
1107 )
File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:1097, in <listcomp>(.0)
1088 def _take_next_step(
1089 self,
1090 name_to_tool_map: Dict[str, BaseTool],
(...)
1094 run_manager: Optional[CallbackManagerForChainRun] = None,
1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1096 return self._consume_next_step(
-> 1097 [
1098 a
1099 for a in self._iter_next_step(
1100 name_to_tool_map,
1101 color_mapping,
1102 inputs,
1103 intermediate_steps,
1104 run_manager,
1105 )
1106 ]
1107 )
File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:1125, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1122 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
1124 # Call the LLM to see what to do.
-> 1125 output = self.agent.plan(
1126 intermediate_steps,
1127 callbacks=run_manager.get_child() if run_manager else None,
1128 **inputs,
1129 )
1130 except OutputParserException as e:
1131 if isinstance(self.handle_parsing_errors, bool):
File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:387, in RunnableAgent.plan(self, intermediate_steps, callbacks, **kwargs)
381 # Use streaming to make sure that the underlying LLM is invoked in a streaming
382 # fashion to make it possible to get access to the individual LLM tokens
383 # when using stream_log with the Agent Executor.
384 # Because the response from the plan is not a generator, we need to
385 # accumulate the output into final output and return that.
386 final_output: Any = None
--> 387 for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
388 if final_output is None:
389 final_output = chunk
File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:2427, in RunnableSequence.stream(self, input, config, **kwargs)
2421 def stream(
2422 self,
2423 input: Input,
2424 config: Optional[RunnableConfig] = None,
2425 **kwargs: Optional[Any],
2426 ) -> Iterator[Output]:
-> 2427 yield from self.transform(iter([input]), config, **kwargs)
File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:2414, in RunnableSequence.transform(self, input, config, **kwargs)
2408 def transform(
2409 self,
2410 input: Iterator[Input],
2411 config: Optional[RunnableConfig] = None,
2412 **kwargs: Optional[Any],
2413 ) -> Iterator[Output]:
-> 2414 yield from self._transform_stream_with_config(
2415 input,
2416 self._transform,
2417 patch_config(config, run_name=(config or {}).get("run_name") or self.name),
2418 **kwargs,
2419 )
File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:1494, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
1492 try:
1493 while True:
-> 1494 chunk: Output = context.run(next, iterator) # type: ignore
1495 yield chunk
1496 if final_output_supported:
File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:2378, in RunnableSequence._transform(self, input, run_manager, config)
2369 for step in steps:
2370 final_pipeline = step.transform(
2371 final_pipeline,
2372 patch_config(
(...)
2375 ),
2376 )
-> 2378 for output in final_pipeline:
2379 yield output
File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:1032, in Runnable.transform(self, input, config, **kwargs)
1029 final: Input
1030 got_first_val = False
-> 1032 for chunk in input:
1033 if not got_first_val:
1034 final = chunk
File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:4164, in RunnableBindingBase.transform(self, input, config, **kwargs)
4158 def transform(
4159 self,
4160 input: Iterator[Input],
4161 config: Optional[RunnableConfig] = None,
4162 **kwargs: Any,
4163 ) -> Iterator[Output]:
-> 4164 yield from self.bound.transform(
4165 input,
4166 self._merge_configs(config),
4167 **{**self.kwargs, **kwargs},
4168 )
File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:1042, in Runnable.transform(self, input, config, **kwargs)
1039 final = final + chunk # type: ignore[operator]
1041 if got_first_val:
-> 1042 yield from self.stream(final, config, **kwargs)
File /opt/conda/lib/python3.10/site-packages/langchain_core/language_models/llms.py:452, in BaseLLM.stream(self, input, config, stop, **kwargs)
445 except BaseException as e:
446 run_manager.on_llm_error(
447 e,
448 response=LLMResult(
449 generations=[[generation]] if generation else []
450 ),
451 )
--> 452 raise e
453 else:
454 run_manager.on_llm_end(LLMResult(generations=[[generation]]))
File /opt/conda/lib/python3.10/site-packages/langchain_core/language_models/llms.py:436, in BaseLLM.stream(self, input, config, stop, **kwargs)
434 generation: Optional[GenerationChunk] = None
435 try:
--> 436 for chunk in self._stream(
437 prompt, stop=stop, run_manager=run_manager, **kwargs
438 ):
439 yield chunk.text
440 if generation is None:
File /opt/conda/lib/python3.10/site-packages/langchain_community/llms/vertexai.py:376, in VertexAI._stream(self, prompt, stop, run_manager, **kwargs)
368 def _stream(
369 self,
370 prompt: str,
(...)
373 **kwargs: Any,
374 ) -> Iterator[GenerationChunk]:
375 params = self._prepare_params(stop=stop, stream=True, **kwargs)
--> 376 for stream_resp in completion_with_retry( # type: ignore[misc]
377 self,
378 [prompt],
379 stream=True,
380 is_gemini=self._is_gemini_model,
381 run_manager=run_manager,
382 **params,
383 ):
384 chunk = self._response_to_generation(stream_resp)
385 yield chunk
File /opt/conda/lib/python3.10/site-packages/langchain_community/llms/vertexai.py:76, in completion_with_retry(llm, prompt, stream, is_gemini, run_manager, **kwargs)
73 return llm.client.predict_streaming(prompt[0], **kwargs)
74 return llm.client.predict(prompt[0], **kwargs)
---> 76 return _completion_with_retry(prompt, is_gemini, **kwargs)
File /opt/conda/lib/python3.10/site-packages/tenacity/__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
287 @functools.wraps(f)
288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
--> 289 return self(f, *args, **kw)
File /opt/conda/lib/python3.10/site-packages/tenacity/__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs)
377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
378 while True:
--> 379 do = self.iter(retry_state=retry_state)
380 if isinstance(do, DoAttempt):
381 try:
File /opt/conda/lib/python3.10/site-packages/tenacity/__init__.py:314, in BaseRetrying.iter(self, retry_state)
312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
313 if not (is_explicit_retry or self.retry(retry_state)):
--> 314 return fut.result()
316 if self.after is not None:
317 self.after(retry_state)
File /opt/conda/lib/python3.10/concurrent/futures/_base.py:451, in Future.result(self, timeout)
449 raise CancelledError()
450 elif self._state == FINISHED:
--> 451 return self.__get_result()
453 self._condition.wait(timeout)
455 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File /opt/conda/lib/python3.10/concurrent/futures/_base.py:403, in Future.__get_result(self)
401 if self._exception:
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
406 self = None
File /opt/conda/lib/python3.10/site-packages/tenacity/__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
380 if isinstance(do, DoAttempt):
381 try:
--> 382 result = fn(*args, **kwargs)
383 except BaseException: # noqa: B902
384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
File /opt/conda/lib/python3.10/site-packages/langchain_community/llms/vertexai.py:73, in completion_with_retry.<locals>._completion_with_retry(prompt, is_gemini, **kwargs)
71 else:
72 if stream:
---> 73 return llm.client.predict_streaming(prompt[0], **kwargs)
74 return llm.client.predict(prompt[0], **kwargs)
AttributeError: 'TextGenerationModel' object has no attribute 'predict_streaming'
```
### Description
I'm trying to use LangChain to search and ask about specific topics. I expect to see an answer about the topics, but the above-mentioned error occurs instead.
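A hedged check, assuming the failure comes from an older `google-cloud-aiplatform` SDK that predates streaming support on `TextGenerationModel` (the missing `predict_streaming` attribute in the traceback points that way, but this is not confirmed):

```python
# Hedged check: verifies whether the installed Vertex AI SDK exposes streaming at all
from vertexai.language_models import TextGenerationModel

model = TextGenerationModel.from_pretrained("text-bison")
print(hasattr(model, "predict_streaming"))  # False would suggest upgrading google-cloud-aiplatform
```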
### System Info
langchain==0.1.8
langchain-community==0.0.21
langchain-core==0.1.25
langchainhub==0.1.14 | AttributeError: 'TextGenerationModel' object has no attribute 'predict_streaming' | https://api.github.com/repos/langchain-ai/langchain/issues/17962/comments | 0 | 2024-02-22T18:36:10Z | 2024-06-08T16:11:25Z | https://github.com/langchain-ai/langchain/issues/17962 | 2,149,707,516 | 17,962 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I define a Redis database like
```
self.db: Redis = Redis(
embedding=embeddings,
redis_url=self.url,
index_name=Database.index_name,
)
```
and later on, if my docs are out of date, I am clearing the database by using `self.db.client` directly to call `FLUSHALL`. After that, I add the docs, which hangs indefinitely:
```
self.clear_db()
logger.info(f"Adding {len(docs)} docs to the database")
self.db.add_documents(documents=docs)
logger.info("Finished adding docs")
```
The last log is never seen and no docs are added to the database.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am clearing the Redis database after creating it, which also deletes the index used for the vector database functionality. However, it seems the current implementation cannot handle the index being deleted, or the database being cleared, before documents are added to it.
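A hedged workaround sketch: instead of `FLUSHALL`, drop only the search index and its documents, then rebuild the wrapper. Whether re-instantiating actually recreates the index before the next `add_documents` is an untested assumption about the Redis integration:

```python
# Hedged sketch: drop just the vector index (and its docs) rather than flushing the whole DB
self.db.client.ft(Database.index_name).dropindex(delete_documents=True)

# Re-create the wrapper so the index definition is rebuilt (assumed behavior)
self.db = Redis(
    embedding=embeddings,
    redis_url=self.url,
    index_name=Database.index_name,
)
```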
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Dec 7 03:06:13 EST 2023
> Python Version: 3.11.5 (main, Sep 22 2023, 15:34:29) [GCC 8.5.0 20210514 (Red Hat 8.5.0-20)]
Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.0.87
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | Redis `add/aadd_documents` hangs after calling `FLUSHALL` on Database | https://api.github.com/repos/langchain-ai/langchain/issues/17959/comments | 1 | 2024-02-22T17:06:33Z | 2024-02-22T19:09:41Z | https://github.com/langchain-ai/langchain/issues/17959 | 2,149,527,094 | 17,959 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader
loader = AzureAIDocumentIntelligenceLoader(file_path='<a pdf containing a table>', api_endpoint="<your endpoint>", api_key="<your key>", mode="object")
loaded_documents = loader.load_and_split()
```
### Error Message and Stack Trace (if applicable)
```
AzureAIDocumentIntelligenceParser._generate_docs_object(self, result)
72 # table
73 for table in result.tables:
---> 74 yield Document(
75 page_content=table.cells, # json object
76 metadata={
77 "footnote": table.footnotes,
78 "caption": table.caption,
79 "page": para.bounding_regions[0].page_number,
80 "bounding_box": para.bounding_regions[0].polygon,
81 "row_count": table.row_count,
82 "column_count": table.column_count,
83 "type": "table",
84 },
85 )
...
ValidationError: 1 validation error for Document
page_content
str type expected (type=type_error.str)
```
### Description
* It seems that the `page_content` attribute is not filled correctly.
* It feeds a `list[DocumentTableCell]` into a field that expects a string.
* There is even a comment in the code that declares that it is not a string being passed in but a "json object" instead (a sketch of a possible fix follows below).
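A hedged sketch of a possible fix inside `_generate_docs_object`, serializing the cells before constructing the `Document`. The cell attribute names (`row_index`, `column_index`, `content`) are assumptions about the Azure SDK model and would need checking:

```python
import json

# Hedged fix sketch: Document.page_content must be a str, so serialize the table cells first.
# `table` is the table object from the parser loop; attribute names are assumed.
cells = [
    {"row": cell.row_index, "col": cell.column_index, "content": cell.content}
    for cell in table.cells
]
page_content = json.dumps(cells)  # pass this string to Document(page_content=...)
```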
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6000
> Python Version: 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.22
> langchain: 0.1.6
> langchain_community: 0.0.19
> langsmith: 0.0.87
| AzureAIDocumentIntelligenceParser fills the Document Model incorrectly for tables | https://api.github.com/repos/langchain-ai/langchain/issues/17957/comments | 0 | 2024-02-22T16:47:11Z | 2024-06-08T16:11:20Z | https://github.com/langchain-ai/langchain/issues/17957 | 2,149,492,486 | 17,957 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I tried to run the `SelfQueryRetriever` for retrieving snippets, but it fails with a strange `JSONDecodeError`.
### Idea or request for content:
Below is the code:
```
from langchain.chains.query_constructor.ir import StructuredQuery  # import added; not shown in the original snippet

def fetch_unique_documents(query_template, company_names, initial_limit, desired_count):
company_documents = {}
for company_name in company_names:
# Format the query with the current company name
query = query_template.format(company_names=company_name)
unique_docs = []
seen_contents = set()
current_limit = initial_limit
while len(unique_docs) < desired_count:
structured_query = StructuredQuery(query=query, limit=current_limit)
docs = retriever.get_relevant_documents(structured_query)
# Keep track of whether we found new unique documents in this iteration
found_new_unique = False
for doc in docs:
if doc.page_content not in seen_contents:
unique_docs.append(doc)
seen_contents.add(doc.page_content)
found_new_unique = True
if len(unique_docs) == desired_count:
break
if not found_new_unique or len(unique_docs) == desired_count:
break # Exit if no new unique documents are found or if we've reached the desired count
# Increase the limit more aggressively if we are still far from the desired count
current_limit += desired_count - len(unique_docs)
# Store the results in the dictionary with the company name as the key
company_documents[company_name] = unique_docs
return company_documents
# Example usage
company_names = company_names
query_template = "Does the company {company_names}, has plans to get financial statements?"
desired_count = 5 # The number of unique documents you want per company
initial_limit = 50
# Fetch documents for each company
company_documents = fetch_unique_documents(query_template, company_names, initial_limit=desired_count, desired_count=desired_count)
```
and below is the output
````
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/langchain_core/output_parsers/json.py](https://localhost:8080/#) in parse_and_check_json_markdown(text, expected_keys)
174 try:
--> 175 json_obj = parse_json_markdown(text)
176 except json.JSONDecodeError as e:
17 frames
JSONDecodeError: Extra data: line 6 column 1 (char 78)
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
OutputParserException: Got invalid JSON object. Error: Extra data: line 6 column 1 (char 78)
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/langchain/chains/query_constructor/base.py](https://localhost:8080/#) in parse(self, text)
61 )
62 except Exception as e:
---> 63 raise OutputParserException(
64 f"Parsing text\n{text}\n raised following error:\n{e}"
65 )
OutputParserException: Parsing text
```json
{
"query": "Macquarie Group",
"filter": "NO_FILTER",
"limit": 5
}
```
````
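A hedged note on the snippet above: `get_relevant_documents` is typed to take a plain string query, and the `SelfQueryRetriever` builds the structured query (the JSON being parsed in the traceback) internally. Passing a pre-built `StructuredQuery` is therefore worth re-checking, though it is only an assumption that this relates to the parse failure:

```python
# Sketch: BaseRetriever.get_relevant_documents is typed to take a plain string query
docs = retriever.get_relevant_documents(query)
```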
How do I resolve this issue? | returning JSON Decode Error for SelfQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/17952/comments | 0 | 2024-02-22T14:46:52Z | 2024-06-08T16:11:15Z | https://github.com/langchain-ai/langchain/issues/17952 | 2,149,244,148 | 17,952 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Making a repro will be a *lot* of work for what is likely a fairly simple server-side issue of langsmith. If it isn't, let me know and I'll try to do a repro.
### Error Message and Stack Trace (if applicable)
Failed to batch ingest runs: LangSmithError('Failed to post https://api.smith.langchain.com/runs/batch in LangSmith API. HTTPError(\'422 Client Error: unknown for url: [https://api.smith.langchain.com/runs/batch\](https://api.smith.langchain.com/runs/batch/)', \'{"detail":"[\\\'post\\\', \\\'items\\\', \\\'properties\\\', \\\'name\\\', \\\'maxLength\\\']: \\\\"ChannelRead<[\\\'raw_conversation\\\', \\\'messages\\\', \\\'query\\\', \\\'assistant_id\\\', \\\'assistant_nickname\\\', \\\'user_name\\\', \\\'user_roles\\\', \\\'organization_name\\\']>\\\\" is longer than 128 characters"}\')')
### Description
I started having these errors from langsmith when using it to trace langgraph runs. It ran fine until a couple of days ago, might be an issue with the latest version of langsmith?
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 ZEN SMP PREEMPT_DYNAMIC Mon, 05 Feb 2024 22:07:37 +0000
> Python Version: 3.11.7 (main, Jan 29 2024, 16:03:57) [GCC 13.2.1 20230801]
Package Information
-------------------
> langchain_core: 0.1.19
> langchain: 0.1.4
> langchain_community: 0.0.16
> langsmith: 0.0.84
> langchain_openai: 0.0.5
> langchainhub: 0.1.14
> langgraph: 0.0.21
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | Langsmith: Getting random "Failed to batch ingest runs: LangSmithError('Failed to post https://api.smith.langchain.com/runs/batch in LangSmith API. HTTPError(\'422 Client Error: unknown for url: https://api.smith.langchain.com/runs/batch\', \'{"detail":"..." is longer than 128 characters"}\')') | https://api.github.com/repos/langchain-ai/langchain/issues/17950/comments | 2 | 2024-02-22T13:48:56Z | 2024-03-06T19:00:15Z | https://github.com/langchain-ai/langchain/issues/17950 | 2,149,119,232 | 17,950 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
embedding = OpenAIEmbeddings()
memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True, output_key='answer')
llm = ChatOpenAI(model="gpt-3.5-turbo-16k",temperature=0.1)
vector_store = PGVector(
connection_string=CONNECTION_STRING,
collection_name=COLLECTION_NAME,
embedding_function=embedding
)
retriever = vector_store.as_retriever(search_kwargs={"k": 3})
qa = ConversationalRetrievalChain.from_llm(
llm=llm,
memory=memory,
retriever=retriever,
combine_docs_chain_kwargs={'prompt': prompt},
return_source_documents=True,
)
```
### Error Message and Stack Trace (if applicable)
When running, I got:
{'question': 'what is code of conduct policy?', 'chat_history': [HumanMessage(content='what is code of conduct policy?'), AIMessage(content="The Code of Conduct policy at HabileLabs is a set of guidelines that all team members and board members are expected to know and follow. Here are the key points of the policy:\n\n1. Purpose and Standards:\n 1.1 The Code is based on the highest standards of ethical business conduct.\n 1.2 It serves as a practical guide.........
but it is not able to store the previous conversation in `chat_history`.
### Description
I am not able to store the previous conversation in memory: when I ask about a previous exchange, the bot replies that it doesn't know.
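A hedged sanity check (the names match the snippet above; the expected behavior is an assumption about how the buffer should accumulate):

```python
# Hedged check: after each call, the buffer should contain the prior turns
result = qa.invoke({"question": "what is code of conduct policy?"})
print(memory.chat_memory.messages)  # expect the Human/AI messages from this turn

result = qa.invoke({"question": "what did I just ask you?"})
print(memory.chat_memory.messages)  # both turns should now be present
```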
### System Info
I am using memory with langchain. | Not able to store chat_history while Implementing Memory in ConversationalRetreival Chain | https://api.github.com/repos/langchain-ai/langchain/issues/17944/comments | 0 | 2024-02-22T12:13:31Z | 2024-06-08T16:11:10Z | https://github.com/langchain-ai/langchain/issues/17944 | 2,148,938,171 | 17,944 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain_openai import AzureChatOpenAI
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import QueryEngineTool, ToolMetadata  # import added; missing from the original snippet

# lyft_engine and uber_engine are query engines built elsewhere, as in the linked tutorial
query_engine_tools = [
QueryEngineTool(
query_engine=lyft_engine,
metadata=ToolMetadata(
name="lyft_10k",
description=(
"Provides information about Lyft financials for year 2021. "
"Use a detailed plain text question as input to the tool."
),
),
),
QueryEngineTool(
query_engine=uber_engine,
metadata=ToolMetadata(
name="uber_10k",
description=(
"Provides information about Uber financials for year 2021. "
"Use a detailed plain text question as input to the tool."
),
),
),
]
azure_llm = AzureChatOpenAI(
openai_api_version=api_version,
api_key=api_key,
azure_endpoint=azure_endpoint,
api_version=api_version,
)
azure_agent =ReActAgent.from_tools(
query_engine_tools,
llm=azure_llm,
memory={},
verbose=True,
)
```
### Error Message and Stack Trace (if applicable)
```
File .../.venv/lib/python3.8/site-packages/llama_index/core/memory/chat_memory_buffer.py:56-60
     56 """Create a chat memory buffer from an LLM."""
     57 if llm is not None:
---> 58     context_window = llm.metadata.context_window
     59     token_limit = token_limit or int(context_window * DEFAULT_TOKEN_LIMIT_RATIO)
     60 elif token_limit is None:
AttributeError: 'NoneType' object has no attribute 'context_window'
```
### Description
I am following the tutorial here:
https://docs.llamaindex.ai/en/latest/examples/agent/react_agent_with_query_engine.html
It works great as written. However, when I use AzureChatOpenAI (or AzureOpenAI) instead, I get the error shown above.
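A hedged guess at the cause, with a sketch (both are assumptions): `ReActAgent.from_tools` expects a llama_index LLM wrapper, so a LangChain `AzureChatOpenAI` instance may not be recognized, leaving the internal `llm` as `None`, which would match the `NoneType` in the traceback. llama_index ships its own Azure wrapper:

```python
# Hedged sketch: use llama_index's Azure OpenAI wrapper instead of the LangChain one.
# Requires the llama-index-llms-azure-openai package; kwargs below are assumptions.
from llama_index.llms.azure_openai import AzureOpenAI

azure_llm = AzureOpenAI(
    engine="my-deployment",  # your Azure deployment name
    api_key=api_key,
    azure_endpoint=azure_endpoint,
    api_version=api_version,
)

azure_agent = ReActAgent.from_tools(query_engine_tools, llm=azure_llm, verbose=True)
```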
### System Info
langchain==0.1.8
langchain-community==0.0.21
langchain-core==0.1.25
langchain-openai==0.0.6
langchainhub==0.1.14
python 3.8
Ubuntu | Cant use agent with AzureChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/17940/comments | 1 | 2024-02-22T11:48:30Z | 2024-06-08T16:11:05Z | https://github.com/langchain-ai/langchain/issues/17940 | 2,148,886,063 | 17,940 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
few_shot_prompt = FewShotPromptTemplate(
examples=examples,
example_prompt=PromptTemplate.from_template(
"User input: {input}\nSQL query: {query}"
),
input_variables=["input", "dialect", "top_k"],
prefix=system_prefix,
suffix="User input: {input}\nSQL query: ",
)
full_prompt = ChatPromptTemplate.from_messages(
[
SystemMessagePromptTemplate(prompt=few_shot_prompt),
("human", "{input}"),
MessagesPlaceholder("agent_scratchpad"),
]
)
agent = create_sql_agent(
llm=llm,
db=db,
prompt=full_prompt,
verbose=True
)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[32], line 1
----> 1 agent = create_sql_agent(
2 llm=llm,
3 db=db,
4 prompt=full_prompt,
5 verbose=True
6 )
File ~\anaconda3\Lib\site-packages\langchain_community\agent_toolkits\sql\base.py:182, in create_sql_agent(llm, toolkit, agent_type, callback_manager, prefix, suffix, format_instructions, input_variables, top_k, max_iterations, max_execution_time, early_stopping_method, verbose, agent_executor_kwargs, extra_tools, db, prompt, **kwargs)
172 template = "\n\n".join(
173 [
174 react_prompt.PREFIX,
(...)
178 ]
179 )
180 prompt = PromptTemplate.from_template(template)
181 agent = RunnableAgent(
--> 182 runnable=create_react_agent(llm, tools, prompt),
183 input_keys_arg=["input"],
184 return_keys_arg=["output"],
185 )
187 elif agent_type == AgentType.OPENAI_FUNCTIONS:
188 if prompt is None:
File ~\anaconda3\Lib\site-packages\langchain\agents\react\agent.py:97, in create_react_agent(llm, tools, prompt)
93 missing_vars = {"tools", "tool_names", "agent_scratchpad"}.difference(
94 prompt.input_variables
95 )
96 if missing_vars:
---> 97 raise ValueError(f"Prompt missing required variables: {missing_vars}")
99 prompt = prompt.partial(
100 tools=render_text_description(list(tools)),
101 tool_names=", ".join([t.name for t in tools]),
102 )
103 llm_with_stop = llm.bind(stop=["\nObservation"])
ValueError: Prompt missing required variables: {'tools', 'tool_names'}
```
### Description
`create_sql_agent` throws the error above when the custom `prompt` from the example code is passed in (a hedged workaround sketch follows).
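Two hedged options, both assumptions rather than confirmed fixes. Without an explicit `agent_type`, `create_sql_agent` appears to build a ReAct agent, whose prompt must contain `{tools}` and `{tool_names}`; alternatively, a tool-calling agent type takes a different prompt path:

```python
# Option 1 (assumed): expose the tool placeholders so the ReAct validation passes.
# {tools} and {tool_names} must also appear in the template's input variables.
system_prefix_with_tools = system_prefix + (
    "\n\nYou have access to the following tools:\n{tools}\n"
    "Use only these tool names: {tool_names}"
)

# Option 2 (assumed): request an OpenAI-tools agent, which formats tools differently
agent = create_sql_agent(
    llm=llm,
    db=db,
    prompt=full_prompt,
    agent_type="openai-tools",
    verbose=True,
)
```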
### System Info
langchain 0.1.8
langchain-community 0.0.21
langchain-core 0.1.25
langchain-experimental 0.0.52
langchain-openai 0.0.6 | create_sql_agent: Prompt missing required variables: {'tools', 'tool_names'} | https://api.github.com/repos/langchain-ai/langchain/issues/17939/comments | 10 | 2024-02-22T11:47:34Z | 2024-07-30T23:01:45Z | https://github.com/langchain-ai/langchain/issues/17939 | 2,148,884,491 | 17,939 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import boto3  # assumed setup; the original snippet used an `s3` client defined elsewhere
from langchain_community.document_loaders import S3FileLoader
from langchain_community.document_loaders import UnstructuredWordDocumentLoader

s3 = boto3.client("s3")

# loader = S3FileLoader(s3_bucket, s3_key)
# docs = loader.load()
s3.download_file(s3_bucket, s3_key, f"/tmp/{s3_key}")
loader = UnstructuredWordDocumentLoader(f"/tmp/{s3_key}")  # takes a path, so the open() call was unnecessary
docs = loader.load()
```
### Error Message and Stack Trace (if applicable)
```
[ERROR] OSError: [Errno 30] Read-only file system: '/home/sbx_user1051'
Traceback (most recent call last):
File "/var/task/aws_lambda_powertools/logging/logger.py", line 449, in decorate
return lambda_handler(event, context, *args, **kwargs)
File "/var/task/lambda_handler.py", line 54, in handler
docs = loader.load()
File "/var/task/langchain_community/document_loaders/unstructured.py", line 87, in load
elements = self._get_elements()
File "/var/task/langchain_community/document_loaders/word_document.py", line 124, in _get_elements
return partition_docx(filename=self.file_path, **self.unstructured_kwargs)
File "/var/task/unstructured/documents/elements.py", line 526, in wrapper
elements = func(*args, **kwargs)
File "/var/task/unstructured/file_utils/filetype.py", line 619, in wrapper
elements = func(*args, **kwargs)
File "/var/task/unstructured/file_utils/filetype.py", line 574, in wrapper
elements = func(*args, **kwargs)
File "/var/task/unstructured/chunking/__init__.py", line 69, in wrapper
elements = func(*args, **kwargs)
File "/var/task/unstructured/partition/docx.py", line 228, in partition_docx
return list(elements)
File "/var/task/unstructured/partition/lang.py", line 397, in apply_lang_metadata
elements = list(elements)
File "/var/task/unstructured/partition/docx.py", line 305, in _iter_document_elements
yield from self._iter_paragraph_elements(block_item)
File "/var/task/unstructured/partition/docx.py", line 541, in _iter_paragraph_elements
yield from self._classify_paragraph_to_element(item)
File "/var/task/unstructured/partition/docx.py", line 361, in _classify_paragraph_to_element
TextSubCls = self._parse_paragraph_text_for_element_type(paragraph)
File "/var/task/unstructured/partition/docx.py", line 868, in _parse_paragraph_text_for_element_type
if is_possible_narrative_text(text):
File "/var/task/unstructured/partition/text_type.py", line 78, in is_possible_narrative_text
if exceeds_cap_ratio(text, threshold=cap_threshold):
File "/var/task/unstructured/partition/text_type.py", line 274, in exceeds_cap_ratio
if sentence_count(text, 3) > 1:
File "/var/task/unstructured/partition/text_type.py", line 223, in sentence_count
sentences = sent_tokenize(text)
File "/var/task/unstructured/nlp/tokenize.py", line 29, in sent_tokenize
_download_nltk_package_if_not_present(package_category="tokenizers", package_name="punkt")
File "/var/task/unstructured/nlp/tokenize.py", line 23, in _download_nltk_package_if_not_present
nltk.download(package_name)
File "/var/task/nltk/downloader.py", line 777, in download
for msg in self.incr_download(info_or_id, download_dir, force):
File "/var/task/nltk/downloader.py", line 642, in incr_download
yield from self._download_package(info, download_dir, force)
File "/var/task/nltk/downloader.py", line 699, in _download_package
os.makedirs(download_dir)
File "<frozen os>", line 215, in makedirs
File "<frozen os>", line 225, in makedirs
### Description
I am trying to load a file from an S3 bucket in AWS Lambda using the LangChain document loaders.
I first tried using S3FileLoader, which gave the read-only file system error above.
So I tried downloading the docx file from the S3 bucket first and then used the format-specific loader, UnstructuredWordDocumentLoader (since it was a Word document I uploaded), but it still gave the same error.
Eventually I want to load any type of document from the S3 bucket and generate embeddings to store in an OpenSearch vector database.
Also, if I try to deploy my Lambda with Docker using the image public.ecr.aws/lambda/python:3.11, I get the error "FileNotFoundError: soffice command was not found. Please install libreoffice". (A hedged workaround for the read-only error follows.)
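The traceback shows the failure actually happens when `nltk.download` tries to write the punkt tokenizer into the home directory, which is read-only on Lambda. A hedged workaround sketch: point NLTK at `/tmp`, the only writable path (exactly when unstructured triggers its lazy download is an assumption):

```python
# Hedged workaround: Lambda only allows writes under /tmp, so redirect NLTK's data dir there
import os
os.environ["NLTK_DATA"] = "/tmp/nltk_data"

import nltk
os.makedirs("/tmp/nltk_data", exist_ok=True)
nltk.data.path.append("/tmp/nltk_data")
nltk.download("punkt", download_dir="/tmp/nltk_data")  # run before the loader touches unstructured
```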
### System Info
python version 3.11
langchain==0.1.6
opensearch-py==2.4.2
langchain-community==0.0.19
tiktoken==0.6.0
unstructured
unstructured[docx]
aws_lambda_powertools==2.33.1 | Langchain document loaders give read only file system error on AWS lambda while loading the document | https://api.github.com/repos/langchain-ai/langchain/issues/17936/comments | 7 | 2024-02-22T10:59:30Z | 2024-07-12T16:03:53Z | https://github.com/langchain-ai/langchain/issues/17936 | 2,148,793,334 | 17,936 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
# (duplicate imports from the original snippet removed)
import cx_Oracle
import openai
from sqlalchemy import create_engine

from langchain.agents.agent_types import AgentType
from langchain.chains import create_sql_query_chain
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase
from langchain_community.agent_toolkits import create_sql_agent
from langchain_community.vectorstores import FAISS
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_core.prompts import (
    ChatPromptTemplate,
    FewShotPromptTemplate,
    MessagesPlaceholder,
    PromptTemplate,
    SystemMessagePromptTemplate,
)
from langchain_experimental.sql import SQLDatabaseChain
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Database credentials
username = ""
password = ""
hostname = ""
port = ""
service_name = ""

# Oracle connection string
oracle_connection_string_fmt = (
    'oracle+cx_oracle://{username}:{password}@' +
    cx_Oracle.makedsn('{hostname}', '{port}', service_name='{service_name}')
)
url = oracle_connection_string_fmt.format(
    username=username, password=password,
    hostname=hostname, port=port,
    service_name=service_name,
)

# Create SQLAlchemy engine
engine = create_engine(url, echo=False)

# Create SQLDatabase instance
db = SQLDatabase(engine)
print(db.dialect)
print(db.get_usable_table_names())
db.run("SELECT * FROM our_oracle_table")

import getpass
import os

# os.environ["OPENAI_API_KEY"] = getpass.getpass()
openai.api_type = "azure"
openai.api_base = "oururl"

# Create the completion LLM (the original comment said ChatOpenAI, but OpenAI is what is used)
llm = OpenAI(temperature=0, openai_api_key='xxxxxxxxxxxxxxxxxxx', engine='gpt-35-turbo')

# Create SQL agent
agent_executor = create_sql_agent(
    llm,
    db=db,
    verbose=True,
    handle_parsing_errors=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)

# Invoke the agent with a specific input
agent_executor.invoke({
    "input": ""
})
```
### Error Message and Stack Trace (if applicable)
I have the following questions and errors:
1. Is the above code the right way of connecting LangChain to Oracle SQL Developer? It won't fetch the right table names, and it is not as straightforward as using SQLite. (See the sketch after this list.)
2. If I run certain prompts, I get this error: DatabaseError: ORA-01805: possible error in date/time operation
3. For certain prompts I get this error: ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: ` I should check the list of tables to see if there is a table called MINI_FACTORY
Action: sql_db_list_tables, ""`
4. Please recommend the right way of connecting to the Oracle SQL Developer database.
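A hedged connection sketch. The `schema` argument is an assumption: Oracle tables often live under the connecting user's schema, and `SQLDatabase` may not see them without it, so adjust to your setup:

```python
# Hedged sketch: build the DSN with real values instead of str.format around makedsn
dsn = cx_Oracle.makedsn(hostname, port, service_name=service_name)
engine = create_engine(f"oracle+cx_oracle://{username}:{password}@{dsn}")

# Pointing SQLDatabase at the owning schema may fix the missing-table problem (assumption)
db = SQLDatabase(engine, schema=username.upper())
print(db.get_usable_table_names())
```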
### Description
I'm trying to query the Oracle SQL Developer DB to get answers in natural language, but I am running into the many issues mentioned above.
Please help me out :)
### System Info
Name: langchain
Version: 0.1.8
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: C:\Users\ajm1nh3\AppData\Local\anaconda3\envs\sample\Lib\site-packages
Requires: aiohttp, dataclasses-json, jsonpatch, langchain-community, langchain-core, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by: langchain-experimental
Name: openai
Version: 0.28.0
Summary: Python client library for the OpenAI API
Home-page: https://github.com/openai/openai-python
Author: OpenAI
Author-email: [email protected]
Required-by: langchain-openai | Integration of LangChain with Oracle SQL developer DB | https://api.github.com/repos/langchain-ai/langchain/issues/17933/comments | 2 | 2024-02-22T10:30:32Z | 2024-06-27T05:10:17Z | https://github.com/langchain-ai/langchain/issues/17933 | 2,148,736,658 | 17,933 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Link: https://python.langchain.com/docs/expression_language/get_started#basic-example-prompt-model-output-parser
Imports are missing for the `1. Prompt` section.
The snippets below throw a `NameError` when run as-is, so imports need to be added.
```python
ChatPromptValue(messages=[HumanMessage(content='tell me a short joke about ice cream')])
```
```python
[HumanMessage(content='tell me a short joke about ice cream')]
```
### Idea or request for content:
Add import `from langchain_core.prompt_values import ChatPromptValue, HumanMessage` for `1. Prompt section`
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
## New import below
from langchain_core.prompt_values import ChatPromptValue, HumanMessage
prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-4")
output_parser = StrOutputParser()
chain = prompt | model | output_parser
chain.invoke({"topic": "ice cream"})
``` | DOC: Missing Import on LCEL's 'Get Started' | https://api.github.com/repos/langchain-ai/langchain/issues/17931/comments | 0 | 2024-02-22T10:08:04Z | 2024-06-08T16:10:55Z | https://github.com/langchain-ai/langchain/issues/17931 | 2,148,689,238 | 17,931 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
embedding = OpenAIEmbeddings()
memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True, output_key='answer')
llm = ChatOpenAI(model="gpt-3.5-turbo-16k",temperature=0.1)
vector_store = PGVector(
connection_string=CONNECTION_STRING,
collection_name=COLLECTION_NAME,
embedding_function=embedding
)
retriever = vector_store.as_retriever(search_kwargs={"k": 3})
qa = ConversationalRetrievalChain.from_llm(
llm=llm,
memory=memory,
retriever=retriever,
combine_docs_chain_kwargs={'prompt': prompt},
return_source_documents=True,
)
```
### Error Message and Stack Trace (if applicable)
When running, I got:
{'question': 'what is code of conduct policy?', 'chat_history': [HumanMessage(content='what is code of conduct policy?'), AIMessage(content="The Code of Conduct policy at HabileLabs is a set of guidelines that all team members and board members are expected to know and follow. Here are the key points of the policy:\n\n1. Purpose and Standards:\n 1.1 The Code is based on the highest standards of ethical business conduct.\n 1.2 It serves as a practical guide.........
but it is not able to store the previous conversation in `chat_history`.
### Description
I am not able to store the previous conversation in memory.
### System Info
I am using memory | Getting error while Implementing Memory in ConversationalRetreival Chain | https://api.github.com/repos/langchain-ai/langchain/issues/17930/comments | 1 | 2024-02-22T09:24:51Z | 2024-06-08T16:10:50Z | https://github.com/langchain-ai/langchain/issues/17930 | 2,148,602,240 | 17,930 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
This code is just an example; I can't post the full code due to restrictions.
```python
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.memory import ConversationBufferMemory
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import AzureChatOpenAI
from pydantic.v1 import BaseModel as OldBaseModel, Field as OldField  # aliases assumed from the original snippet


class LLMOutputJSON(OldBaseModel):
    msg: str = OldField(description="The message to be sent to the user")
    finished_validating: bool = OldField(description="Whether the agent has finished validating the user's request")
    metadata: dict = OldField(description="Metadata")


async def main():
    llm = AzureChatOpenAI()
    # The original assigned a bare string here; a template is needed for the chain to run
    prompt = PromptTemplate.from_template(
        "You are funny and tell the user Jokes.\n"
        "Previous conversation: {chat_history}\n"
        "User: {input}\n"
        "{agent_scratchpad}"
    )
    parser = JsonOutputParser(pydantic_object=LLMOutputJSON)
    tools = []
    memory = ConversationBufferMemory()

    agent = (
        {
            "input": lambda x: x["input"],
            "chat_history": lambda x: x["chat_history"],
            "agent_scratchpad": lambda x: format_log_to_str(
                x["intermediate_steps"]
            ),
        }
        | prompt
        | llm
        | parser
    )
    agent = AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        verbose=True,
        memory=memory,
        handle_parsing_errors=True,
    )
    await agent.ainvoke({"input": "Tell me a Joke."})  # fixed from the original `agent.ainvoke:(...)` syntax
```
### Error Message and Stack Trace (if applicable)
```
res = await agent.ainvoke(
app-1 | ^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 217, in ainvoke
app-1 | raise e
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 208, in ainvoke
app-1 | await self._acall(inputs, run_manager=run_manager)
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1440, in _acall
app-1 | next_step_output = await self._atake_next_step(
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1234, in _atake_next_step
app-1 | [
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1234, in <listcomp>
app-1 | [
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1262, in _aiter_next_step
app-1 | output = await self.agent.aplan(
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 422, in aplan
app-1 | async for chunk in self.runnable.astream(
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2452, in astream
app-1 | async for chunk in self.atransform(input_aiter(), config, **kwargs):
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2435, in atransform
app-1 | async for chunk in self._atransform_stream_with_config(
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1592, in _atransform_stream_with_config
app-1 | chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2405, in _atransform
app-1 | async for output in final_pipeline:
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/transform.py", line 60, in atransform
app-1 | async for chunk in self._atransform_stream_with_config(
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1557, in _atransform_stream_with_config
app-1 | final_input: Optional[Input] = await py_anext(input_for_tracing, None)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 62, in anext_impl
app-1 | return await __anext__(iterator)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer
app-1 | item = await iterator.__anext__()
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4176, in atransform
app-1 | async for item in self.bound.atransform(
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1058, in atransform
app-1 | async for chunk in input:
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1068, in atransform
app-1 | async for output in self.astream(final, config, **kwargs):
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 589, in astream
app-1 | yield await self.ainvoke(input, config, **kwargs)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 495, in ainvoke
app-1 | return await run_in_executor(config, self.invoke, input, config, **kwargs)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 493, in run_in_executor
app-1 | return await asyncio.get_running_loop().run_in_executor(
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/concurrent/futures/thread.py", line 58, in run
app-1 | result = self.fn(*self.args, **self.kwargs)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/prompts/base.py", line 113, in invoke
app-1 | return self._call_with_config(
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1243, in _call_with_config
app-1 | context.run(
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
app-1 | return func(input, **kwargs) # type: ignore[call-arg]
app-1 | ^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain_core/prompts/base.py", line 98, in _format_prompt_with_error_handling
app-1 | raise KeyError(
app-1 | KeyError: 'Input to PromptTemplate is missing variables {\'"properties"\', \'"foo"\'}. Expected: [\'"foo"\', \'"properties"\', \'agent_scratchpad\', \'chat_history\', \'input\'] Received: [\'input\', \'chat_history\', \'agent_scratchpad\']'
### Description
The Problem seems to come from within the JSONOutputParser.
```python
JSON_FORMAT_INSTRUCTIONS = """
The output should be formatted as a JSON instance that conforms to the JSON schema below.
As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
Here is the output schema:
{schema}
"""
```
The `"foo"` and `"properties"` tokens are registered as input_variables, which causes the error. Presumably the literal braces in the rendered schema are re-parsed as template variables when the instructions are pasted directly into a PromptTemplate, though I'm not exactly sure why that happens.
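For anyone hitting this, a sketch of the documented escape hatch (my understanding: because the instructions string contains literal braces, it has to be injected as a partial variable rather than pasted into the template string):
```python
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate(
    template="You are funny and tell the user Jokes.\n{format_instructions}\n{input}",
    input_variables=["input"],
    # injected values are not re-parsed, so the braces in the schema stay literal
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
```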
### System Info
pip freeze | grep langchain
langchain==0.1.8
langchain-community==0.0.21
langchain-core==0.1.25
langchain-openai==0.0.6
langchainhub==0.1.14 | JsonOutputParser throws KeyError for missing variables | https://api.github.com/repos/langchain-ai/langchain/issues/17929/comments | 1 | 2024-02-22T09:12:17Z | 2024-05-27T20:00:56Z | https://github.com/langchain-ai/langchain/issues/17929 | 2,148,578,038 | 17,929 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import os
from urllib.parse import quote_plus

import constants  # holds server/database credentials
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from langchain_community.utilities import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain
from langchain_openai import ChatOpenAI
from sqlalchemy import create_engine

os.environ['OPENAI_API_KEY'] = openapi_key  # openapi_key is loaded elsewhere

# Define connection parameters using constants
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"

# Create an engine to connect to the SQL database
engine = create_engine(connection_uri)

model_name = "gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT', 'egv_employee_attendance'])
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)  # , n=2, best_of=2)

PROMPT_SUFFIX = """Only use the following tables:
{table_info}
Previous Conversation:
{history}
Question: {input}"""

_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run,
then look at the results of the query and return the answer.
Given an input question, create a syntactically correct MSSQL query by considering only the matching column names from the question,
then look at the results of the query and return the answer.
If a column name is not present, refrain from writing the SQL query. Columns like UAN number and PF number are not present, so do not consider such columns.
Write the query only for the column names which are present in the view.
Execute the query and analyze the results to formulate a response.
Return the answer in sentence form.
Use the following format:
Question: "Question here"
SQLQuery: "SQL Query to run"
SQLResult: "Result of the SQLQuery"
Answer: "Final answer here"
"""

PROMPT = PromptTemplate.from_template(_DEFAULT_TEMPLATE + PROMPT_SUFFIX)
memory = ConversationBufferMemory()


def ask_database(question):
    db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, memory=memory)
    answer = db_chain.run(question)
    print(memory.load_memory_variables({}))
    return answer
```
Here is my code for interacting with a SQL database in natural language; it uses LangChain's SQLDatabaseChain.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Here, if a user asks a question like "hi", "hello", "welcome", or simply types anything, the chain still executes some query and just returns the first employee's details. Instead, if the user is greeting, can the model return something like "Hello, how can I help you?" In short, can the model try to understand the question and respond based on it, or, if it doesn't understand the question, respond with "invalid question" or similar, instead of writing a query that returns the first employee's details or an irrelevant answer?
Can we integrate some other functionality or model which will interact with the user when the question is not related to the database? One possible approach is sketched below.
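A minimal sketch of one possible approach (a hypothetical routing layer, not part of my current code): first ask the LLM whether the question is database-related, and only run the SQL chain when it is.
```python
def answer_user(question):
    # crude intent classification via the same chat model
    route = llm.invoke(
        "Answer with exactly one word, SQL or CHAT. "
        f"Is this question about the employee database? Question: {question}"
    ).content.strip().upper()
    if route == "SQL":
        return ask_database(question)
    return llm.invoke(question).content  # greetings / small talk
```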
### System Info
python: 3.11
langchain: latest | in a chatbot which interact with sql db, how can it return a user friendly answer/based on user question, instead of executing query to return the 1st row of db | https://api.github.com/repos/langchain-ai/langchain/issues/17926/comments | 2 | 2024-02-22T09:03:36Z | 2024-06-08T16:10:45Z | https://github.com/langchain-ai/langchain/issues/17926 | 2,148,561,499 | 17,926 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
below is the code
```
embeddings = OpenAIEmbeddings(
openai_api_type="",
openai_api_key="",
openai_api_base="",
deployment="text-embedding-ada-002",
model="text-embedding-ada-002",
chunk_size=1
)
# Create a FAISS vector store from the embeddings
vectorstore = FAISS.from_documents(documents, embeddings)
```
it is returning below error
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-27-ab8130d30993>](https://localhost:8080/#) in <cell line: 2>()
1 # Create a FAISS vector store from the embeddings
----> 2 vectorStore = FAISS.from_documents(documents, embeddings)
5 frames
[/usr/local/lib/python3.10/dist-packages/langchain_community/embeddings/openai.py](https://localhost:8080/#) in _create_retry_decorator(embeddings)
55 ),
56 retry=(
---> 57 retry_if_exception_type(openai.error.Timeout)
58 | retry_if_exception_type(openai.error.APIError)
59 | retry_if_exception_type(openai.error.APIConnectionError)
AttributeError: module 'openai' has no attribute 'error'
```
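For reference, a minimal sketch of the openai>=1 path, which the next sentence refers to (assuming the `langchain-openai` package; the endpoint and deployment names are placeholders):
```python
from langchain_openai import AzureOpenAIEmbeddings

embeddings = AzureOpenAIEmbeddings(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    azure_deployment="text-embedding-ada-002",
    openai_api_version="2023-05-15",
)
```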
I even tried `AzureOpenAIEmbeddings` with openai version >= 1 (as in the sketch above), but it returned a different error. How can I resolve this issue? | unable to use Azure openai key for OpenAIEmbeddings(), openai version is <1.0.0 | https://api.github.com/repos/langchain-ai/langchain/issues/17925/comments | 0 | 2024-02-22T08:46:30Z | 2024-06-08T16:10:40Z | https://github.com/langchain-ai/langchain/issues/17925 | 2,148,531,878 | 17,925
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Similar code works with OpenAI (see https://python.langchain.com/docs/use_cases/sql/agents) but not with Azure OpenAI.
Below is my full code
```
import os
os.environ['OPENAI_API_KEY'] = 'your key'
os.environ['OPENAI_API_TYPE'] = 'azure'
os.environ['OPENAI_API_VERSION'] = '2023-05-15'
from langchain.sql_database import SQLDatabase
db = SQLDatabase.from_uri("sqlite:///Chinook.db")
from langchain_community.agent_toolkits import create_sql_agent
from langchain_openai import AzureOpenAI # Updated import
from langchain_community.vectorstores import FAISS
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_openai import AzureOpenAIEmbeddings
llm = AzureOpenAI(deployment_name="GPT35Turbo",model_name="gpt-35-turbo"
,azure_endpoint="https://copilots-aoai-df.openai.azure.com/"
, temperature=0, verbose=True)
embeddings = AzureOpenAIEmbeddings(
azure_endpoint="https://copilots-aoai-df.openai.azure.com/",
azure_deployment="Ada002EmbeddingModel",
openai_api_version="2023-05-15",
)
examples = [
{"input": "List all artists.", "query": "SELECT * FROM Artist;"},
{
"input": "Find all albums for the artist 'AC/DC'.",
"query": "SELECT * FROM Album WHERE ArtistId = (SELECT ArtistId FROM Artist WHERE Name = 'AC/DC');",
},
{
"input": "List all tracks in the 'Rock' genre.",
"query": "SELECT * FROM Track WHERE GenreId = (SELECT GenreId FROM Genre WHERE Name = 'Rock');",
},
{
"input": "Find the total duration of all tracks.",
"query": "SELECT SUM(Milliseconds) FROM Track;",
},
{
"input": "List all customers from Canada.",
"query": "SELECT * FROM Customer WHERE Country = 'Canada';",
},
{
"input": "How many tracks are there in the album with ID 5?",
"query": "SELECT COUNT(*) FROM Track WHERE AlbumId = 5;",
},
{
"input": "Find the total number of invoices.",
"query": "SELECT COUNT(*) FROM Invoice;",
},
{
"input": "List all tracks that are longer than 5 minutes.",
"query": "SELECT * FROM Track WHERE Milliseconds > 300000;",
},
{
"input": "Who are the top 5 customers by total purchase?",
"query": "SELECT CustomerId, SUM(Total) AS TotalPurchase FROM Invoice GROUP BY CustomerId ORDER BY TotalPurchase DESC LIMIT 5;",
},
{
"input": "Which albums are from the year 2000?",
"query": "SELECT * FROM Album WHERE strftime('%Y', ReleaseDate) = '2000';",
},
{
"input": "How many employees are there",
"query": 'SELECT COUNT(*) FROM "Employee"',
},
]
example_selector = SemanticSimilarityExampleSelector.from_examples(
examples,
embeddings,
FAISS,
k=5,
input_keys=["input"],
)
from langchain_core.prompts import (
ChatPromptTemplate,
FewShotPromptTemplate,
MessagesPlaceholder,
PromptTemplate,
SystemMessagePromptTemplate,
)
system_prefix = """You are an agent designed to interact with a SQL database.
Given an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.
Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.
You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for the relevant columns given the question.
You have access to tools for interacting with the database.
Only use the given tools. Only use the information returned by the tools to construct your final answer.
You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.
DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.
If the question does not seem related to the database, just return "I don't know" as the answer.
Here are some examples of user inputs and their corresponding SQL queries:"""
few_shot_prompt = FewShotPromptTemplate(
example_selector=example_selector,
example_prompt=PromptTemplate.from_template(
"User input: {input}\nSQL query: {query}"
),
input_variables=["input", "dialect", "top_k"],
prefix=system_prefix,
suffix=""
)
full_prompt = ChatPromptTemplate.from_messages(
[
SystemMessagePromptTemplate(prompt=few_shot_prompt),
("human", "{input}"),
MessagesPlaceholder("agent_scratchpad"),
]
)
# Example formatted prompt
prompt_val = full_prompt.invoke(
{
"input": "How many arists are there",
"top_k": 5,
"dialect": "SQLite",
"agent_scratchpad": [],
}
)
print(prompt_val.to_string())
from langchain.agents.agent_types import AgentType
from langchain_community.agent_toolkits import SQLDatabaseToolkit
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
# Get list of tools
tools = toolkit.get_tools()
agent = create_sql_agent(
llm=llm,
db = db,
tools = tools,
prompt=full_prompt,
verbose=True,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
agent.invoke({"input": "list top 10 customers by total purchase?"})
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Traceback (most recent call last):
File "C:\Projects\Personal\Langchain-DB-Example\main-sqlite-azure.py", line 142, in <module>
agent = create_sql_agent(
File "C:\Projects\Personal\Langchain-DB-Example\.venv\lib\site-packages\langchain_community\agent_toolkits\sql\base.py", line 182, in create_sql_agent
runnable=create_react_agent(llm, tools, prompt),
File "C:\Projects\Personal\Langchain-DB-Example\.venv\lib\site-packages\langchain\agents\react\agent.py", line 97, in create_react_agent
raise ValueError(f"Prompt missing required variables: {missing_vars}")
**ValueError: Prompt missing required variables: {'tools', 'tool_names'}**
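For what it's worth, `create_react_agent` validates that the prompt declares `{tools}` and `{tool_names}` before partialing them in. A sketch of one possible fix (my assumption about the intended setup, not a verified patch): declare both placeholders in a string-style ReAct suffix and pass the few-shot template itself as the agent prompt, rather than the chat prompt with a `MessagesPlaceholder`, since the ReAct agent formats its scratchpad as plain text.
```python
react_suffix = """You have access to the following tools:

{tools}

When you choose an Action, it must be one of [{tool_names}].

Question: {input}
Thought: {agent_scratchpad}"""

few_shot_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=PromptTemplate.from_template(
        "User input: {input}\nSQL query: {query}"
    ),
    input_variables=["input", "dialect", "top_k", "tools", "tool_names", "agent_scratchpad"],
    prefix=system_prefix,
    suffix=react_suffix,
)

agent = create_sql_agent(
    llm=llm,
    db=db,
    prompt=few_shot_prompt,
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
```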
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22621
> Python Version: 3.9.0 (tags/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.25
> langchain: 0.1.8
> langchain_community: 0.0.21
> langsmith: 0.1.5
> langchain_experimental: 0.0.52
> langchain_openai: 0.0.6
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
PIP freeze
--------------------------------------------------
aiohttp==3.9.3
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.3.0
async-timeout==4.0.3
attrs==23.2.0
certifi==2024.2.2
charset-normalizer==3.3.2
colorama==0.4.6
dataclasses-json==0.6.4
distro==1.9.0
exceptiongroup==1.2.0
faiss-cpu==1.7.4
frozenlist==1.4.1
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.3
httpx==0.26.0
idna==3.6
jsonpatch==1.33
jsonpointer==2.4
langchain==0.1.8
langchain-community==0.0.21
langchain-core==0.1.25
langchain-experimental==0.0.52
langchain-openai==0.0.6
langsmith==0.1.5
marshmallow==3.20.2
multidict==6.0.5
mypy-extensions==1.0.0
numpy==1.26.4
openai==1.12.0
packaging==23.2
pydantic==2.6.1
pydantic_core==2.16.2
PyYAML==6.0.1
regex==2023.12.25
requests==2.31.0
sniffio==1.3.0
SQLAlchemy==2.0.27
tenacity==8.2.3
tiktoken==0.6.0
tqdm==4.66.2
typing-inspect==0.9.0
typing_extensions==4.9.0
urllib3==2.2.1
yarl==1.9.4 | AzureOpenAI and SQLTools with agent -> ValueError(f"Prompt missing required variables: {missing_vars}") | https://api.github.com/repos/langchain-ai/langchain/issues/17921/comments | 5 | 2024-02-22T07:00:16Z | 2024-06-09T16:07:22Z | https://github.com/langchain-ai/langchain/issues/17921 | 2,148,358,385 | 17,921 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langchain_core.runnables import ConfigurableField
from langchain_openai import OpenAI

lm_runnable = (
OpenAI(temperature=0, openai_api_key="nothing")
.configurable_alternatives(
ConfigurableField(id="llm"),
default_key="openai",
prefix_keys=False)
.configurable_fields(
temperature=ConfigurableField(
id="llm_temperature",
name="LLM Temperature",
description="The temperature of the LLM",
)
)
)
```
### Error Message and Stack Trace (if applicable)
```
pydantic.v1.error_wrappers.ValidationError: 1 validation error for RunnableConfigurableAlternatives
prefix_keys
field required (type=value_error.missing)
```
### Description
I'm trying to add `configurable_fields` to an existing `configurable_alternatives` instance, but I got the exception below.
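A sketch of an ordering that should sidestep the error (my assumption: apply `configurable_fields` to the base model first, then wrap with `configurable_alternatives`):
```python
from langchain_core.runnables import ConfigurableField
from langchain_openai import OpenAI

lm_runnable = (
    OpenAI(temperature=0, openai_api_key="nothing")
    .configurable_fields(
        temperature=ConfigurableField(
            id="llm_temperature",
            name="LLM Temperature",
            description="The temperature of the LLM",
        )
    )
    .configurable_alternatives(
        ConfigurableField(id="llm"),
        default_key="openai",
        prefix_keys=False,
    )
)
```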
### System Info
```
langchain==0.1.8
langchain-community==0.0.21
langchain-core==0.1.25
langchain-openai==0.0.6
``` | Invoking configurable_fields method on RunnableConfigurableAlternatives instance throws prefix_keys field required error | https://api.github.com/repos/langchain-ai/langchain/issues/17915/comments | 0 | 2024-02-22T05:40:36Z | 2024-02-26T18:27:08Z | https://github.com/langchain-ai/langchain/issues/17915 | 2,148,244,376 | 17,915 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Currently the [API documentation](https://api.python.langchain.com/en/latest/langchain_api_reference.html) is only provided as web pages, which are unstructured. This makes it difficult to batch-process the data programmatically.
### Idea or request for content:
I hope the developers of LangChain can provide the API documentation in a structured format, such as JSON or XML. It would be convenient for people who want to batch-process the data in the API documentation.
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
https://python.langchain.com/docs/integrations/llms/vllm
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to use LangChain's VLLM wrapper for Mistral-7B-Instruct-v0.2.
I went to the documentation above and ran the first sample:
```python
from langchain_community.llms import VLLM

llm = VLLM(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    trust_remote_code=True,  # mandatory for hf models
    max_new_tokens=128,
    top_k=10,
    top_p=0.95,
    temperature=0.8,
)

print(llm.invoke("What is the capital of France ?"))
```
However, I ended up getting this error:
The model's max seq len (32768) is larger than the maximum number of tokens that can be stored in KV cache (18896). Try increasing 'gpu_memory_utilization' or decreasing 'max_model_len' when initializing the engine.
I tried to update the VLLM constructor like this:
```python
from langchain_community.llms import VLLM

llm = VLLM(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    trust_remote_code=True,  # mandatory for hf models
    max_new_tokens=128,
    top_k=10,
    top_p=0.95,
    temperature=0.8,
    max_model_len=4096,
)

print(llm.invoke("What is the capital of France ?"))
```
This change also didn't get picked up.
When I look at the log I can see the LLM engine still gets initialized with config `max_seq_len=32768`.
I also tried to add `model_kwargs={"max_model_len": 4096}`.
None of the changes made to the VLLM constructor are getting picked up.
This seems to be a bug.
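A sketch of a possible workaround, assuming the wrapper's `vllm_kwargs` passthrough forwards extra engine arguments to `vllm.LLM` (that is my reading of the source, not verified here):
```python
from langchain_community.llms import VLLM

llm = VLLM(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    trust_remote_code=True,
    max_new_tokens=128,
    # extra engine args that VLLM's own fields don't expose
    vllm_kwargs={"max_model_len": 4096, "gpu_memory_utilization": 0.95},
)
```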
### System Info
Langchain==0.1.8
python version 3.10.12
platform ubuntu22 | Vllm not able to set max_model_len | https://api.github.com/repos/langchain-ai/langchain/issues/17906/comments | 2 | 2024-02-22T02:50:34Z | 2024-06-17T16:09:26Z | https://github.com/langchain-ai/langchain/issues/17906 | 2,148,076,026 | 17,906 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain_community.document_loaders import JSONLoader
```
### Error Message and Stack Trace (if applicable)
When I tried to follow 'https://python.langchain.com/docs/integrations/retrievers/bm25' and use the snippet `from langchain.retrievers import BM25Retriever`, an error was thrown hinting that I should run `pip install -U langchain_community`. I did so as instructed.
But then I found that JSONLoader could no longer be imported, and the error message was very short. I had to dive into the code and found that after langchain_community was updated, its loader code supports the C language, while the `Language` mapping shipped with the older langchain has no 'C' entry:
```python
LANGUAGE_EXTENSIONS: Dict[str, str] = {
"py": Language.PYTHON,
"js": Language.JS,
"cobol": Language.COBOL,
"cpp": Language.CPP,
"cs": Language.CSHARP,
"rb": Language.RUBY,
"scala": Language.SCALA,
"rs": Language.RUST,
"go": Language.GO,
"kt": Language.KOTLIN,
"lua": Language.LUA,
"pl": Language.PERL,
"ts": Language.TS,
"java": Language.JAVA,
}
```
So the issue in the title appears. After updating langchain, the issue is resolved.
### Description
I have two suggestions:
1. Consider updating the outdated BM25 document (e.g. as sketched below).
2. Issues caused by inconsistent versions should not surface this tersely. Consider providing a more specific error message, or pinning the appropriate version of langchain in langchain_community's requirements.
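For suggestion 1, a sketch of what the updated snippet could look like (assuming the current package layout; `rank_bm25` must be installed):
```python
from langchain_community.retrievers import BM25Retriever

retriever = BM25Retriever.from_texts(["foo", "bar", "world hello foo bar"])
result = retriever.invoke("foo")
```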
### System Info
The issue appears with langchain 0.1.7 and community 0.0.21, while 0.1.8 + 0.0.21 is OK. | AttributeError: C | https://api.github.com/repos/langchain-ai/langchain/issues/17905/comments | 5 | 2024-02-22T02:31:44Z | 2024-03-12T09:17:05Z | https://github.com/langchain-ai/langchain/issues/17905 | 2,148,059,381 | 17,905
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
# embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
# split_document: the chunked documents prepared earlier (omitted here)
db = FAISS.from_documents(split_document, embeddings)
# db.save_local("qa_faiss_index")
db.save_local("qa_faiss_test_index")
```
### Error Message and Stack Trace (if applicable)
Warning: model not found. Using cl100k_base encoding.
### Description
I want to use the new embedding model, but after running, the result is: `Warning: model not found. Using cl100k_base encoding.`
I have upgraded langchain, but the warning still occurs.
So I do not know how to address this issue.
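For what it's worth, a sketch of the newer package path that should recognize the v3 models (assuming `langchain-openai` and a recent `tiktoken` are installed; the warning itself is likely harmless, since cl100k_base is the correct encoding for these models):
```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vector = embeddings.embed_query("hello world")  # no cl100k_base fallback warning expected here
```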
### System Info
platform(windows)
python version: 3.11.5(anaconda) | How to use openai new embedding model such as : text-embedding-3-small? | https://api.github.com/repos/langchain-ai/langchain/issues/17903/comments | 7 | 2024-02-22T02:03:03Z | 2024-06-08T16:10:25Z | https://github.com/langchain-ai/langchain/issues/17903 | 2,148,032,480 | 17,903 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
import os

from langchain.memory import ConversationSummaryBufferMemory
from langchain_openai import ChatOpenAI

memory = ConversationSummaryBufferMemory(
    # getenv returns a string; max_token_limit must be an int
    max_token_limit=int(os.getenv("MEMORY_MAX_TOKEN_LIMIT")),
    memory_key="history",
    llm=ChatOpenAI(model=os.getenv("GPT_3")),
)
### Error Message and Stack Trace (if applicable)
_No response_
### Description
**Overview**:
The ConversationSummaryBufferMemory component is designed to manage chat memory efficiently within a defined maximum token limit. It is expected to monitor the buffer (chat memory messages) and summarize the buffer content into a `moving_summary_buffer` once the token limit is reached. However, an issue has been identified where the token count of the `moving_summary_buffer` content keeps increasing over time, leading to the overall token limit being exceeded.
**Expected Behavior**:
- The ConversationSummaryBufferMemory should ensure that the combined token count of chat memory messages and the `moving_summary_buffer` does not exceed the defined maximum token limit.
- Upon reaching the token limit, the buffer should be summarized effectively, with the `moving_summary_buffer` maintaining or reducing its token count to comply with the token limitations.
**Current Behavior**:
- The `moving_summary_buffer`'s token count increases over time, even after summarization processes.
- This increase contributes to the total token count exceeding the maximum limit, potentially impacting performance and functionality.
**Steps to Reproduce**:
1. Engage in a conversation that progressively fills the chat memory buffer close to the maximum token limit.
2. Observe the behavior as the system attempts to summarize the buffer into the `moving_summary_buffer`.
3. Monitor the token count of the `moving_summary_buffer` and the overall token usage over time (see the sketch after this list).
4. Note instances where the total token count exceeds the predefined maximum limit.
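A minimal sketch for the monitoring in steps 3 and 4, assuming `ChatOpenAI`'s token counters:
```python
buffer_tokens = memory.llm.get_num_tokens_from_messages(memory.chat_memory.messages)
summary_tokens = memory.llm.get_num_tokens(memory.moving_summary_buffer)
# the combined total can be observed exceeding max_token_limit
print(buffer_tokens, summary_tokens, buffer_tokens + summary_tokens)
```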
**Impact**:
This issue can lead to degraded performance with potential implications for conversation continuity and user experience. It challenges the system's ability to manage long conversations efficiently and could lead to errors or limitations in processing further input.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Wed Feb 7 23:21:07 PST 2024; root:xnu-10063.100.637.501.2~2/RELEASE_ARM64_T8112
> Python Version: 3.11.4 (main, Feb 2 2024, 14:55:45) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.1.6
> langchain_community: 0.0.20
> langsmith: 0.0.87
> langchain_openai: 0.0.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Issue with ConversationSummaryBufferMemory: Token Limit Exceeded by moving_summary_buffer(system message) | https://api.github.com/repos/langchain-ai/langchain/issues/17888/comments | 0 | 2024-02-21T19:09:33Z | 2024-05-31T23:58:44Z | https://github.com/langchain-ai/langchain/issues/17888 | 2,147,496,777 | 17,888 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
As well as some of the blob loading semantics | BaseLoader should be in core | https://api.github.com/repos/langchain-ai/langchain/issues/17883/comments | 0 | 2024-02-21T17:47:48Z | 2024-05-31T23:56:12Z | https://github.com/langchain-ai/langchain/issues/17883 | 2,147,350,240 | 17,883 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from fastapi import FastAPI
from langchain_openai import ChatOpenAI
from langchain.schema.runnable import (
ConfigurableField,
Runnable,
RunnableBranch,
RunnableLambda,
RunnableMap,
)
from langchain_community.chat_message_histories import SQLChatMessageHistory
from langchain_core import __version__
from langchain_community.vectorstores import Milvus
from langserve import add_routes as langserve_add_routes
from dotenv import load_dotenv
from operator import itemgetter
from file_manager import add_routes as file_manager_add_routes
from langchain.schema.output_parser import StrOutputParser
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.messages import get_buffer_string
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from prompts import CONDENSE_QUESTION_PROMPT, ANSWER_PROMPT, DEFAULT_DOCUMENT_PROMPT, RESPONSE_TEMPLATE
from langchain_community.retrievers.llama_index import LlamaIndexRetriever
from llama_index.core.indices.vector_store.base import VectorStoreIndex
from llama_index.vector_stores.milvus import MilvusVectorStore
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.retrievers.document_compressors import (
DocumentCompressorPipeline,
EmbeddingsFilter,
)
from datetime import datetime
from typing import Sequence
from langchain.schema.document import Document
from llama_index.core.vector_stores.types import VectorStore
from langchain.retrievers import ContextualCompressionRetriever
from langchain_openai import OpenAIEmbeddings
load_dotenv()
def get_chat_history(session_id: str) -> str:
"""Get the chat history for the session."""
history = SQLChatMessageHistory(
session_id=session_id,
connection_string="postgresql://rag:[email protected]:5432/rag"
)
return history
def format_docs(docs: Sequence[Document]) -> str:
formatted_docs = []
for i, doc in enumerate(docs):
doc_string = f"<doc id='{i}'>{doc.page_content}</doc>"
formatted_docs.append(doc_string)
return "\n".join(formatted_docs)
def _get_retriever(vector_store: VectorStore):
embeddings = OpenAIEmbeddings()
splitter = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=20)
relevance_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.8)
pipeline_compressor = DocumentCompressorPipeline(
transformers=[splitter, relevance_filter]
)
base_retriever = LlamaIndexRetriever(index=VectorStoreIndex.from_vector_store(vector_store).as_query_engine())
retriever = ContextualCompressionRetriever(
base_retriever=base_retriever, base_compressor=pipeline_compressor
)
return retriever.with_config(run_name="SourceRetriever")
def create_retriever_chain(
llm, retriever
) -> Runnable:
condense_question_chain = (
CONDENSE_QUESTION_PROMPT | llm | StrOutputParser()
).with_config(
run_name="CondenseQuestion",
)
conversation_chain = condense_question_chain | retriever
return RunnableBranch(
(
RunnableLambda(lambda x: get_buffer_string(x["chat_history"])).with_config(
run_name="HasChatHistoryCheck"
),
conversation_chain.with_config(run_name="RetrievalChainWithHistory"),
),
(
RunnableLambda(itemgetter("question")).with_config(
run_name="Itemgetter:question"
)
| retriever
).with_config(run_name="RetrievalChainWithNoHistory"),
).with_config(run_name="RouteDependingOnChatHistory")
def _build_chain() -> Runnable:
vector_store = MilvusVectorStore(dim=1536)
retriever = _get_retriever(vector_store)
llm = ChatOpenAI(model="gpt-3.5-turbo").configurable_alternatives(
ConfigurableField(id="llm"),
default_key="gpt-3.5-turbo",
gpt_4_turbo_preview=ChatOpenAI(model="gpt-4-turbo-preview")
)
retriever_chain = create_retriever_chain(llm, retriever) | RunnableLambda(
format_docs
).with_config(run_name="FormatDocumentChunks")
_context = RunnableMap(
{
"context": retriever_chain.with_config(run_name="RetrievalChain"),
"question": RunnableLambda(itemgetter("question")).with_config(
run_name="Itemgetter:question"
),
"chat_history": RunnableLambda(itemgetter("chat_history")).with_config(
run_name="Itemgetter:chat_history"
),
}
)
prompt = ChatPromptTemplate.from_messages(
[
("system", RESPONSE_TEMPLATE),
MessagesPlaceholder(variable_name="chat_history"),
("human", "{question}"),
]
).partial(current_date=datetime.now().isoformat())
response_synthesizer = (prompt | llm | StrOutputParser()).with_config(
run_name="GenerateResponse",
)
return (
{
"question": RunnableLambda(itemgetter("question")).with_config(
run_name="Itemgetter:question"
),
"chat_history": RunnableLambda(itemgetter("chat_history")).with_config(
run_name="SerializeHistory"
),
}
| _context
| response_synthesizer
)
chain = _build_chain()
chain_with_history = RunnableWithMessageHistory(
chain,
get_chat_history,
input_messages_key="question",
history_messages_key="chat_history",
)
def add_routes(app: FastAPI) -> None:
"""Add routes to the FastAPI app."""
langserve_add_routes(
app,
chain_with_history,
# disabled_endpoints=["playground", "batch"],
)
file_manager_add_routes(app)
```
### Error Message and Stack Trace (if applicable)
RuntimeError: super(): __class__ cell not found
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 758, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 778, in app
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 299, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 79, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
await response(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/sse_starlette/sse.py", line 255, in __call__
async with anyio.create_task_group() as task_group:
File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__
raise exceptions[0]
File "/usr/local/lib/python3.11/site-packages/sse_starlette/sse.py", line 258, in wrap
await func()
File "/usr/local/lib/python3.11/site-packages/sse_starlette/sse.py", line 245, in stream_response
async for data in self.body_iterator:
File "/usr/local/lib/python3.11/site-packages/langserve/api_handler.py", line 1085, in _stream_log
"data": self._serializer.dumps(data).decode("utf-8"),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langserve/serialization.py", line 168, in dumps
return orjson.dumps(obj, default=default)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Type is not JSON serializable: numpy.float64
### Description
I'm trying to run a LangServe API with RAG and RunnableWithMessageHistory, but I'm facing this bug.
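My current suspicion (an assumption, not confirmed): `EmbeddingsFilter` attaches relevance scores as `numpy.float64` to document metadata, and LangServe's `orjson` serializer cannot encode those in the stream log. A sketch of a defensive cast that could be piped in after the retriever:
```python
from typing import Sequence
from langchain.schema.document import Document

def sanitize_metadata(docs: Sequence[Document]) -> Sequence[Document]:
    # numpy scalars expose .item(); cast them to plain Python numbers
    for doc in docs:
        doc.metadata = {
            k: (v.item() if hasattr(v, "item") else v) for k, v in doc.metadata.items()
        }
    return docs

# e.g.: retriever_chain = create_retriever_chain(llm, retriever) | RunnableLambda(sanitize_metadata) | ...
```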
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:27 PDT 2023; root:xnu-10002.41.9~6/RELEASE_X86_64
> Python Version: 3.11.6 (main, Oct 2 2023, 13:45:54) [Clang 15.0.0 (clang-1500.0.40.1)]
Package Information
-------------------
> langchain_core: 0.1.24
> langchain: 0.1.8
> langchain_community: 0.0.21
> langsmith: 0.1.3
> langchain_cli: 0.0.21
> langchain_experimental: 0.0.50
> langchain_openai: 0.0.5
> langchainhub: 0.1.14
> langserve: 0.0.41
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | TypeError: Type is not JSON serializable: numpy.float64 | https://api.github.com/repos/langchain-ai/langchain/issues/17875/comments | 5 | 2024-02-21T15:20:41Z | 2024-02-21T15:45:53Z | https://github.com/langchain-ai/langchain/issues/17875 | 2,147,022,028 | 17,875 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
``` python
from langchain.chains import LLMChain, MapReduceDocumentsChain, ReduceDocumentsChain
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain_core.documents import Document

# RecursiveCharacterTextSplitterRegroup, text_tokenized_length, self.llm and
# self._prompt(...) are defined elsewhere in my code.
text_splitter = RecursiveCharacterTextSplitterRegroup(  # custom RecursiveCharacterTextSplitter
    model_name=self.name,
    chunk_size=max_input_tokens,  # 2048
    chunk_overlap=0,
    length_function=text_tokenized_length,
    separators=["\nFile name:", "\n\n", "\n"],  # PATCH_SEPARATORS
)

map_chain = LLMChain(llm=self.llm, prompt=self._prompt("patch", "explain"))
reduce_chain = LLMChain(llm=self.llm, prompt=self._prompt("patch", "summarize"))
combine_documents_chain = StuffDocumentsChain(
    llm_chain=reduce_chain, document_variable_name="patch_explain"
)
reduce_documents_chain = ReduceDocumentsChain(
    combine_documents_chain=combine_documents_chain,
    collapse_documents_chain=combine_documents_chain,
    token_max=max_input_tokens,  # 2048
    collapse_max_retries=1,
)
map_reduce_chain = MapReduceDocumentsChain(
    llm_chain=map_chain,
    reduce_documents_chain=reduce_documents_chain,
    document_variable_name="patch",
    return_intermediate_steps=False,
)

texts = text_splitter.split_text(text)
input_dict = {
    "input_documents": [Document(page_content=text) for text in texts],
    "text_name": text_name,
}
map_reduce_chain.invoke(input=input_dict)
```
### Error Message and Stack Trace (if applicable)
Token indices sequence length is longer than the specified maximum sequence length for this model (6589 > 2048). Running this sequence through the model will result in indexing errors
### Description
I'm working with the **MapReduceDocumentsChain** class, following the instructions in the LangChain tutorial on [Summarization](https://python.langchain.com/docs/use_cases/summarization). According to what I've gathered from the documentation, the output documents produced by the ReduceDocumentsChain are restricted to not exceed a maximum token length, referred to as **token_max**.
Searching online, I've come across others facing the same issue, yet no solutions provided seem adequate.
I'm eager for any assistance you can offer.
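If it helps triage, my reading (an assumption) is that `token_max` only controls when `ReduceDocumentsChain` groups documents for collapsing; a single map output longer than `token_max` still passes through unsplit, and `collapse_max_retries=1` gives up after one collapse pass. A sketch of the grouping primitive involved (reusing `self.llm` from my snippet above):
```python
from langchain.chains.combine_documents.reduce import split_list_of_docs

def length_function(documents):
    # token count of the combined documents, as the reduce step sees them
    return sum(self.llm.get_num_tokens(doc.page_content) for doc in documents)

doc_groups = split_list_of_docs(docs, length_function, max_input_tokens)
```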
### System Info
Platform: macOS 14.2.1 (23C71)
Python version: 3.11
langchain==0.1.8
langchain-community==0.0.21
langchain-core==0.1.2 | ReduceDocumentsChain: token_max has not effect on chunk length | https://api.github.com/repos/langchain-ai/langchain/issues/17869/comments | 3 | 2024-02-21T12:15:36Z | 2024-07-13T16:05:31Z | https://github.com/langchain-ai/langchain/issues/17869 | 2,146,609,976 | 17,869 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I think there's an issue with `save_context` and `asave_context` after introducing this [change](https://github.com/langchain-ai/langchain/pull/16728).
Here's the minimal example:
```python
from langchain.agents import AgentExecutor, create_react_agent
from langchain import hub
import os
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
from langchain.tools import Tool
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
load_dotenv()
llm = ChatOpenAI(
openai_api_key=os.getenv("OPENAI_API_KEY"),
model="gpt-3.5-turbo",
)
tools = [
Tool.from_function(
name="General Chat",
description="For general chat not covered by other tools",
func=llm.invoke,  # NB: ChatOpenAI.invoke returns an AIMessage, not a str
return_direct=True
)
]
memory = ConversationBufferWindowMemory(
memory_key='chat_history',
k=5,
return_messages=True,
)
agent_prompt = hub.pull("hwchase17/react-chat")
agent = create_react_agent(llm, tools, agent_prompt)
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
memory=memory,
verbose=True
)
def generate_response(prompt):
"""
Create a handler that calls the Conversational agent
and returns a response to be rendered in the UI
"""
response = agent_executor.invoke({"input": prompt})
return response['output']
generate_response(input("Write: "))
```
### Error Message and Stack Trace (if applicable)
```
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: General Chat
Action Input: help me writecontent='Sure, I can help you write. What would you like to write about?'
Traceback (most recent call last):
File "D:\Development\WW\langchain-minimal\demo.py", line 53, in <module>
generate_response(input("Write: "))
File "D:\Development\WW\langchain-minimal\demo.py", line 49, in generate_response
response = agent_executor.invoke({"input": prompt})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Development\WW\langchain-minimal\venv\Lib\site-packages\langchain\chains\base.py", line 168, in invoke
raise e
File "D:\Development\WW\langchain-minimal\venv\Lib\site-packages\langchain\chains\base.py", line 163, in invoke
final_outputs: Dict[str, Any] = self.prep_outputs(
^^^^^^^^^^^^^^^^^^
File "D:\Development\WW\langchain-minimal\venv\Lib\site-packages\langchain\chains\base.py", line 460, in prep_outputs
self.memory.save_context(inputs, outputs)
File "D:\Development\WW\langchain-minimal\venv\Lib\site-packages\langchain\memory\chat_memory.py", line 40, in save_context
[HumanMessage(content=input_str), AIMessage(content=output_str)]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Development\WW\langchain-minimal\venv\Lib\site-packages\langchain_core\messages\base.py", line 37, in __init__
return super().__init__(content=content, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Development\WW\langchain-minimal\venv\Lib\site-packages\langchain_core\load\serializable.py", line 120, in __init__
super().__init__(**kwargs)
File "D:\Development\WW\langchain-minimal\venv\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for AIMessage
content
str type expected (type=type_error.str)
content
value is not a valid list (type=type_error.list)
```
### Description
[See related line](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/memory/chat_memory.py#L40)
I am encountering this error when creating reAct agents. I looked into the possible cause and I think it has something to do with `save_context` and `asave_context` functions:
```python
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer."""
input_str, output_str = self._get_input_output(inputs, outputs)
self.chat_memory.add_messages(
[HumanMessage(content=input_str), AIMessage(content=output_str)]
)
```
It happens when `output_str` is already of type `AIMessage`. The error message also suggests that validation fails because the `content` passed to `AIMessage(content=output_str)` is not a `str`.
Removing the `AIMessage` wrapping seems to resolve the issue, but I think we really need to address the pydantic error and make the validation more flexible here (e.g. `Any` instead of `str`, or a Union of types). One user-side workaround is sketched below.
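As a user-side workaround until the validation is relaxed, a sketch (assuming the tool only needs the text) that unwraps the `AIMessage` before it reaches memory:
```python
tools = [
    Tool.from_function(
        name="General Chat",
        description="For general chat not covered by other tools",
        func=lambda q: llm.invoke(q).content,  # return a plain str instead of an AIMessage
        return_direct=True,
    )
]
```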
### System Info
langchain==0.1.8
platform: windows
python version: 3.11.4 | The functions `save_context` and `asave_context` are experiencing problems when trying to store output messages, specifically when `output_str` is of the `AIMessage` type | https://api.github.com/repos/langchain-ai/langchain/issues/17867/comments | 5 | 2024-02-21T11:43:51Z | 2024-02-23T09:14:24Z | https://github.com/langchain-ai/langchain/issues/17867 | 2,146,545,271 | 17,867 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
`from langchain.agents.agent_types import AgentType`
### Error Message and Stack Trace (if applicable)
TypeError: type 'Result' is not subscriptable
### Description
TypeError: type 'Result' is not subscriptable
----> 6 from langchain.agents.agent_types import AgentType
7 from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
8 #from langchain_openai import ChatOpenAI
File ~/anaconda3/lib/python3.11/site-packages/langchain/agents/__init__.py:34
31 from pathlib import Path
32 from typing import Any
---> 34 from langchain_community.agent_toolkits import (
35 create_json_agent,
36 create_openapi_agent,
37 create_pbi_agent,
38 create_pbi_chat_agent,
39 create_spark_sql_agent,
40 create_sql_agent,
41 )
42 from langchain_core._api.path import as_import_path
44 from langchain.agents.agent import (
45 Agent,
46 AgentExecutor,
(...)
50 LLMSingleActionAgent,
51 )
File ~/anaconda3/lib/python3.11/site-packages/langchain_community/agent_toolkits/__init__.py:46
44 from langchain_community.agent_toolkits.spark_sql.base import create_spark_sql_agent
45 from langchain_community.agent_toolkits.spark_sql.toolkit import SparkSQLToolkit
---> 46 from langchain_community.agent_toolkits.sql.base import create_sql_agent
47 from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
48 from langchain_community.agent_toolkits.steam.toolkit import SteamToolkit
File ~/anaconda3/lib/python3.11/site-packages/langchain_community/agent_toolkits/sql/base.py:29
19 from langchain_core.prompts.chat import (
20 ChatPromptTemplate,
21 HumanMessagePromptTemplate,
22 MessagesPlaceholder,
23 )
25 from langchain_community.agent_toolkits.sql.prompt import (
26 SQL_FUNCTIONS_SUFFIX,
27 SQL_PREFIX,
28 )
---> 29 from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
30 from langchain_community.tools.sql_database.tool import (
31 InfoSQLDatabaseTool,
32 ListSQLDatabaseTool,
33 )
35 if TYPE_CHECKING:
File ~/anaconda3/lib/python3.11/site-packages/langchain_community/agent_toolkits/sql/toolkit.py:9
7 from langchain_community.agent_toolkits.base import BaseToolkit
8 from langchain_community.tools import BaseTool
----> 9 from langchain_community.tools.sql_database.tool import (
10 InfoSQLDatabaseTool,
11 ListSQLDatabaseTool,
12 QuerySQLCheckerTool,
13 QuerySQLDataBaseTool,
14 )
15 from langchain_community.utilities.sql_database import SQLDatabase
18 class SQLDatabaseToolkit(BaseToolkit):
File ~/anaconda3/lib/python3.11/site-packages/langchain_community/tools/sql_database/tool.py:33
29 class _QuerySQLDataBaseToolInput(BaseModel):
30 query: str = Field(..., description="A detailed and correct SQL query.")
---> 33 class QuerySQLDataBaseTool(BaseSQLDatabaseTool, BaseTool):
34 """Tool for querying a SQL database."""
36 name: str = "sql_db_query"
File ~/anaconda3/lib/python3.11/site-packages/langchain_community/tools/sql_database/tool.py:47, in QuerySQLDataBaseTool()
36 name: str = "sql_db_query"
37 description: str = """
38 Execute a SQL query against the database and get back the result..
39 If the query is not correct, an error message will be returned.
40 If an error is returned, rewrite the query, check the query, and try again.
41 """
43 def _run(
44 self,
45 query: str,
46 run_manager: Optional[CallbackManagerForToolRun] = None,
---> 47 ) -> Union[str, Sequence[Dict[str, Any]], Result[Any]]:
48 """Execute the query, return the results or an error message."""
49 return self.db.run_no_throw(query)
TypeError: type 'Result' is not subscriptable
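If it helps triage: `Result[Any]` in `tools/sql_database/tool.py` subscripts SQLAlchemy's `Result`, which (as far as I know) is only generic in SQLAlchemy 2.x, so an older 1.4 install would raise exactly this. A quick environment check:
```python
import sqlalchemy

# Result became subscriptable (generic) in SQLAlchemy 2.0;
# on 1.4 the Result[Any] annotation raises "type 'Result' is not subscriptable"
print(sqlalchemy.__version__)
```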
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6000
> Python Version: 3.11.5 (main, Sep 11 2023, 08:31:25) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.25
> langchain: 0.1.8
> langchain_community: 0.0.21
> langsmith: 0.1.5
> langchain_experimental: 0.0.52
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Agent Type Import iSsue | https://api.github.com/repos/langchain-ai/langchain/issues/17866/comments | 2 | 2024-02-21T11:36:00Z | 2024-02-21T11:40:53Z | https://github.com/langchain-ai/langchain/issues/17866 | 2,146,531,888 | 17,866 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
When I was testing ChatZhipuAI using the example from the official documentation, an AttributeError occurred: module 'zhipuai' has no attribute 'model_api'. The documentation URL is: https://python.langchain.com/docs/integrations/chat/zhipuai
```
from langchain_community.chat_models import ChatZhipuAI
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
chat = ChatZhipuAI(
temperature=0.5,
api_key="my key***",
model="chatglm_turbo",
)
messages = [
AIMessage(content="Hi."),
SystemMessage(content="Your role is a poet."),
HumanMessage(content="Write a short poem about AI in four lines."),
]
response = chat(messages)
print(response.content)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/ysk/Library/Mobile Documents/com~apple~CloudDocs/langchain/zhipu/simple.py", line 17, in <module>
response = chat(messages)
File "/Users/ysk/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/Users/ysk/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 691, in __call__
generation = self.generate(
File "/Users/ysk/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 408, in generate
raise e
File "/Users/ysk/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 398, in generate
self._generate_with_cache(
File "/Users/ysk/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 577, in _generate_with_cache
return self._generate(
File "/Users/ysk/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/zhipuai.py", line 265, in _generate
response = self.invoke(prompt)
File "/Users/ysk/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/zhipuai.py", line 183, in invoke
return self.zhipuai.model_api.invoke(
AttributeError: module 'zhipuai' has no attribute 'model_api'
### Description
I was testing with the ChatZhipuAI provided by the official documentation
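Possibly relevant (my assumption): the installed `zhipuai` SDK version, since this release of `ChatZhipuAI` calls `zhipuai.model_api`, which I believe only exists in the 1.x SDK:
```python
from importlib.metadata import version

# ChatZhipuAI in langchain-community 0.0.21 appears to expect a 1.x SDK
# exposing the module-level zhipuai.model_api
print(version("zhipuai"))
```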
### System Info
langchain==0.1.8
langchain-community==0.0.21
langchain-core==0.1.24
langchain-openai==0.0.6
langchainhub==0.1.14
platform mac
python 3.10 | ChatZhipuAI invoke error | https://api.github.com/repos/langchain-ai/langchain/issues/17863/comments | 9 | 2024-02-21T11:12:01Z | 2024-07-02T16:08:57Z | https://github.com/langchain-ai/langchain/issues/17863 | 2,146,485,006 | 17,863 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
Take the following Pydantic class:
```python
from langchain_core.pydantic_v1 import BaseModel, Field
class Joke(BaseModel):
setup: str = Field(description="question to set up a joke")
punchline: str = Field(description="answer to resolve the joke")
```
When used with `JsonOutputParser` we can generate pydantic schema as follows:-
```python
from langchain_core.output_parsers import JsonOutputParser
instructions = JsonOutputParser(pydantic_object=Joke).get_format_instructions()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The given example steps are not replicable with `XMLOutputParser`. Although it does have the `tags` field, there is no way to pass the pydantic object directly. Passing JSON schema can result in transient errors in generated outputs.
Ideally an XML schema or something similar should be injected, which would be dynamically generated from the Pydantic object.
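In the meantime, a sketch of the closest workaround I can see: deriving `tags` from the pydantic v1 model's declared fields (my own construction, not an official API):
```python
from langchain_core.output_parsers import XMLOutputParser

# use the model's field names as the allowed XML tags
parser = XMLOutputParser(tags=["joke", *Joke.__fields__.keys()])
instructions = parser.get_format_instructions()
```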
### System Info
System Information
------------------
> OS: Linux
> OS Version: #18~22.04.1-Ubuntu SMP Wed Jan 10 22:54:16 UTC 2024
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.25
> langchain: 0.1.8
> langchain_community: 0.0.21
> langsmith: 0.1.5
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Missing first class pydantic objects support by `XMLOutputParser` | https://api.github.com/repos/langchain-ai/langchain/issues/17862/comments | 2 | 2024-02-21T10:47:37Z | 2024-08-06T16:07:38Z | https://github.com/langchain-ai/langchain/issues/17862 | 2,146,438,574 | 17,862 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from llama_index.vector_stores.milvus import MilvusVectorStore
from langchain_community.retrievers.llama_index import LlamaIndexRetriever
from llama_index.core.indices.vector_store.base import VectorStoreIndex
vector_store = MilvusVectorStore(dim=1536)
retriever = LlamaIndexRetriever(index=VectorStoreIndex.from_vector_store(vector_store))
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 758, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 778, in app
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 299, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 79, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
await response(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/sse_starlette/sse.py", line 255, in __call__
async with anyio.create_task_group() as task_group:
File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__
raise exceptions[0]
File "/usr/local/lib/python3.11/site-packages/sse_starlette/sse.py", line 258, in wrap
await func()
File "/usr/local/lib/python3.11/site-packages/sse_starlette/sse.py", line 245, in stream_response
async for data in self.body_iterator:
File "/usr/local/lib/python3.11/site-packages/langserve/api_handler.py", line 1056, in _stream_log
async for chunk in self._runnable.astream_log(
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 686, in astream_log
async for item in _astream_log_implementation( # type: ignore
File "/usr/local/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py", line 612, in _astream_log_implementation
await task
File "/usr/local/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py", line 566, in consume_astream
async for chunk in runnable.astream(input, config, **kwargs):
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4144, in astream
async for item in self.bound.astream(
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4144, in astream
async for item in self.bound.astream(
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2449, in astream
async for chunk in self.atransform(input_aiter(), config, **kwargs):
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2432, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1595, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py", line 237, in tap_output_aiter
async for chunk in output:
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2402, in _atransform
async for output in final_pipeline:
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4180, in atransform
async for item in self.bound.atransform(
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2432, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1595, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py", line 237, in tap_output_aiter
async for chunk in output:
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2402, in _atransform
async for output in final_pipeline:
File "/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/transform.py", line 60, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1560, in _atransform_stream_with_config
final_input: Optional[Input] = await py_anext(input_for_tracing, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 62, in anext_impl
return await __anext__(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer
item = await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/configurable.py", line 218, in atransform
async for chunk in runnable.atransform(input, config, **kwargs):
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1061, in atransform
async for chunk in input:
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1061, in atransform
async for chunk in input:
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2862, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1595, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py", line 237, in tap_output_aiter
async for chunk in output:
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2849, in _atransform
chunk = AddableDict({step_name: task.result()})
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2832, in get_next_chunk
return await py_anext(generator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2432, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1595, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py", line 237, in tap_output_aiter
async for chunk in output:
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2402, in _atransform
async for output in final_pipeline:
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3711, in atransform
async for output in self._atransform_stream_with_config(
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1560, in _atransform_stream_with_config
final_input: Optional[Input] = await py_anext(input_for_tracing, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 62, in anext_impl
return await __anext__(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer
item = await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1071, in atransform
async for output in self.astream(final, config, **kwargs):
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 592, in astream
yield await self.ainvoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 137, in ainvoke
return await self.aget_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 280, in aget_relevant_documents
raise e
File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 273, in aget_relevant_documents
result = await self._aget_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 168, in _aget_relevant_documents
return await run_in_executor(
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 493, in run_in_executor
return await asyncio.get_running_loop().run_in_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/[email protected]/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_community/retrievers/llama_index.py", line 28, in _get_relevant_documents
raise ImportError(
ImportError: You need to install `pip install llama-index` to use this retriever.
### Description
Since LlamaIndex v0.10, the package structure is different. You might have to change these imports in the `LlamaIndexRetriever` class in `langchain_community.retrievers.llama_index` from the legacy paths:

```python
from llama_index.indices.base import BaseGPTIndex
from llama_index.response.schema import Response
```

to the new `llama_index.core` paths:

```python
from llama_index.core.indices.base import BaseGPTIndex
from llama_index.core.base.response.schema import Response
```
**Note:** This might not be the only import problem introduced by the new LlamaIndex release.
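A hedged patch sketch (an assumption about how the retriever could tolerate both SDK generations, not the shipped fix): try the post-0.10 paths first and fall back to the legacy layout.

```python
# Version-tolerant imports for LlamaIndexRetriever; the except branch uses
# the pre-0.10 module layout.
try:
    from llama_index.core.indices.base import BaseGPTIndex
    from llama_index.core.base.response.schema import Response
except ImportError:
    from llama_index.indices.base import BaseGPTIndex
    from llama_index.response.schema import Response
```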
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:27 PDT 2023; root:xnu-10002.41.9~6/RELEASE_X86_64
> Python Version: 3.11.6 (main, Oct 2 2023, 13:45:54) [Clang 15.0.0 (clang-1500.0.40.1)]
Package Information
-------------------
> langchain_core: 0.1.24
> langchain: 0.1.8
> langchain_community: 0.0.21
> langsmith: 0.1.3
> langchain_cli: 0.0.21
> langchain_experimental: 0.0.50
> langchain_openai: 0.0.5
> langchainhub: 0.1.14
> langserve: 0.0.41
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | LlamaIndexRetriever imports not working since Llamaindex v0.10 | https://api.github.com/repos/langchain-ai/langchain/issues/17860/comments | 2 | 2024-02-21T09:24:32Z | 2024-04-26T21:12:53Z | https://github.com/langchain-ai/langchain/issues/17860 | 2,146,229,581 | 17,860 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
found_response_schemas = [
    ResponseSchema(name="answer", description="Response substitutes from the context"),
    ResponseSchema(
        name="found",
        description="Return whether it could find the answer from the references or not.",
    ),
]
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to make an output parser that tells whether the model could successfully find an answer from the reference documents using RAG or not.
But it seems like it always returns `"found" = true`.
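One hedged workaround sketch (an assumption, not a confirmed fix): `ResponseSchema` has a `type` field that defaults to `"string"`, so declaring the boolean type explicitly and spelling out the `false` case in the description may steer the model:

```python
from langchain.output_parsers import ResponseSchema, StructuredOutputParser

# type="boolean" makes the expected JSON type explicit in the generated
# format instructions; the description now states when `false` applies.
found_response_schemas = [
    ResponseSchema(name="answer", description="Response substituted from the context"),
    ResponseSchema(
        name="found",
        description=(
            "true only if the answer appears in the provided references; "
            "false if it was not found there"
        ),
        type="boolean",
    ),
]
parser = StructuredOutputParser.from_response_schemas(found_response_schemas)
print(parser.get_format_instructions())
```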
### System Info
langchain==0.1.2
langchain-community==0.0.14 | custom boolean output parser not working well | https://api.github.com/repos/langchain-ai/langchain/issues/17858/comments | 3 | 2024-02-21T08:43:25Z | 2024-05-31T23:56:15Z | https://github.com/langchain-ai/langchain/issues/17858 | 2,146,134,674 | 17,858 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Just a silly typo by your developer or maybe content editor: "RetreivalQA" should read "RetrievalQA".
Link : https://python.langchain.com/docs/modules/chains/
### Idea or request for content:
Just change it. Nothing else. You are going to be a unicorn soon. Hire me as your content writer :)

| DOC: Spelling Mistake on Website (Chains: RetreivalQA) | https://api.github.com/repos/langchain-ai/langchain/issues/17851/comments | 1 | 2024-02-21T06:20:00Z | 2024-03-08T20:09:23Z | https://github.com/langchain-ai/langchain/issues/17851 | 2,145,924,931 | 17,851 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
# Refer: https://python.langchain.com/docs/integrations/chat/google_generative_ai
from langchain_google_genai import (
    ChatGoogleGenerativeAI,
    HarmBlockThreshold,
    HarmCategory,
)
import os

os.environ["GOOGLE_API_KEY"] = ""

llm = ChatGoogleGenerativeAI(
    model="gemini-pro",
    safety_settings={
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    },
)

result = llm.invoke("Write a ballad about LangChain")
print(result.content)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/manjunath.s/Desktop/sample_test/gemini/safety_settings/test.py", line 18, in <module>
result = llm.invoke("Write a ballad about LangChain")
File "/Users/manjunath.s/Library/Python/3.9/lib/python/site-packages/langchain_core/language_models/chat_models.py", line 166, in invoke
self.generate_prompt(
File "/Users/manjunath.s/Library/Python/3.9/lib/python/site-packages/langchain_core/language_models/chat_models.py", line 544, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/Users/manjunath.s/Library/Python/3.9/lib/python/site-packages/langchain_core/language_models/chat_models.py", line 408, in generate
raise e
File "/Users/manjunath.s/Library/Python/3.9/lib/python/site-packages/langchain_core/language_models/chat_models.py", line 398, in generate
self._generate_with_cache(
File "/Users/manjunath.s/Library/Python/3.9/lib/python/site-packages/langchain_core/language_models/chat_models.py", line 577, in _generate_with_cache
return self._generate(
File "/Users/manjunath.s/Library/Python/3.9/lib/python/site-packages/langchain_google_genai/chat_models.py", line 550, in _generate
params, chat, message = self._prepare_chat(
File "/Users/manjunath.s/Library/Python/3.9/lib/python/site-packages/langchain_google_genai/chat_models.py", line 645, in _prepare_chat
client = genai.GenerativeModel(
TypeError: __init__() got an unexpected keyword argument 'tools'
### Description
I was trying to disable the safety filters on Google Gemini, and I'm facing the error below:
TypeError: __init__() got an unexpected keyword argument 'tools'
Sample code is attached for reference.
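A hedged diagnostic sketch (assumption: an older `google-generativeai` SDK that predates the `tools` parameter on `GenerativeModel.__init__` is installed, and upgrading it would resolve the mismatch):

```python
# Checks whether the installed google-generativeai SDK accepts the `tools`
# keyword that langchain-google-genai forwards to GenerativeModel.
import inspect
from importlib.metadata import version

import google.generativeai as genai

print(version("google-generativeai"))
print("tools" in inspect.signature(genai.GenerativeModel.__init__).parameters)
```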
### System Info
```
langchain==0.1.8
langchain-community==0.0.21
langchain-core==0.1.25
langchain-google-genai==0.0.9
```
Python 3.9.6
| safety settings lead to __init__() got an unexpected keyword argument 'tools' | https://api.github.com/repos/langchain-ai/langchain/issues/17847/comments | 1 | 2024-02-21T05:07:42Z | 2024-05-31T23:58:42Z | https://github.com/langchain-ai/langchain/issues/17847 | 2,145,848,491 | 17,847 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
`SemanticSimilarityExampleSelector.from_examples` is throwing an error:
ServiceRequestError: Invalid URL "/indexes('langchain-index')?api-version=2023-10-01-Preview": No scheme supplied. Perhaps you meant https:///indexes('langchain-index')?api-version=2023-10-01-Preview?
I am creating a vector store and passing it to `SemanticSimilarityExampleSelector.from_examples` as below:

```python
vector_store = AzureSearch(
    azure_search_endpoint=vector_store_address,
    azure_search_key=vector_store_password,
    index_name=index_name,
    embedding_function=embeddings.embed_query,
)

example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples,
    embeddings,
    vector_store,
    k=5,
    input_keys=["input"],
)
```
### Error Message and Stack Trace (if applicable)
ServiceRequestError Traceback (most recent call last)
Cell In[11], line 1
----> 1 example_selector = SemanticSimilarityExampleSelector.from_examples(
2 examples,
3 embeddings,
4 vector_store,
5 k=5,
6 input_keys=["input"]
7 )
File ~\anaconda3\Lib\site-packages\langchain_core\example_selectors\semantic_similarity.py:105, in SemanticSimilarityExampleSelector.from_examples(cls, examples, embeddings, vectorstore_cls, k, input_keys, example_keys, vectorstore_kwargs, **vectorstore_cls_kwargs)
103 else:
104 string_examples = [" ".join(sorted_values(eg)) for eg in examples]
--> 105 vectorstore = vectorstore_cls.from_texts(
106 string_examples, embeddings, metadatas=examples, **vectorstore_cls_kwargs
107 )
108 return cls(
109 vectorstore=vectorstore,
110 k=k,
(...)
113 vectorstore_kwargs=vectorstore_kwargs,
114 )
File ~\anaconda3\Lib\site-packages\langchain_community\vectorstores\azuresearch.py:632, in AzureSearch.from_texts(cls, texts, embedding, metadatas, azure_search_endpoint, azure_search_key, index_name, **kwargs)
620 @classmethod
621 def from_texts(
622 cls: Type[AzureSearch],
(...)
630 ) -> AzureSearch:
631 # Creating a new Azure Search instance
--> 632 azure_search = cls(
633 azure_search_endpoint,
634 azure_search_key,
635 index_name,
636 embedding,
637 )
638 azure_search.add_texts(texts, metadatas, **kwargs)
639 return azure_search
File ~\anaconda3\Lib\site-packages\langchain_community\vectorstores\azuresearch.py:269, in AzureSearch.__init__(self, azure_search_endpoint, azure_search_key, index_name, embedding_function, search_type, semantic_configuration_name, fields, vector_search, semantic_configurations, scoring_profiles, default_scoring_profile, cors_options, **kwargs)
267 if "user_agent" in kwargs and kwargs["user_agent"]:
268 user_agent += " " + kwargs["user_agent"]
--> 269 self.client = _get_search_client(
270 azure_search_endpoint,
271 azure_search_key,
272 index_name,
273 semantic_configuration_name=semantic_configuration_name,
274 fields=fields,
275 vector_search=vector_search,
276 semantic_configurations=semantic_configurations,
277 scoring_profiles=scoring_profiles,
278 default_scoring_profile=default_scoring_profile,
279 default_fields=default_fields,
280 user_agent=user_agent,
281 cors_options=cors_options,
282 )
283 self.search_type = search_type
284 self.semantic_configuration_name = semantic_configuration_name
File ~\anaconda3\Lib\site-packages\langchain_community\vectorstores\azuresearch.py:112, in _get_search_client(endpoint, key, index_name, semantic_configuration_name, fields, vector_search, semantic_configurations, scoring_profiles, default_scoring_profile, default_fields, user_agent, cors_options)
108 index_client: SearchIndexClient = SearchIndexClient(
109 endpoint=endpoint, credential=credential, user_agent=user_agent
110 )
111 try:
--> 112 index_client.get_index(name=index_name)
113 except ResourceNotFoundError:
114 # Fields configuration
115 if fields is not None:
116 # Check mandatory fields
File ~\anaconda3\Lib\site-packages\azure\core\tracing\decorator.py:78, in distributed_trace.<locals>.decorator.<locals>.wrapper_use_tracer(*args, **kwargs)
76 span_impl_type = settings.tracing_implementation()
77 if span_impl_type is None:
---> 78 return func(*args, **kwargs)
80 # Merge span is parameter is set, but only if no explicit parent are passed
81 if merge_span and not passed_in_parent:
File ~\anaconda3\Lib\site-packages\azure\search\documents\indexes\_search_index_client.py:149, in SearchIndexClient.get_index(self, name, **kwargs)
131 """
132
133 :param name: The name of the index to retrieve.
(...)
146 :caption: Get an index.
147 """
148 kwargs["headers"] = self._merge_client_headers(kwargs.get("headers"))
--> 149 result = self._client.indexes.get(name, **kwargs)
150 return SearchIndex._from_generated(result)
File ~\anaconda3\Lib\site-packages\azure\core\tracing\decorator.py:78, in distributed_trace.<locals>.decorator.<locals>.wrapper_use_tracer(*args, **kwargs)
76 span_impl_type = settings.tracing_implementation()
77 if span_impl_type is None:
---> 78 return func(*args, **kwargs)
80 # Merge span is parameter is set, but only if no explicit parent are passed
81 if merge_span and not passed_in_parent:
File ~\anaconda3\Lib\site-packages\azure\search\documents\indexes\_generated\operations\_indexes_operations.py:857, in IndexesOperations.get(self, index_name, request_options, **kwargs)
854 _request.url = self._client.format_url(_request.url, **path_format_arguments)
856 _stream = False
--> 857 pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access
858 _request, stream=_stream, **kwargs
859 )
861 response = pipeline_response.http_response
863 if response.status_code not in [200]:
File ~\anaconda3\Lib\site-packages\azure\core\pipeline\_base.py:230, in Pipeline.run(self, request, **kwargs)
228 pipeline_request: PipelineRequest[HTTPRequestType] = PipelineRequest(request, context)
229 first_node = self._impl_policies[0] if self._impl_policies else _TransportRunner(self._transport)
--> 230 return first_node.send(pipeline_request)
File ~\anaconda3\Lib\site-packages\azure\core\pipeline\_base.py:86, in _SansIOHTTPPolicyRunner.send(self, request)
84 _await_result(self._policy.on_request, request)
85 try:
---> 86 response = self.next.send(request)
87 except Exception: # pylint: disable=broad-except
88 _await_result(self._policy.on_exception, request)
File ~\anaconda3\Lib\site-packages\azure\core\pipeline\_base.py:86, in _SansIOHTTPPolicyRunner.send(self, request)
84 _await_result(self._policy.on_request, request)
85 try:
---> 86 response = self.next.send(request)
87 except Exception: # pylint: disable=broad-except
88 _await_result(self._policy.on_exception, request)
[... skipping similar frames: _SansIOHTTPPolicyRunner.send at line 86 (2 times)]
File ~\anaconda3\Lib\site-packages\azure\core\pipeline\_base.py:86, in _SansIOHTTPPolicyRunner.send(self, request)
84 _await_result(self._policy.on_request, request)
85 try:
---> 86 response = self.next.send(request)
87 except Exception: # pylint: disable=broad-except
88 _await_result(self._policy.on_exception, request)
File ~\anaconda3\Lib\site-packages\azure\core\pipeline\policies\_redirect.py:197, in RedirectPolicy.send(self, request)
195 original_domain = get_domain(request.http_request.url) if redirect_settings["allow"] else None
196 while retryable:
--> 197 response = self.next.send(request)
198 redirect_location = self.get_redirect_location(response)
199 if redirect_location and redirect_settings["allow"]:
File ~\anaconda3\Lib\site-packages\azure\core\pipeline\policies\_retry.py:553, in RetryPolicy.send(self, request)
551 is_response_error = True
552 continue
--> 553 raise err
554 finally:
555 end_time = time.time()
File ~\anaconda3\Lib\site-packages\azure\core\pipeline\policies\_retry.py:531, in RetryPolicy.send(self, request)
529 try:
530 self._configure_timeout(request, absolute_timeout, is_response_error)
--> 531 response = self.next.send(request)
532 if self.is_retry(retry_settings, response):
533 retry_active = self.increment(retry_settings, response=response)
File ~\anaconda3\Lib\site-packages\azure\core\pipeline\_base.py:86, in _SansIOHTTPPolicyRunner.send(self, request)
84 _await_result(self._policy.on_request, request)
85 try:
---> 86 response = self.next.send(request)
87 except Exception: # pylint: disable=broad-except
88 _await_result(self._policy.on_exception, request)
File ~\anaconda3\Lib\site-packages\azure\core\pipeline\_base.py:86, in _SansIOHTTPPolicyRunner.send(self, request)
84 _await_result(self._policy.on_request, request)
85 try:
---> 86 response = self.next.send(request)
87 except Exception: # pylint: disable=broad-except
88 _await_result(self._policy.on_exception, request)
[... skipping similar frames: _SansIOHTTPPolicyRunner.send at line 86 (2 times)]
File ~\anaconda3\Lib\site-packages\azure\core\pipeline\_base.py:86, in _SansIOHTTPPolicyRunner.send(self, request)
84 _await_result(self._policy.on_request, request)
85 try:
---> 86 response = self.next.send(request)
87 except Exception: # pylint: disable=broad-except
88 _await_result(self._policy.on_exception, request)
File ~\anaconda3\Lib\site-packages\azure\core\pipeline\_base.py:119, in _TransportRunner.send(self, request)
109 """HTTP transport send method.
110
111 :param request: The PipelineRequest object.
(...)
114 :rtype: ~azure.core.pipeline.PipelineResponse
115 """
116 cleanup_kwargs_for_transport(request.context.options)
117 return PipelineResponse(
118 request.http_request,
--> 119 self._sender.send(request.http_request, **request.context.options),
120 context=request.context,
121 )
File ~\anaconda3\Lib\site-packages\azure\core\pipeline\transport\_requests_basic.py:381, in RequestsTransport.send(self, request, **kwargs)
378 error = ServiceRequestError(err, error=err)
380 if error:
--> 381 raise error
382 if _is_rest(request):
383 from azure.core.rest._requests_basic import RestRequestsTransportResponse
ServiceRequestError: Invalid URL "/indexes('langchain-index')?api-version=2023-10-01-Preview": No scheme supplied. Perhaps you meant https:///indexes('langchain-index')?api-version=2023-10-01-Preview?
### Description
I am trying to use the example selector feature (`SemanticSimilarityExampleSelector.from_examples`) of LangChain.
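For what it's worth, a hedged sketch of what the stack trace suggests `from_examples` expects: the vector store class plus its constructor kwargs, since it builds the store itself via `from_texts`. This is an inference from the trace, not a confirmed fix:

```python
from langchain_community.vectorstores import AzureSearch
from langchain_core.example_selectors import SemanticSimilarityExampleSelector

# Pass the class, not an instance: from_examples() calls
# vectorstore_cls.from_texts(..., **vectorstore_cls_kwargs) internally, so
# the Azure connection details must travel as keyword arguments.
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples,
    embeddings,
    AzureSearch,
    k=5,
    input_keys=["input"],
    azure_search_endpoint=vector_store_address,
    azure_search_key=vector_store_password,
    index_name=index_name,
)
```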
### System Info
langchain 0.1.8
platform - windows
python version - 3.11.5 | Issue with SemanticSimilarityExampleSelector.from_examples | https://api.github.com/repos/langchain-ai/langchain/issues/17846/comments | 7 | 2024-02-21T04:44:48Z | 2024-06-27T16:07:34Z | https://github.com/langchain-ai/langchain/issues/17846 | 2,145,823,292 | 17,846 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Hi,
I was reading through the guide for Prompts and I realized that the ordering of the articles didn't make sense.
After the article about composition there is the "Example Selector Types" article, which is followed by "Example Selectors", which is followed by "Few Shot Prompting". I think the ordering needs to be the opposite of what it is.
Link to the documentation: https://python.langchain.com/docs/modules/model_io/prompts/example_selectors
Have a nice day,
### Idea or request for content:
_No response_ | Ordering under Modules > Prompts seems to be wrong | https://api.github.com/repos/langchain-ai/langchain/issues/17825/comments | 3 | 2024-02-20T22:56:00Z | 2024-05-31T23:56:12Z | https://github.com/langchain-ai/langchain/issues/17825 | 2,145,430,838 | 17,825 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
max_marginal_relevance should be moved to core in a way that doesn't require numpy in order to prevent code duplication between packages | Some vectorstore utils are duplicated between packages | https://api.github.com/repos/langchain-ai/langchain/issues/17824/comments | 1 | 2024-02-20T22:41:55Z | 2024-05-31T23:58:43Z | https://github.com/langchain-ai/langchain/issues/17824 | 2,145,415,747 | 17,824 |
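A hedged sketch of what a numpy-free `max_marginal_relevance` in core could look like (assumptions: embeddings are unit-normalized so cosine similarity reduces to a dot product; the real implementation may differ):

```python
from typing import List


def _dot(a: List[float], b: List[float]) -> float:
    return sum(x * y for x, y in zip(a, b))


def max_marginal_relevance(
    query: List[float],
    embeddings: List[List[float]],
    lambda_mult: float = 0.5,
    k: int = 4,
) -> List[int]:
    """Greedy MMR: trade off relevance to the query against redundancy."""
    selected: List[int] = []
    candidates = list(range(len(embeddings)))
    while candidates and len(selected) < k:
        best_idx, best_score = candidates[0], float("-inf")
        for i in candidates:
            relevance = _dot(query, embeddings[i])
            redundancy = max(
                (_dot(embeddings[i], embeddings[j]) for j in selected),
                default=0.0,
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
        candidates.remove(best_idx)
    return selected
```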
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from flask import Blueprint, request, Response, current_app, jsonify, stream_with_context
from flask_jwt_extended import jwt_required, current_user
from project import db
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma
from langchain_community.document_loaders.csv_loader import CSVLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder, PromptTemplate
from langchain_community.chat_message_histories import MongoDBChatMessageHistory
from langchain.agents import tool, AgentExecutor, create_react_agent
from langchain.tools.retriever import create_retriever_tool
from langchain.agents import AgentExecutor
from langchain_openai import OpenAI
from langchain_openai import OpenAIEmbeddings
import json
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain_openai import ChatOpenAI
from langchain.callbacks.streaming_stdout_final_only import FinalStreamingStdOutCallbackHandler
import time
import asyncio
from random import choice

chat_bot_2_Blueprint = Blueprint('chat_bot_2', __name__)


@chat_bot_2_Blueprint.route('/api/ai/chat_bot_2/', methods=['GET', 'POST'])  # <- from '/'
@jwt_required()
async def chat_bot_2():
    user_ID = current_user.user_ID
    chat_input = request.json.get("chat_input", None)
    chat_ID = request.json.get("live_chat_ID", None)

    path_to_chroma_db = "./project/api/ai/chroma_db/"
    path_to_csv = "/love_qa.csv"
    loader = CSVLoader(file_path=path_to_csv)
    data = loader.load()
    vector_collection = "dr_love_4"

    model_type = "open_ai"
    if model_type == "open_ai":
        embedding_model = OpenAIEmbeddings(openai_api_key=current_app.config['OPENAI_SECRET'])
        llm = ChatOpenAI(
            openai_api_key=current_app.config['OPENAI_SECRET'],
            model_name="gpt-3.5-turbo-0125",
            streaming=True,
            max_tokens=2000,
            callbacks=[FinalStreamingStdOutCallbackHandler(answer_prefix_tokens=['Final', ' Answer', ':'])],
        )
    else:
        model_type = OllamaEmbeddings(model="llama2")
        llm = OllamaFunctions(model="llama2")
        print('LLAMA')

    react_template = get_react_template()
    custom_react_prompt = PromptTemplate.from_template(react_template)

    # save to disk
    # vectorstore = Chroma.from_documents(documents=data, embedding=embedding_model, persist_directory=path_to_chroma_db + vector_collection)
    # load from disk
    vectorstore = Chroma(persist_directory=path_to_chroma_db + vector_collection, embedding_function=embedding_model)
    retriever = vectorstore.as_retriever(search_type="similarity", search_kwargs={"k": 2})
    retreiver_title = "hormone_doctor_medical_questions_and_answers"
    retreiver_description = "Searches and returns a series of questions and answers on hormones and medical advice and also personal questions"
    retriever_tool = create_retriever_tool(retriever, retreiver_title, retreiver_description)
    tools = [confetti, get_word_length, retriever_tool]

    message_history = MongoDBChatMessageHistory(session_id=chat_ID, connection_string=current_app.config['MONGO_CLI'], database_name=current_app.config['MONGO_DB'], collection_name="chat_histories")
    agent = create_react_agent(llm=llm, tools=tools, prompt=custom_react_prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, max_iterations=5, max_execution_time=100, return_intermediate_steps=True, handle_parsing_errors=True).with_config({"run_name": "Agent"})

    async def stream_output(chat_input):
        # async for chunk in agent_executor.stream({"input": chat_input, "chat_history": message_history}):
        #     print(chunk)
        #     if "output" in chunk:
        #         # print(f'Final Output: {chunk["output"]}')
        #         yield chunk["output"]
        async for event in agent_executor.astream_events({"input": chat_input, "chat_history": message_history}, version="v1"):
            kind = event["event"]
            if kind == "on_chain_start":
                if (event["name"] == "Agent"):  # Was assigned when creating the agent with `.with_config({"run_name": "Agent"})`
                    print(f"Starting agent: {event['name']} with input: {event['data'].get('input')}")
            elif kind == "on_chat_model_stream":
                content = event["data"]["chunk"].content
                if content:
                    print(content, end="|")
                    yield content

    update_chat_history(user_ID, chat_ID, chat_input)
    return Response(stream_with_context(stream_output(chat_input)), mimetype='text/event-stream')


def update_chat_history(user_ID, chat_ID, chat_input):
    preview = None
    chats = current_user.chats
    # Assume chat_ID and user_ID are defined earlier in your code
    chat_exists = any(chat_ID == item.get('chat_ID') for item in chats)
    if not chat_exists:
        db.credentials.update_one({"user_ID": user_ID}, {"$push": {'chats': {'chat_ID': chat_ID, 'preview': chat_input}}})
    else:
        # Move the items to the front of the list
        # get the preview
        for item in chats:
            if item['chat_ID'] == chat_ID:
                preview = item['preview']
        # First, pull the item if it exists
        db.credentials.update_one({"user_ID": user_ID, "chats.chat_ID": chat_ID}, {"$pull": {"chats": {"chat_ID": chat_ID}}})
        # Then, push the item to the array
        db.credentials.update_one({"user_ID": user_ID}, {"$push": {"chats": {"chat_ID": chat_ID, "preview": preview}}})


def get_react_template():
    react_template = """
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
TOOLS:
------
Assistant has access to the following tools:
{tools}
To use a tool, please use the following format:
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:
Thought: Do I need to use a tool? No
Final Answer: [your response here]
Begin!
Previous conversation history:
{chat_history}
Question: {input}
New input:
{agent_scratchpad}
"""
    return react_template


@tool
async def confetti(string: str) -> str:
    """Adds a random word to the string"""
    return "confetti " + string


@tool
async def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)


@tool
async def where_cat_is_hiding() -> str:
    """Where is the cat hiding right now?"""
    return choice(["under the bed", "on the shelf", "in your heart"])
```
### Error Message and Stack Trace (if applicable)
Debugging middleware caught exception in streamed response at a point where response headers were already sent.
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/werkzeug/wsgi.py", line 500, in __next__
return self._next()
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/werkzeug/wrappers/response.py", line 50, in _iter_encoded
for item in iterable:
TypeError: 'function' object is not iterable
### Description
I am trying to stream the output from an OpenAI agent using astream or astream_events. Neither works.
I have tried many different potential solutions and would appreciate some guidance.
Thank You.
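One hedged workaround sketch (assumption: Werkzeug cannot iterate an async generator, so draining it on a private event loop and re-yielding synchronously would sidestep the TypeError):

```python
import asyncio

def sync_stream(chat_input):
    """Bridge the async generator to the sync iterator Flask expects."""
    loop = asyncio.new_event_loop()
    agen = stream_output(chat_input)
    try:
        while True:
            yield loop.run_until_complete(agen.__anext__())
    except StopAsyncIteration:
        pass
    finally:
        loop.close()

# return Response(stream_with_context(sync_stream(chat_input)), mimetype='text/event-stream')
```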
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:33:31 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T8112
> Python Version: 3.11.2 (main, Mar 24 2023, 00:31:37) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.0.87
> langchain_cli: 0.0.21
> langchain_experimental: 0.0.51
> langchain_openai: 0.0.6
> langchainhub: 0.1.14
> langserve: 0.0.41
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | Streaming Agent Error -----> TypeError: 'function' object is not iterable | https://api.github.com/repos/langchain-ai/langchain/issues/17821/comments | 4 | 2024-02-20T20:53:59Z | 2024-03-07T21:50:42Z | https://github.com/langchain-ai/langchain/issues/17821 | 2,145,263,044 | 17,821 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.agents import AgentExecutor, create_react_agent, Tool
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/home/airflow/tm-ml-chat/new.py", line 25, in <module>
from langchain.agents import AgentExecutor, create_react_agent, Tool
File "/home/airflow/.local/lib/python3.10/site-packages/langchain/agents/__init__.py", line 34, in <module>
from langchain_community.agent_toolkits import (
File "/home/airflow/.local/lib/python3.10/site-packages/langchain_community/agent_toolkits/__init__.py", line 46, in <module>
from langchain_community.agent_toolkits.sql.base import create_sql_agent
File "/home/airflow/.local/lib/python3.10/site-packages/langchain_community/agent_toolkits/sql/base.py", line 29, in <module>
from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
File "/home/airflow/.local/lib/python3.10/site-packages/langchain_community/agent_toolkits/sql/toolkit.py", line 9, in <module>
from langchain_community.tools.sql_database.tool import (
File "/home/airflow/.local/lib/python3.10/site-packages/langchain_community/tools/sql_database/tool.py", line 33, in <module>
class QuerySQLDataBaseTool(BaseSQLDatabaseTool, BaseTool):
File "/home/airflow/.local/lib/python3.10/site-packages/langchain_community/tools/sql_database/tool.py", line 47, in QuerySQLDataBaseTool
) -> Union[str, Sequence[Dict[str, Any]], Result[Any]]:
TypeError: 'type' object is not subscriptable
```
### Description
When I try to run the import shown above, I get that error.
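A hedged reading of the trace (an assumption, not confirmed): the `Result[Any]` annotation in `langchain_community.tools.sql_database.tool` requires SQLAlchemy 2.x, where `Result` became generic, while SQLAlchemy 1.4.51 raises exactly this `TypeError` on subscription. A quick isolation check:

```python
# On SQLAlchemy 1.4.x this reproduces the failure; on 2.x it passes.
from typing import Any

import sqlalchemy
from sqlalchemy.engine import Result

print(sqlalchemy.__version__)
annotation = Result[Any]  # TypeError: 'type' object is not subscriptable on 1.4
print(annotation)
```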
### System Info
langchain 0.1.8
langchain_community-0.0.21
sqlalchemy 1.4.51 | TypeError: 'type' object is not subscriptable when importing Agent | https://api.github.com/repos/langchain-ai/langchain/issues/17819/comments | 8 | 2024-02-20T19:07:03Z | 2024-02-23T00:52:08Z | https://github.com/langchain-ai/langchain/issues/17819 | 2,145,085,038 | 17,819 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Doc page: https://python.langchain.com/docs/integrations/vectorstores/azuresearch
The documentation for AzureSearch uses environment variables from the deprecated pre-v1.x openai library. It should be updated to use the environment variables that pertain to openai v1.x with Azure: https://github.com/openai/openai-python?tab=readme-ov-file#microsoft-azure-openai
### Idea or request for content:
Additionally, since the docs show using Azure OpenAI, we should probably create an instance of `AzureOpenAIEmbeddings` instead of `OpenAIEmbeddings` in the example. | DOC: AzureSearch doc should be updated to use openai>=1.x | https://api.github.com/repos/langchain-ai/langchain/issues/17818/comments | 1 | 2024-02-20T18:59:31Z | 2024-05-31T23:56:16Z | https://github.com/langchain-ai/langchain/issues/17818 | 2,145,072,704 | 17,818 |
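A hedged sketch of the suggested update (the environment variable names follow the openai v1 README linked above; the deployment name and API version below are placeholders):

```python
import os

from langchain_openai import AzureOpenAIEmbeddings

# v1.x-style Azure variables, replacing the deprecated pre-1.x ones.
os.environ["AZURE_OPENAI_API_KEY"] = "<your-key>"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<resource>.openai.azure.com/"

embeddings = AzureOpenAIEmbeddings(
    azure_deployment="<embedding-deployment-name>",
    openai_api_version="2023-05-15",
)
```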
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from langchain.chains import SequentialChain
from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.prompts import PromptTemplate, ChatPromptTemplate
from langchain.chains import LLMChain
from langchain.chat_models import ChatVertexAI
from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE

chat = ChatVertexAI(
    model_name="chat-bison@001",
    max_output_tokens=1000,
    temperature=0.0,
    top_k=10,
    top_p=0.8,
)


def create_sequential_chain(llm):
    first_prompt = ChatPromptTemplate.from_template(
        "Translate the following review to french:"
        "\n\n{input}"
    )
    # chain 1: input= Review and output= French_Review
    chain_one = LLMChain(llm=llm, prompt=first_prompt, output_key="French_Review")

    second_prompt = ChatPromptTemplate.from_template(
        "Can you summarize the following review in 1 sentence:"
        "\n\n{French_Review}"
    )
    # chain 2: input= French_Review and output= summary
    chain_two = LLMChain(llm=llm, prompt=second_prompt, output_key="summary")

    # prompt template 3: translate to english
    third_prompt = ChatPromptTemplate.from_template(
        "What language is the following review:\n\n{input}"
    )
    # chain 3: input= Review and output= language
    chain_three = LLMChain(llm=llm, prompt=third_prompt, output_key="language")

    # prompt template 4: follow up message
    fourth_prompt = ChatPromptTemplate.from_template(
        "Write a follow up response to the following "
        "summary in the specified language:"
        "\n\nSummary: {summary}\n\nLanguage: {language}"
    )
    # chain 4: input= summary, language and output= followup_message
    chain_four = LLMChain(llm=llm, prompt=fourth_prompt, output_key="followup_message")

    # overall_chain: input= Review
    # and output= French_Review, summary, followup_message
    sequential_chain = SequentialChain(
        chains=[chain_one, chain_two, chain_three, chain_four],
        input_variables=["input"],
        output_variables=["French_Review", "summary", "followup_message"],
        verbose=True,
    )
    return sequential_chain


sequential_chain = create_sequential_chain(chat)

####### The sequential chain works fine
sequential_chain_response = sequential_chain("I find the taste mediocre. The foam doesn't hold, it's strange. I buy the same ones in stores and the taste is much better.")
print(sequential_chain_response)

######## Doesn't work in the router chain
analysis_template = """You are a data analyst. You are great at analyze and summarize product reviews.
When you don't know the answer to a question you admit that you don't know. \nFor example:
I find the taste mediocre. The foam doesn't hold, it's strange.
""\nHere is a question:
{input}"""

prompt_infos = [
    {
        "name": "analysis",
        "description": "great at analyze and summarize product reviews",
        "prompt_template": analysis_template,
        "destination_chain": sequential_chain,
    }
]

destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    destination_chains[name] = p_info["destination_chain"]

destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)

default_prompt = ChatPromptTemplate.from_template("{input}")
default_chain = LLMChain(llm=chat, prompt=default_prompt)

router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(
    destinations=destinations_str
)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)

router_chain = LLMRouterChain.from_llm(chat, router_prompt)
chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True,
)
chain.run("I find the taste mediocre. The foam doesn't hold, it's strange. I buy the same ones in stores and the taste is much better.")
```
### Error Message and Stack Trace (if applicable)
> Entering new MultiPromptChain chain...
/opt/conda/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
/opt/conda/lib/python3.10/site-packages/langchain/chains/llm.py:316: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
warnings.warn(
analysis: {'input': "I find the taste mediocre. The foam doesn't hold, it's strange. I buy the same ones in stores and the taste is much better."}
> Entering new SequentialChain chain...
> Finished chain.
> Finished chain.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[1], line 112
106 router_chain = LLMRouterChain.from_llm(chat, router_prompt)
107 chain = MultiPromptChain(router_chain=router_chain,
108 destination_chains=destination_chains,
109 default_chain=default_chain,
110 verbose=True
111 )
--> 112 chain.run("I find the taste mediocre. The foam doesn't hold, it's strange. I buy the same ones in stores and the taste is much better.")
File /opt/conda/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:145, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
143 warned = True
144 emit_warning()
--> 145 return wrapped(*args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:538, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
536 if len(args) != 1:
537 raise ValueError("`run` supports only one positional argument.")
--> 538 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
539 _output_key
540 ]
542 if kwargs and not args:
543 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
544 _output_key
545 ]
File /opt/conda/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:145, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
143 warned = True
144 emit_warning()
--> 145 return wrapped(*args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:363, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
331 """Execute the chain.
332
333 Args:
(...)
354 `Chain.output_keys`.
355 """
356 config = {
357 "callbacks": callbacks,
358 "tags": tags,
359 "metadata": metadata,
360 "run_name": run_name,
361 }
--> 363 return self.invoke(
364 inputs,
365 cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
366 return_only_outputs=return_only_outputs,
367 include_run_info=include_run_info,
368 )
File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:164, in Chain.invoke(self, input, config, **kwargs)
162 raise e
163 run_manager.on_chain_end(outputs)
--> 164 final_outputs: Dict[str, Any] = self.prep_outputs(
165 inputs, outputs, return_only_outputs
166 )
167 if include_run_info:
168 final_outputs[RUN_KEY] = RunInfo(run_id=run_manager.run_id)
File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:438, in Chain.prep_outputs(self, inputs, outputs, return_only_outputs)
420 def prep_outputs(
421 self,
422 inputs: Dict[str, str],
423 outputs: Dict[str, str],
424 return_only_outputs: bool = False,
425 ) -> Dict[str, str]:
426 """Validate and prepare chain outputs, and save info about this run to memory.
427
428 Args:
(...)
436 A dict of the final chain outputs.
437 """
--> 438 self._validate_outputs(outputs)
439 if self.memory is not None:
440 self.memory.save_context(inputs, outputs)
File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:269, in Chain._validate_outputs(self, outputs)
267 missing_keys = set(self.output_keys).difference(outputs)
268 if missing_keys:
--> 269 raise ValueError(f"Missing some output keys: {missing_keys}")
ValueError: Missing some output keys: {'text'}
### Description
I am trying to use a router chain to route human input to a sequential chain or a default llm.
Given the same prompt, the sequential chain worked fine, but the router chain returned `ValueError: Missing some output keys: {'text'}`. From the output I can see the error occurred after the chain finished, and I don't have any variable named "text". Can you take a look at this? Thank you!
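A hedged reading of the failure (inferred, not confirmed): `MultiPromptChain` validates its outputs against a single `text` key, while this `SequentialChain` emits `French_Review`, `summary`, and `followup_message`. One sketch is to map the final output into a `text` key before handing the chain to the router:

```python
from langchain.chains import SequentialChain, TransformChain

# Maps the sequential chain's final output into the `text` key that
# MultiPromptChain expects from every destination chain.
to_text = TransformChain(
    input_variables=["followup_message"],
    output_variables=["text"],
    transform=lambda inputs: {"text": inputs["followup_message"]},
)

analysis_chain = SequentialChain(
    chains=[sequential_chain, to_text],
    input_variables=["input"],
    output_variables=["text"],
)
# ...then use `analysis_chain` as the destination chain for "analysis".
```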
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Debian 5.10.209-2 (2024-01-31)
> Python Version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.0.87
> langchain_google_vertexai: 0.0.5
> langchainplus_sdk: 0.0.20
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | ValueError: Missing some output keys: {'text'} even if the chain finished. | https://api.github.com/repos/langchain-ai/langchain/issues/17816/comments | 6 | 2024-02-20T16:42:11Z | 2024-03-11T00:58:07Z | https://github.com/langchain-ai/langchain/issues/17816 | 2,144,815,632 | 17,816 |
[
"hwchase17",
"langchain"
I didn't understand; please provide the full code.
_Originally posted by @shraddhaa26 in https://github.com/langchain-ai/langchain/discussions/17801#discussioncomment-8530449_ | i didn't understand please provide me full code | https://api.github.com/repos/langchain-ai/langchain/issues/17804/comments | 1 | 2024-02-20T14:03:19Z | 2024-02-20T16:04:26Z | https://github.com/langchain-ai/langchain/issues/17804 | 2,144,455,799 | 17,804 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
llm = GoogleGenerativeAI(
    model="gemini-pro",
    temperature=0.3,
    max_output_tokens=2048,
)

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    chain_type="stuff",
    retriever=st.session_state.compression_retriever_reordered,
    verbose=True,
    combine_docs_chain_kwargs={"prompt": st.session_state.prompt},
    return_source_documents=True,
)

conversation = get_conversation_string(st.session_state.messages)
res = chain({"question": user_question, "chat_history": chat_history})
answer = res["answer"]
```
### Error Message and Stack Trace (if applicable)
IndexError: list index out of range
Traceback:
File "/home/user/.local/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
exec(code, module.__dict__)
File "/home/user/app/app.py", line 210, in <module>
res = chain({"question":user_question,"chat_history":chat_history})
File "/home/user/.local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 383, in __call__
return self.invoke(
File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 168, in invoke
raise e
File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 158, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 166, in _call
answer = self.combine_docs_chain.run(
File "/home/user/.local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 555, in run
return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/user/.local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 383, in __call__
return self.invoke(
File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 168, in invoke
raise e
File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 158, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py", line 136, in _call
output, extra_return_dict = self.combine_docs(
File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 244, in combine_docs
return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/llm.py", line 293, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
File "/home/user/.local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 383, in __call__
return self.invoke(
File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 168, in invoke
raise e
File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 158, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/llm.py", line 104, in _call
return self.create_outputs(response)[0]
File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/llm.py", line 258, in create_outputs
result = [
File "/home/user/.local/lib/python3.10/site-packages/langchain/chains/llm.py", line 261, in <listcomp>
self.output_key: self.output_parser.parse_result(generation),
File "/home/user/.local/lib/python3.10/site-packages/langchain_core/output_parsers/base.py", line 219, in parse_result
return self.parse(result[0].text)
### Description
I'm trying to use Gemini Pro, but I get `IndexError: list index out of range` when the chain parses the model's output.
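To isolate it, a minimal sketch along these lines (the model name and the `convert_system_message_to_human` flag are my assumptions) checks whether Gemini returns any candidates at all — the `IndexError` inside `parse_result` suggests an empty generations list, e.g. a safety-blocked response:
```python
from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI

chat = ChatGoogleGenerativeAI(
    model="gemini-pro",
    convert_system_message_to_human=True,  # Gemini rejects raw system messages
)
result = chat.generate([[HumanMessage(content="hello")]])
print(result.generations)  # an empty list here would explain the IndexError
```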
### System Info
python = 3.11
langchain-google-genai = 0.0.9
langchain = 0.1.5 | List Index out of range gemini | https://api.github.com/repos/langchain-ai/langchain/issues/17800/comments | 9 | 2024-02-20T13:15:32Z | 2024-07-29T16:06:37Z | https://github.com/langchain-ai/langchain/issues/17800 | 2,144,358,183 | 17,800 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
https://python.langchain.com/docs/modules/agents/quick_start#retriever
https://python.langchain.com/docs/use_cases/chatbots/retrieval#creating-a-retriever
https://python.langchain.com/docs/use_cases/chatbots/quickstart#retrievers
https://python.langchain.com/docs/get_started/quickstart#server
### Idea or request for content:
_No response_ | DOC: https://docs.smith.langchain.com/overview -> 404 | https://api.github.com/repos/langchain-ai/langchain/issues/17799/comments | 2 | 2024-02-20T13:04:28Z | 2024-05-31T23:52:14Z | https://github.com/langchain-ai/langchain/issues/17799 | 2,144,337,299 | 17,799 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The code I took comes from https://redis.com/blog/build-ecommerce-chatbot-with-redis/ blogpost.
```py
import json
from langchain.schema import BaseRetriever
from langchain.vectorstores import VectorStore
from langchain.schema import Document
from pydantic import BaseModel


class RedisProductRetriever(BaseRetriever, BaseModel):
    vectorstore: VectorStore

    class Config:
        arbitrary_types_allowed = True

    def combine_metadata(self, doc) -> str:
        metadata = doc.metadata
        return (
            "Item Name: " + metadata["item_name"] + ". " +
            "Item Description: " + metadata["bullet_point"] + ". " +
            "Item Keywords: " + metadata["item_keywords"] + "."
        )

    def get_relevant_documents(self, query):
        docs = []
        for doc in self.vectorstore.similarity_search(query):
            content = self.combine_metadata(doc)
            docs.append(Document(
                page_content=content,
                metadata=doc.metadata
            ))
        return docs
```
### Error Message and Stack Trace (if applicable)
On class definition it fails with:
```
TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
```
### Description
I'm searching for a way to create a custom retriever, but the instructions from the https://redis.com/blog/build-ecommerce-chatbot-with-redis/ blog post don't work.
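For reference, this is the pattern I believe current versions expect (untested sketch; it assumes `BaseRetriever` is already a pydantic model, so the extra `BaseModel` base — the apparent source of the metaclass conflict — can simply be dropped):
```python
from typing import List

from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever
from langchain_core.vectorstores import VectorStore


class RedisProductRetriever(BaseRetriever):
    vectorstore: VectorStore  # BaseRetriever's Config already allows arbitrary types

    def _get_relevant_documents(self, query: str) -> List[Document]:
        # Private hook invoked by BaseRetriever.get_relevant_documents()
        return self.vectorstore.similarity_search(query)
```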
### System Info
```
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.16
langchain-openai==0.0.5
```
Python 3.11.7
macOS | metaclass conflict error when trying to set up a custom retriever | https://api.github.com/repos/langchain-ai/langchain/issues/17796/comments | 2 | 2024-02-20T11:58:44Z | 2024-05-31T23:46:25Z | https://github.com/langchain-ai/langchain/issues/17796 | 2,144,212,734 | 17,796 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
The following line of code
```python
from langchain.agents import create_react_agent
```
raises the error reported below.
### Error Message and Stack Trace (if applicable)
```python
TypeError Traceback (most recent call last)
Cell In[6], line 1
----> 1 from langchain.agents import create_react_agent
File ~\anaconda3\envs\langchain_env_for_reviews\lib\site-packages\langchain\agents\__init__.py:34
31 from pathlib import Path
32 from typing import Any
---> 34 from langchain_community.agent_toolkits import (
35 create_json_agent,
36 create_openapi_agent,
37 create_pbi_agent,
38 create_pbi_chat_agent,
39 create_spark_sql_agent,
40 create_sql_agent,
41 )
42 from langchain_core._api.path import as_import_path
44 from langchain.agents.agent import (
45 Agent,
46 AgentExecutor,
(...)
50 LLMSingleActionAgent,
51 )
File ~\anaconda3\envs\langchain_env_for_reviews\lib\site-packages\langchain_community\agent_toolkits\__init__.py:46
44 from langchain_community.agent_toolkits.spark_sql.base import create_spark_sql_agent
45 from langchain_community.agent_toolkits.spark_sql.toolkit import SparkSQLToolkit
---> 46 from langchain_community.agent_toolkits.sql.base import create_sql_agent
47 from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
48 from langchain_community.agent_toolkits.steam.toolkit import SteamToolkit
File ~\anaconda3\envs\langchain_env_for_reviews\lib\site-packages\langchain_community\agent_toolkits\sql\base.py:29
19 from langchain_core.prompts.chat import (
20 ChatPromptTemplate,
21 HumanMessagePromptTemplate,
22 MessagesPlaceholder,
23 )
25 from langchain_community.agent_toolkits.sql.prompt import (
26 SQL_FUNCTIONS_SUFFIX,
27 SQL_PREFIX,
28 )
---> 29 from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
30 from langchain_community.tools.sql_database.tool import (
31 InfoSQLDatabaseTool,
32 ListSQLDatabaseTool,
33 )
35 if TYPE_CHECKING:
File ~\anaconda3\envs\langchain_env_for_reviews\lib\site-packages\langchain_community\agent_toolkits\sql\toolkit.py:9
7 from langchain_community.agent_toolkits.base import BaseToolkit
8 from langchain_community.tools import BaseTool
----> 9 from langchain_community.tools.sql_database.tool import (
10 InfoSQLDatabaseTool,
11 ListSQLDatabaseTool,
12 QuerySQLCheckerTool,
13 QuerySQLDataBaseTool,
14 )
15 from langchain_community.utilities.sql_database import SQLDatabase
18 class SQLDatabaseToolkit(BaseToolkit):
File ~\anaconda3\envs\langchain_env_for_reviews\lib\site-packages\langchain_community\tools\sql_database\tool.py:33
29 class _QuerySQLDataBaseToolInput(BaseModel):
30 query: str = Field(..., description="A detailed and correct SQL query.")
---> 33 class QuerySQLDataBaseTool(BaseSQLDatabaseTool, BaseTool):
34 """Tool for querying a SQL database."""
36 name: str = "sql_db_query"
File ~\anaconda3\envs\langchain_env_for_reviews\lib\site-packages\langchain_community\tools\sql_database\tool.py:47, in QuerySQLDataBaseTool()
36 name: str = "sql_db_query"
37 description: str = """
38 Execute a SQL query against the database and get back the result..
39 If the query is not correct, an error message will be returned.
40 If an error is returned, rewrite the query, check the query, and try again.
41 """
43 def _run(
44 self,
45 query: str,
46 run_manager: Optional[CallbackManagerForToolRun] = None,
---> 47 ) -> Union[str, Sequence[Dict[str, Any]], Result[Any]]:
48 """Execute the query, return the results or an error message."""
49 return self.db.run_no_throw(query)
TypeError: 'type' object is not subscriptable
```
### Description
I'm trying to import the `create_react_agent` function but encounter a `TypeError`.
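For triage: the annotation that fails is `Result[Any]` in `tool.py` line 47. A minimal sketch of what I believe is the same failure mode on Python 3.8 (the stand-in class below is hypothetical — it only mimics subscripting a class that is not generic):
```python
from typing import Any


class Result:  # hypothetical stand-in for a non-generic class
    pass


Result[Any]  # raises TypeError: 'type' object is not subscriptable
```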
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python version: 3.8.17
Package Information
-------------------
> langchain_core: 0.1.24
> langchain: 0.1.8
> langchain_community: 0.0.21
> langsmith: 0.1.3
> langchain_openai: 0.0.2
> langchainhub: 0.1.14 | Error when importing create_react_agent: TypeError: 'type' object is not subscriptable | https://api.github.com/repos/langchain-ai/langchain/issues/17786/comments | 4 | 2024-02-20T09:32:19Z | 2024-02-20T14:55:29Z | https://github.com/langchain-ai/langchain/issues/17786 | 2,143,940,690 | 17,786 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
I tried RAG using GraphIndexCreator instead of a conventional vector DB. The vector-DB process is to load the file, split it with RecursiveCharacterTextSplitter, then save it to a FAISS vector DB, so that it won't exceed the context limit of the LLM. But for the knowledge graph, i.e. GraphIndexCreator, I've used the code below:
```
from langchain.indexes import GraphIndexCreator
from langchain_openai import OpenAI

# The snippet as posted never defines index_creator; it was presumably created like this:
index_creator = GraphIndexCreator(llm=OpenAI(temperature=0))

with open("/content/1.txt") as f:
    all_text = f.read()
text = all_text
graph = index_creator.from_text(text)
```
It returns the error below — the request exceeds the model's context limit:
```
BadRequestError                           Traceback (most recent call last)
<ipython-input-12-e007a66c39f1> in <cell line: 1>()
----> 1 graph = index_creator.from_text(text)

18 frames
/usr/local/lib/python3.10/dist-packages/openai/_base_client.py in _request(self, cast_to, options, remaining_retries, stream, stream_cls)
    978
    979             log.debug("Re-raising status error")
--> 980             raise self._make_status_error_from_response(err.response) from None
    981
    982         return self._process_response(

BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 4097 tokens, however you requested 17145 tokens (16889 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
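One workaround I am considering (untested sketch; it assumes `GraphIndexCreator.from_text` can simply be called per chunk and the resulting triples merged):
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=100)
triples = []
for chunk in splitter.split_text(all_text):
    # build a small graph per chunk, then collect its triples
    triples.extend(index_creator.from_text(chunk).get_triples())
```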
How to overcome this issue? | BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 4097 tokens when using GraphIndexCreator | https://api.github.com/repos/langchain-ai/langchain/issues/17783/comments | 3 | 2024-02-20T09:09:23Z | 2024-02-20T15:42:37Z | https://github.com/langchain-ai/langchain/issues/17783 | 2,143,893,557 | 17,783 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
from pymongo.mongo_client import MongoClient
from pymongo.server_api import ServerApi
from langchain_openai.embeddings import AzureOpenAIEmbeddings
from langchain.schema import Document
from langchain.vectorstores.mongodb_atlas import MongoDBAtlasVectorSearch
import os

os.environ["OPENAI_API_KEY"] = "asd"
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"

ATLAS_CONNECTION_STRING = "blabla"
COLLECTION_NAME = "documents"
DB_NAME = "FraDev"

embeddings = AzureOpenAIEmbeddings(
    deployment="text-embedding-ada-002",
    chunk_size=1,  # Azure only accepts a chunk size of 1 here
    azure_endpoint="asd",
)

# Create a new client and connect to the server
client = MongoClient(ATLAS_CONNECTION_STRING, server_api=ServerApi("1"))
collection = client["FraDev"][COLLECTION_NAME]


def create_vector_search():
    """
    Creates a MongoDBAtlasVectorSearch object using the connection string, database, and collection names, along with the OpenAI embeddings and index configuration.

    :return: MongoDBAtlasVectorSearch object
    """
    vector_search = MongoDBAtlasVectorSearch.from_connection_string(
        ATLAS_CONNECTION_STRING,
        f"{DB_NAME}.{COLLECTION_NAME}",
        embeddings,
        index_name="default",
    )
    return vector_search


docs = [
    Document(page_content="foo", metadata={"id": 123, "file": {"name": "test.txt"}})
]

vector_search = MongoDBAtlasVectorSearch.from_documents(
    documents=docs,
    embedding=embeddings,
    collection=collection,
    index_name="default4",  # Use a predefined index name
)

vector_search = create_vector_search()

results = vector_search.similarity_search_with_score(
    query="foo", k=1, pre_filter={"name": {"$eq": "test.txt"}}
)
print(results)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
It should add all the metadata into a `metadata` object inside the document stored in Mongo, but it doesn't:

Thanks a lot
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:33:31 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T8112
> Python Version: 3.11.6 (main, Nov 2 2023, 04:39:43) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.0.87
> langchain_openai: 0.0.6
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | MongoAtlas doesn't add metadata in metadata | https://api.github.com/repos/langchain-ai/langchain/issues/17782/comments | 1 | 2024-02-20T09:08:18Z | 2024-05-31T23:46:27Z | https://github.com/langchain-ai/langchain/issues/17782 | 2,143,890,706 | 17,782 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain_community.llms import HuggingFaceEndpoint
from langchain.chat_models import ChatHuggingFace

llm = HuggingFaceEndpoint(
    repo_id="HuggingFaceH4/zephyr-7b-beta",
)
agent = ChatHuggingFace(llm=llm)
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/home/ubuntu/benchmark_agents/test_prompt_optimizer.ipynb Cell 11, line 9
      2 from langchain.chat_models import ChatHuggingFace
      4 llm = HuggingFaceEndpoint(
      5     repo_id="HuggingFaceH4/zephyr-7b-beta",
      6     max_new_tokens=100,
      7 )
----> 9 agent = ChatHuggingFace(llm=llm)

File ~/langchain_fuse_hf_endpoints/libs/community/langchain_community/chat_models/huggingface.py:55, in ChatHuggingFace.__init__(self, **kwargs)
     51 super().__init__(**kwargs)
     53 from transformers import AutoTokenizer
---> 55 self._resolve_model_id()
     57 self.tokenizer = (
     58     AutoTokenizer.from_pretrained(self.model_id)
     59     if self.tokenizer is None
     60     else self.tokenizer
     61 )

File ~/langchain_fuse_hf_endpoints/libs/community/langchain_community/chat_models/huggingface.py:155, in ChatHuggingFace._resolve_model_id(self)
    152         self.model_id = endpoint.repository
    154 if not self.model_id:
--> 155     raise ValueError(
    156         "Failed to resolve model_id:"
    157         f"Could not find model id for inference server: {endpoint_url}"
    158         "Make sure that your Hugging Face token has access to the endpoint."
    159     )

ValueError: Failed to resolve model_id:Could not find model id for inference server: Make sure that your Hugging Face token has access to the endpoint.
### Description
The `model_id` cannot be resolved in [_resolve_model_id](https://github.com/langchain-ai/langchain/blob/865cabff052fe74996bef45faaf00df6f322c215/libs/community/langchain_community/chat_models/huggingface.py#L134), because the `self.llm` attribute of the ChatHuggingFace object is incorrectly identified as a `HuggingFaceTextGenInference`.
But if we switch the order of [this type hint](https://github.com/langchain-ai/langchain/blob/865cabff052fe74996bef45faaf00df6f322c215/libs/community/langchain_community/chat_models/huggingface.py#L45) to `Union[HuggingFaceEndpoint, HuggingFaceTextGenInference, HuggingFaceHub]`, the llm type is detected correctly again. Relying on the ordering of a `Union` type hint for runtime behavior seems extremely fragile.
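As a stopgap, passing the model id explicitly appears to avoid the failing endpoint lookup (sketch — I haven't verified that an explicit `model_id` bypasses `_resolve_model_id` on every version):
```python
agent = ChatHuggingFace(
    llm=llm,
    model_id="HuggingFaceH4/zephyr-7b-beta",  # pre-set, so the endpoint lookup no longer raises
)
```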
### System Info
langchain==0.1.8
langchain-benchmarks==0.0.10
langchain-community==0.0.21
langchain-core==0.1.24
langchainhub==0.1.14 | Cannot resolve model_id on ChatHuggingFace, depending on the order of type hints | https://api.github.com/repos/langchain-ai/langchain/issues/17780/comments | 7 | 2024-02-20T09:04:44Z | 2024-03-12T15:36:47Z | https://github.com/langchain-ai/langchain/issues/17780 | 2,143,880,914 | 17,780 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
import os
from langchain_community.llms import HuggingFaceTextGenInference

ENDPOINT_URL = "<YOUR_ENDPOINT_URL_HERE>"
HF_TOKEN = os.getenv("HUGGINGFACEHUB_API_TOKEN")

llm = HuggingFaceTextGenInference(
    inference_server_url=ENDPOINT_URL,
    max_new_tokens=512,
    top_k=50,
    temperature=0.1,
    repetition_penalty=1.03,
    server_kwargs={
        "headers": {
            "Authorization": f"Bearer {HF_TOKEN}",
            "Content-Type": "application/json",
        }
    },
)
```
### Error Message and Stack Trace (if applicable)
```
File "/app/test_lang.py", line 36, in
chat_model = ChatHuggingFace(llm=llm)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_community/chat_models/huggingface.py", line 54, in init
self._resolve_model_id()
File "/usr/local/lib/python3.11/site-packages/langchain_community/chat_models/huggingface.py", line 158, in _resolve_model_id
raise ValueError(
ValueError: Failed to resolve model_id Could not find model id for inference server provided: http://xx.xx.xx.xxx/
Make sure that your Hugging Face token has access to the endpoint.
```
### Description
I have hosted text-generation-inference on a separate instance, and I am trying to call it from a LangChain server hosted on another machine, but I get this error.
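A possible workaround sketch — give `ChatHuggingFace` the model id directly so it doesn't have to resolve it from the inference server (the model name below is a placeholder for whatever TGI is serving; I haven't confirmed this on 0.1.7):
```python
from langchain_community.chat_models import ChatHuggingFace

chat_model = ChatHuggingFace(
    llm=llm,
    model_id="HuggingFaceH4/zephyr-7b-beta",  # placeholder: the model TGI actually serves
)
```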
### System Info
```
langchain==0.1.7
langchain-community==0.0.20
langchain-core==0.1.23
``` | ValueError: Failed to resolve model_id when calling text-generation-inference service from Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/17779/comments | 11 | 2024-02-20T08:45:14Z | 2024-08-04T16:06:35Z | https://github.com/langchain-ai/langchain/issues/17779 | 2,143,831,835 | 17,779 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
session = boto3.Session()
credentials = session.get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, 'es', session_token=credentials.token)

es_client = Elasticsearch(
    hosts=[{'host': 'aws Es url ', 'port': 443, 'scheme': 'https'}],
    http_auth=awsauth,
    verify_certs=True,
    use_ssl=True
)
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[79], line 1
----> 1 es_client = Elasticsearch(
2 hosts=[{'host': 'vpc-builddemo-dik6sgzdbwneff6v3ophihop24.us-east-1.es.amazonaws.com', 'port': 443, 'scheme': 'https'}],
3 http_auth=awsauth,
4 verify_certs=True,
5 use_ssl=True
6 )
TypeError: Elasticsearch.__init__() got an unexpected keyword argument 'use_ssl'
### Description
1. I am trying to connect to Elasticsearch using my AWS credentials, but I cannot connect; it throws the error `TypeError: Elasticsearch.__init__() got an unexpected keyword argument 'use_ssl'`.
2. If I remove `use_ssl`, it then fails with `ValueError: Using a custom 'requests.auth.AuthBase' class for 'http_auth' must be used with node_class='requests'`.
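Since the target is an Amazon OpenSearch domain rather than Elastic Cloud, a sketch using the `opensearch-py` client instead may sidestep both errors (untested against this exact domain; the host is a placeholder, `awsauth` is reused from above):
```python
from opensearchpy import OpenSearch, RequestsHttpConnection

os_client = OpenSearch(
    hosts=[{'host': '<aws-opensearch-domain>', 'port': 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)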
### System Info
from langchain_community.vectorstores import ElasticsearchStore
from langchain.schema import Document | unable to connect opensearch using Elasticsearch libary | https://api.github.com/repos/langchain-ai/langchain/issues/17777/comments | 1 | 2024-02-20T07:40:37Z | 2024-05-31T23:48:54Z | https://github.com/langchain-ai/langchain/issues/17777 | 2,143,726,236 | 17,777 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
File "/home/house365ai/xxm/langchain/examples/demo.py", line 42, in <module>
docsearch = Chroma.from_documents(texts, embedding=embedding)
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 778, in from_documents
return cls.from_texts(
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 736, in from_texts
chroma_collection.add_texts(
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 275, in add_texts
embeddings = self._embedding_function.embed_documents(texts)
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/embeddings/localai.py", line 302, in embed_documents
return [self._embedding_func(text, engine=self.deployment) for text in texts]
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/embeddings/localai.py", line 302, in <listcomp>
return [self._embedding_func(text, engine=self.deployment) for text in texts]
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/embeddings/localai.py", line 267, in _embedding_func
return embed_with_retry(
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/embeddings/localai.py", line 98, in embed_with_retry
retry_decorator = _create_retry_decorator(embeddings)
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/embeddings/localai.py", line 45, in _create_retry_decorator
retry_if_exception_type(openai.error.Timeout)
AttributeError: module 'openai' has no attribute 'error'
### Error Message and Stack Trace (if applicable)
The relevant traceback is the one pasted above under Example Code; it ends with `AttributeError: module 'openai' has no attribute 'error'`.
### Description
File "/home/house365ai/xxm/langchain/examples/demo.py", line 42, in <module>
docsearch = Chroma.from_documents(texts, embedding=embedding)
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 778, in from_documents
return cls.from_texts(
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 736, in from_texts
chroma_collection.add_texts(
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 275, in add_texts
embeddings = self._embedding_function.embed_documents(texts)
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/embeddings/localai.py", line 302, in embed_documents
return [self._embedding_func(text, engine=self.deployment) for text in texts]
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/embeddings/localai.py", line 302, in <listcomp>
return [self._embedding_func(text, engine=self.deployment) for text in texts]
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/embeddings/localai.py", line 267, in _embedding_func
return embed_with_retry(
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/embeddings/localai.py", line 98, in embed_with_retry
retry_decorator = _create_retry_decorator(embeddings)
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/embeddings/localai.py", line 45, in _create_retry_decorator
retry_if_exception_type(openai.error.Timeout)
AttributeError: module 'openai' has no attribute 'error'
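A quick check sketch (my assumption: the incompatibility is the openai>=1.0 SDK, which removed `openai.error`; pinning below 1.0 is a guess, not verified against every LocalAI feature):
```python
import openai

if not hasattr(openai, "error"):
    # openai>=1.0 removed the openai.error module that localai.py still imports;
    # downgrading (e.g. pip install "openai<1.0") restores it
    raise RuntimeError("openai SDK is >=1.0; LocalAIEmbeddings needs openai<1.0")
```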
### System Info
File "/home/house365ai/xxm/langchain/examples/demo.py", line 42, in <module>
docsearch = Chroma.from_documents(texts, embedding=embedding)
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 778, in from_documents
return cls.from_texts(
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 736, in from_texts
chroma_collection.add_texts(
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 275, in add_texts
embeddings = self._embedding_function.embed_documents(texts)
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/embeddings/localai.py", line 302, in embed_documents
return [self._embedding_func(text, engine=self.deployment) for text in texts]
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/embeddings/localai.py", line 302, in <listcomp>
return [self._embedding_func(text, engine=self.deployment) for text in texts]
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/embeddings/localai.py", line 267, in _embedding_func
return embed_with_retry(
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/embeddings/localai.py", line 98, in embed_with_retry
retry_decorator = _create_retry_decorator(embeddings)
File "/home/house365ai/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_community/embeddings/localai.py", line 45, in _create_retry_decorator
retry_if_exception_type(openai.error.Timeout)
AttributeError: module 'openai' has no attribute 'error'
| module 'openai' has no attribute 'error' | https://api.github.com/repos/langchain-ai/langchain/issues/17775/comments | 3 | 2024-02-20T06:37:04Z | 2024-06-03T01:17:06Z | https://github.com/langchain-ai/langchain/issues/17775 | 2,143,643,123 | 17,775 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
a
### Error Message and Stack Trace (if applicable)
a
### Description
Is it possible to create a SQL agent that runs queries against Google BigQuery on the latest versions of LangChain? It was possible on older versions.
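For concreteness, this is roughly what I would expect to work — a sketch assuming the `sqlalchemy-bigquery` dialect is installed, with placeholder project/dataset names:
```python
from langchain_community.agent_toolkits import create_sql_agent
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI

# BigQuery is exposed to SQLAlchemy through the sqlalchemy-bigquery dialect
db = SQLDatabase.from_uri("bigquery://my-project/my_dataset")

agent = create_sql_agent(
    llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
    db=db,
    verbose=True,
)
agent.invoke({"input": "How many rows does the events table have?"})
```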
### System Info
a | SQL Agent for Google Big Query | https://api.github.com/repos/langchain-ai/langchain/issues/17762/comments | 6 | 2024-02-19T20:47:16Z | 2024-07-24T07:46:13Z | https://github.com/langchain-ai/langchain/issues/17762 | 2,143,121,768 | 17,762 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Sometimes I come across simple issues in the documentation, like [this one](https://github.com/langchain-ai/langchain/issues/17758), that I would like to report. I find going to the GitHub page and filling out an entire issue to be too much effort for these small bugs, so I typically just move on and ignore them.
### Idea or request for content:
I think it would be nice to have a simple button somewhere on the documentation page to submit simple issues. Personally an easier way to report these kinds of things would give me more of an incentive to do it | Simple submit feedback option in documentation | https://api.github.com/repos/langchain-ai/langchain/issues/17759/comments | 1 | 2024-02-19T20:37:48Z | 2024-05-31T23:46:24Z | https://github.com/langchain-ai/langchain/issues/17759 | 2,143,109,902 | 17,759 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The parser links in [here](https://python.langchain.com/docs/modules/model_io/output_parsers#output-parser-types) are no longer valid. I see their URLs got moved, but the links in here have not been updated
### Idea or request for content:
_No response_ | Output parser links broken | https://api.github.com/repos/langchain-ai/langchain/issues/17758/comments | 1 | 2024-02-19T20:29:53Z | 2024-05-31T23:46:24Z | https://github.com/langchain-ai/langchain/issues/17758 | 2,143,100,260 | 17,758 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
When trying to follow the links to specific guides about prompt usage on [this docs page](https://python.langchain.com/docs/modules/model_io/prompts), all of the links were broken. Here is the relevant part of the documents:
<img width="537" alt="image" src="https://github.com/langchain-ai/langchain/assets/3274/abca83c0-47d5-45d8-8224-26abab4712f7">
### Idea or request for content:
_No response_ | DOC: Broken links on Prompts docs page: links to all How-To Guides are broken | https://api.github.com/repos/langchain-ai/langchain/issues/17753/comments | 2 | 2024-02-19T19:22:16Z | 2024-06-13T19:46:23Z | https://github.com/langchain-ai/langchain/issues/17753 | 2,143,011,204 | 17,753 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
-
### Idea or request for content:
Below is the code:
```
examples = [
    {
        "data": """
cricket is going to be crucial these days,
""",
        "question": "how's cricket going to be?",
        "answer": "yes",
        "reason": """
cricket in this country is good
""",
    }
]

# Define your prompt template
prompt_template = """This template is designed to streamline the process of responding to user queries by focusing on delivering concise and direct answers. Follow these steps for effective use:
Carefully read the user's question to fully grasp the nature of the inquiry.
Review the provided context, if any, to gather relevant information.
Based on the understanding of the question and the context, determine the most appropriate answer.
Respond with a simple 'Yes' or 'No', ensuring clarity and precision in addressing the user's query.
In cases where no context is provided, abstain from giving an answer.
Ensure your response is structured as follows for clarity:
User Question: {question} (Repeat the user's question here.)
Direct Answer: (Provide a straightforward 'Yes' or 'No' based on the query.)
This approach ensures that responses are not only relevant and to the point but also structured in a way that is easy for users to understand."""

example_prompt = PromptTemplate(
    input_variables=["data", "question", "answer", "reason"], template=prompt_template
)

from langchain.prompts.few_shot import FewShotPromptTemplate

prompt_template = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    suffix="Question: {input}",
    input_variables=["input"],
)
```
Is the above the correct way of using `examples`, `example_prompt`, and `input` in FewShotPromptTemplate? If not, what's the right way to use it? Can you help me with the code? If the above code is wrong, please return the corrected code | how to add examples and example_prompt to fewshot? | https://api.github.com/repos/langchain-ai/langchain/issues/17741/comments | 1 | 2024-02-19T16:22:52Z | 2024-02-20T02:30:13Z | https://github.com/langchain-ai/langchain/issues/17741 | 2,142,745,040 | 17,741
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
below are the examples
```
examples = [
    {
        "data": """
cricket is going to be crucial these days,
""",
        "question": "how's cricket going to be?",
        "answer": "yes",
        "reason": """
cricket in this country is good
""",
    }
]

# Define your prompt template
prompt_template = """This template is designed to streamline the process of responding to user queries by focusing on delivering concise and direct answers. Follow these steps for effective use:
Carefully read the user's question to fully grasp the nature of the inquiry.
Review the provided context, if any, to gather relevant information.
Based on the understanding of the question and the context, determine the most appropriate answer.
Respond with a simple 'Yes' or 'No', ensuring clarity and precision in addressing the user's query.
In cases where no context is provided, abstain from giving an answer.
Ensure your response is structured as follows for clarity:
User Question: {question} (Repeat the user's question here.)
Direct Answer: (Provide a straightforward 'Yes' or 'No' based on the query.)
This approach ensures that responses are not only relevant and to the point but also structured in a way that is easy for users to understand."""

example_prompt = PromptTemplate(
    input_variables=["data", "question", "answer", "reason"], template=prompt_template
)

from langchain.prompts.few_shot import FewShotPromptTemplate

prompt_template = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    suffix="Question: {input}",
    input_variables=["input"],
)
```
Is the above the correct way of using `examples`, `example_prompt`, and `input` in FewShotPromptTemplate? If not, what's the right way to use it? Can you help me with the code?
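For comparison, here is a sketch of how I currently understand the pieces fit together — the `example_prompt` template has to reference the keys of each example dict (the template wording below is my own):
```python
from langchain.prompts import PromptTemplate
from langchain.prompts.few_shot import FewShotPromptTemplate

# Formats one example; every input variable appears in the template.
example_prompt = PromptTemplate(
    input_variables=["data", "question", "answer", "reason"],
    template="Context: {data}\nQuestion: {question}\nAnswer: {answer}\nReason: {reason}",
)

few_shot = FewShotPromptTemplate(
    examples=examples,  # the list defined above
    example_prompt=example_prompt,
    suffix="Question: {input}",
    input_variables=["input"],
)

print(few_shot.format(input="how's cricket going to be?"))
```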
### Idea or request for content:
_No response_ | how to add examples and example_prompt to fewshort? | https://api.github.com/repos/langchain-ai/langchain/issues/17737/comments | 3 | 2024-02-19T15:35:41Z | 2024-02-20T02:30:12Z | https://github.com/langchain-ai/langchain/issues/17737 | 2,142,649,529 | 17,737 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
docsearch_in_os = OpenSearchVectorSearch(
    opensearch_url=os.environ.get("OPENSEARCH_URL"),
    index_name=index_name,
    embedding_function=bedrock_embeddings,
    http_auth=auth,
    timeout=200,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
    is_aoss=True,
)
retriever = docsearch_in_os.as_retriever()

chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    chain_type_kwargs={"prompt": PROMPT},
)
result = chain.invoke({"query": user_input})
```
### Error Message and Stack Trace (if applicable)
result = chain.invoke({"query": user_input})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/langchain/chains/base.py", line 162, in invoke
raise e
File "/opt/python/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/opt/python/langchain/chains/retrieval_qa/base.py", line 141, in _call
docs = self._get_docs(question, run_manager=_run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/langchain/chains/retrieval_qa/base.py", line 221, in _get_docs
return self.retriever.get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/langchain_core/retrievers.py", line 224, in get_relevant_documents
raise e
File "/opt/python/langchain_core/retrievers.py", line 217, in get_relevant_documents
result = self._get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/langchain_core/vectorstores.py", line 654, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/langchain_community/vectorstores/opensearch_vector_search.py", line 516, in similarity_search
docs_with_scores = self.similarity_search_with_score(query, k, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/langchain_community/vectorstores/opensearch_vector_search.py", line 543, in similarity_search_with_score
documents_with_scores = [
^
File "/opt/python/langchain_community/vectorstores/opensearch_vector_search.py", line 545, in <listcomp>
Document(
File "/opt/python/langchain_core/documents/base.py", line 22, in __init__
super().__init__(page_content=page_content, **kwargs)
File "/opt/python/langchain_core/load/serializable.py", line 107, in __init__
super().__init__(**kwargs)
File "/opt/python/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for Document
metadata
value is not a valid dict (type=type_error.dict)
1 validation error for Document
metadata
value is not a valid dict (type=type_error.dict)
'ValidationError' object is not subscriptable
### Description
I am trying to implement RAG using langchain. The above code works perfectly, when the collection I use is created using langchain `from_documents` function. But when I create the OpenSearch collection using the AWS Bedrock console (from the "Create Knowledge base"), the above code fails and throws the error I have shared.
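My working theory (an assumption, not verified): indexes created from the Bedrock knowledge-base console use their own field names, so the store reads a non-dict value as `metadata`. A sketch pointing the search at those fields explicitly — the field names below are my guess, so check the actual index mapping:
```python
retriever = docsearch_in_os.as_retriever(
    search_kwargs={
        "text_field": "AMAZON_BEDROCK_TEXT_CHUNK",    # guessed Bedrock text field
        "metadata_field": "AMAZON_BEDROCK_METADATA",  # guessed Bedrock metadata field
    }
)
```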
### System Info
I am running this on an AWS lambda function, x86_64 architecture. | value is not a valid dict (type=type_error.dict) | https://api.github.com/repos/langchain-ai/langchain/issues/17736/comments | 2 | 2024-02-19T14:32:34Z | 2024-06-08T16:10:15Z | https://github.com/langchain-ai/langchain/issues/17736 | 2,142,507,312 | 17,736 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
I followed this [langchain examples notebook](https://github.com/langchain-ai/langgraph/blob/main/examples/multi_agent/agent_supervisor.ipynb) to the letter and just changed the llm initialization from
```
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model=<model_name>)
```
to
```
from langchain_community.chat_models import AzureChatOpenAI

llm = AzureChatOpenAI(
    model_name=<model_name>,
    deployment_name=<deployment_name>,
    temperature=0,
    verbose=True,
    api_key=<api_key>,
    azure_endpoint=<api_base>,
)
```
I get the error when invoking the graph:
```
for s in graph.stream(
    {
        "messages": [
            HumanMessage(content="Code hello world and print it to the terminal")
        ]
    }
):
    if "__end__" not in s:
        print(s)
        print("----")
```
It works fine when using `ChatOpenAI` but returns an error when using `AzureChatOpenAI`
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
NotFoundError                             Traceback (most recent call last)
Cell In[10], line 1
----> 1 for s in graph.stream(
      2     {
      3         "messages": [
      4             HumanMessage(content="Code hello world and print it to the terminal")
      5         ]
      6     }
      7 ):
      8     if "__end__" not in s:
      9         print(s)

File ~/exp-venv/lib/python3.10/site-packages/langgraph/pregel/__init__.py:615, in Pregel.transform(self, input, config, output_keys, input_keys, **kwargs)
--> 615 for chunk in self._transform_stream_with_config(
    616     input,
    617     self._transform,
    618     config,
    619     output_keys=output_keys,
    620     input_keys=input_keys,
    621     **kwargs,
    622 ):
    623     yield chunk

File ~/exp-venv/lib/python3.10/site-packages/langchain_core/runnables/base.py:1497, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
-> 1497 chunk: Output = context.run(next, iterator)  # type: ignore

File ~/exp-venv/lib/python3.10/site-packages/langgraph/pregel/__init__.py:355, in Pregel._transform(self, input, run_manager, config, input_keys, output_keys, interrupt)
    348 done, inflight = concurrent.futures.wait(
    349     futures,
    350     return_when=concurrent.futures.FIRST_EXCEPTION,
    351     timeout=self.step_timeout,
    352 )
    354 # interrupt on failure or timeout
--> 355 _interrupt_or_proceed(done, inflight, step)
    357 # apply writes to channels
    358 _apply_writes(
    359     checkpoint, channels, pending_writes, config, step + 1
    360 )

File ~/exp-venv/lib/python3.10/site-packages/langgraph/pregel/__init__.py:698, in _interrupt_or_proceed(done, inflight, step)
    696     inflight.pop().cancel()
    697 # raise the exception
--> 698 raise exc

File /usr/lib/python3.10/concurrent/futures/thread.py:58, in _WorkItem.run(self)
     57 try:
---> 58     result = self.fn(*self.args, **self.kwargs)
     59 except BaseException as exc:
     60     self.future.set_exception(exc)

File ~/exp-venv/lib/python3.10/site-packages/langchain_core/runnables/base.py:4064, in RunnableBindingBase.invoke(self, input, config, **kwargs)
-> 4064 return self.bound.invoke(
   4065     input,
   4066     self._merge_configs(config),
   4067     **{**self.kwargs, **kwargs},
   4068 )

File ~/exp-venv/lib/python3.10/site-packages/langchain_core/runnables/base.py:2053, in RunnableSequence.invoke(self, input, config)
   2051 try:
   2052     for i, step in enumerate(self.steps):
-> 2053         input = step.invoke(
   2054             input,
   2055             # mark each step as a child run
   2056             patch_config(
   2057                 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
   2058             ),
   2059         )

File ~/exp-venv/lib/python3.10/site-packages/langchain_core/runnables/base.py:4064, in RunnableBindingBase.invoke(self, input, config, **kwargs)
-> 4064 return self.bound.invoke(
   4065     input,
   4066     self._merge_configs(config),
   4067     **{**self.kwargs, **kwargs},
   4068 )

File ~/exp-venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:166, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    163 config = ensure_config(config)
    164 return cast(
    165     ChatGeneration,
--> 166     self.generate_prompt(
    167         [self._convert_input(input)],
    168         stop=stop,
    169         callbacks=config.get("callbacks"),
    170         tags=config.get("tags"),
    171         metadata=config.get("metadata"),
    172         run_name=config.get("run_name"),
    173         **kwargs,
    174     ).generations[0][0],
    175 ).message

File ~/exp-venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:544, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    543 prompt_messages = [p.to_messages() for p in prompts]
--> 544 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File ~/exp-venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:408, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    406 if run_managers:
    407     run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 408 raise e

File ~/exp-venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:398, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    396 try:
    397     results.append(
--> 398         self._generate_with_cache(
    399             m,
    400             stop=stop,
    401             run_manager=run_managers[i] if run_managers else None,
    402             **kwargs,
    403         )
    404     )

File ~/exp-venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:577, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    576 if new_arg_supported:
--> 577     return self._generate(
    578         messages, stop=stop, run_manager=run_manager, **kwargs
    579     )

File ~/exp-venv/lib/python3.10/site-packages/langchain_community/chat_models/openai.py:439, in ChatOpenAI._generate(self, messages, stop, run_manager, stream, **kwargs)
    433 message_dicts, params = self._create_message_dicts(messages, stop)
--> 439 response = self.completion_with_retry(
    440     messages=message_dicts, run_manager=run_manager, **params
    441 )
    442 return self._create_chat_result(response)

File ~/exp-venv/lib/python3.10/site-packages/langchain_community/chat_models/openai.py:356, in ChatOpenAI.completion_with_retry(self, run_manager, **kwargs)
    355 if is_openai_v1():
--> 356     return self.client.create(**kwargs)

File ~/exp-venv/lib/python3.10/site-packages/openai/_utils/_utils.py:275, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
--> 275 return func(*args, **kwargs)

File ~/exp-venv/lib/python3.10/site-packages/openai/resources/chat/completions.py:663, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
--> 663 return self._post(
    664     "/chat/completions",
    665     body=maybe_transform(
    666         {
    667             "messages": messages,
    668             "model": model,
    669             "frequency_penalty": frequency_penalty,
    670             "function_call": function_call,
    671             "functions": functions,
    ...
    687         },
    688         completion_create_params.CompletionCreateParams,
    689     ),
    690     options=make_request_options(
    691         extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
    692     ),
    693     cast_to=ChatCompletion,
    694     stream=stream or False,
    695     stream_cls=Stream[ChatCompletionChunk],
    696 )

File ~/exp-venv/lib/python3.10/site-packages/openai/_base_client.py:1200, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
   1197 opts = FinalRequestOptions.construct(
   1198     method="post", url=path, json_data=body, files=to_httpx_files(files), **options
   1199 )
-> 1200 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File ~/exp-venv/lib/python3.10/site-packages/openai/_base_client.py:889, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
--> 889 return self._request(
    890     cast_to=cast_to,
    891     options=options,
    892     stream=stream,
    893     stream_cls=stream_cls,
    894     remaining_retries=remaining_retries,
    895 )

File ~/exp-venv/lib/python3.10/site-packages/openai/_base_client.py:980, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
    977     err.response.read()
    979 log.debug("Re-raising status error")
--> 980 raise self._make_status_error_from_response(err.response) from None

NotFoundError: Error code: 404 - {'error': {'message': 'Unrecognized request argument supplied: functions', 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
### Description
I am trying to use LangChain agents (specifically the LangGraph implementation in this example notebook: https://github.com/langchain-ai/langgraph/blob/main/examples/multi_agent/agent_supervisor.ipynb).
The code works fine when using `ChatOpenAI` but fails when using `AzureChatOpenAI`.
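One thing I want to rule out (an assumption on my part): the 404 "Unrecognized request argument supplied: functions" usually points at an Azure `api-version` that predates function calling. A sketch pinning a newer version, with the same placeholder values as above:
```python
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="<deployment_name>",
    azure_endpoint="<api_base>",
    api_key="<api_key>",
    api_version="2023-07-01-preview",  # first api-version that accepts `functions`
    temperature=0,
)
```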
### System Info
```
langchain==0.1.6
langchain-community==0.0.19
langchain-core==0.1.22
langchain-experimental==0.0.50
langchain-openai==0.0.5
langchainhub==0.1.14
``` | Agents returning an error when using AzureChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/17735/comments | 1 | 2024-02-19T13:56:33Z | 2024-05-31T23:46:31Z | https://github.com/langchain-ai/langchain/issues/17735 | 2,142,433,760 | 17,735 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Let's say I have a spreadsheet with 30 rows and need to find specific answers for each one. Typically, the RetrievalQAChain method relies on a retriever to select the top-k results, which can overlook details in some rows. I'm looking to circumvent the retriever step by directly embedding the data, saving it into a vector store, and then extracting answers using the RetrievalQAChain. This approach aims to replicate the benefits of the RAG (Retrieval-Augmented Generation) model without missing out on any information due to the limitations of the retriever. How can this be achieved?
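One pattern I'm considering — skipping retrieval and calling a QA chain once per row document — is sketched here (it uses the classic `load_qa_chain` API; `documents` is the per-row list built in the code that follows):
```python
from langchain.chains.question_answering import load_qa_chain
from langchain_openai import OpenAI

chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")

answers = [
    chain.run(input_documents=[doc], question="what for it strives?")
    for doc in documents  # one Document per spreadsheet row
]
```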
### Idea or request for content:
Below is the code
```
# Iterate over the sorted file paths and create a loader for each file
loaders = [CSVLoader(file_path=file_path, metadata_columns=['cricket'], encoding="utf-8") for file_path in csv_files_sorted]

# Optional: If you need to combine the data from all loaders
documents = []
for loader in loaders:
    data = loader.load()
    documents.extend(data)

# Instantiate the OpenAIEmbeddings class
openai = OpenAIEmbeddings()

# Create a FAISS vector store from the embeddings
vectorstore = FAISS.from_documents(documents, openai)
retriever = vectorstore.as_retriever(search_kwargs={"k": 5})

# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""

# Answer a question related to 'Cricket'
category = 'engie'
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
                                       chain_type="stuff",
                                       retriever=retriever,
                                       return_source_documents=True)

# Format the prompt using the template
context = ""
question = "what for it strives?"
formatted_prompt = prompt_template.format(context=context, question=question)

# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
Can you help me with the code? | how to achieve all the answers for all the rows present in the excel using LLM? | https://api.github.com/repos/langchain-ai/langchain/issues/17731/comments | 1 | 2024-02-19T12:45:12Z | 2024-02-20T02:30:12Z | https://github.com/langchain-ai/langchain/issues/17731 | 2,142,291,204 | 17,731 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Let's say I have an Excel file containing 30 rows, and I need to find answers for each row individually. When using the RetrievalQAChain approach, the retriever typically selects only the top-k results, potentially missing information from other rows. To address this, I'd like to bypass the retriever by uploading the Excel data into a vector store and directly query the Large Language Model (LLM) to obtain answers for each of the 30 rows. How can this be accomplished?
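One pragmatic workaround (a sketch, assuming the vector store holds exactly the row-chunks loaded in the code below): keep the retriever but raise `k` to the total number of chunks, so the top-k cutoff can no longer miss rows.
```python
# assumes `documents` is the full list of row-chunks
retriever = vectorstore.as_retriever(search_kwargs={"k": len(documents)})
```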
### Idea or request for content:
Below is the code:
```
# Iterate over the sorted file paths and create a loader for each file
loaders = [CSVLoader(file_path=file_path, metadata_columns=['cricket'], encoding="utf-8") for file_path in csv_files_sorted]
# Optional: If you need to combine the data from all loaders
documents = []
for loader in loaders:
data = loader.load()
documents.extend(data)
# Instantiate the OpenAIEmbeddings class
openai = OpenAIEmbeddings()
# Create a FAISS vector store from the embeddings
vectorstore = FAISS.from_documents(documents, openai)
retriever = vectorstore.as_retriever(search_kwargs={"k": 5})
# Define your prompt template
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else. If no context, then no answer.
Helpful Answer:"""
# Answer a question related to 'Cricket'
category = 'engie'
qa_chain = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0.2),
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# Format the prompt using the template
context = ""
question = "what for it strives?"
formatted_prompt = prompt_template.format(context=context, question=question)
# Pass the formatted prompt to the RetrievalQA function
llm_response = qa_chain(formatted_prompt)
process_llm_response(llm_response)
```
Can you help me with the code? | how to get the answers for all the rows present in the excel using LLM? | https://api.github.com/repos/langchain-ai/langchain/issues/17730/comments | 1 | 2024-02-19T12:34:13Z | 2024-02-20T02:30:11Z | https://github.com/langchain-ai/langchain/issues/17730 | 2,142,270,974 | 17,730 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Let's say I have an Excel file containing 30 rows, and I need to find answers for each row individually. When using the RetrievalQAChain approach, the retriever typically selects only the top-k results, potentially missing information from other rows. To address this, I'd like to bypass the retriever by uploading the Excel data into a vector store and directly query the Large Language Model (LLM) to obtain answers for each of the 30 rows. How can this be accomplished?
### Idea or request for content:
Below is the code I used: it loads the CSV data, computes embeddings, and stores them in a vector DB. Now, how do I use a QA chain to get an answer for every row? (A sketch follows the code below.)
```
# Iterate over the sorted file paths and create a loader for each file
loaders = [CSVLoader(file_path=file_path, metadata_columns=['cricket'], encoding="utf-8") for file_path in csv_files_sorted]
# Optional: If you need to combine the data from all loaders
documents = []
for loader in loaders:
data = loader.load()
documents.extend(data)
# Instantiate the OpenAIEmbeddings class
openai = OpenAIEmbeddings()
# Create a FAISS vector store from the embeddings
vectorstore = FAISS.from_documents(documents, openai)
```
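A minimal sketch of the per-row idea mentioned above, assuming each loaded `Document` corresponds to one CSV row; the chain is fed exactly one row at a time, so no retriever can skip it:
```python
from langchain.chains.question_answering import load_qa_chain
from langchain_openai import OpenAI

qa_chain = load_qa_chain(OpenAI(temperature=0.2), chain_type="stuff")

question = "what for it strives?"
answers = []
for doc in documents:  # one Document per spreadsheet row
    result = qa_chain({"input_documents": [doc], "question": question})
    answers.append(result["output_text"])
```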
| how to get answer for the question without using retriever? | https://api.github.com/repos/langchain-ai/langchain/issues/17729/comments | 3 | 2024-02-19T12:28:39Z | 2024-02-20T02:30:11Z | https://github.com/langchain-ai/langchain/issues/17729 | 2,142,260,464 | 17,729 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```python
logging.info("Loading documents from Azure Blob Storage.")
docs = load_documents_from_blob()
logging.info("Splitting the loaded documents into chunks.")
splits = partition_text_into_chunks(docs)
embeddings = AzureOpenAIEmbeddings(
azure_deployment=os.environ["EMBEDDING_MODEL_DEPLOYMENT"],
openai_api_version="2023-05-15",
)
logging.info("Connecting to Azure Cognitive Search...")
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=azure_search_endpoint,
azure_search_key=azure_search_key,
index_name=index_name,
embedding_function=embeddings.embed_query,
)
logging.info(
"Indexing the split documents into Azure Cognitive Search for documents."
)
vector_store.add_documents(documents=splits)
```
### Error Message and Stack Trace (if applicable)
Error:
```
INFO:root:Loading documents from Azure Blob Storage.
INFO:root:Preparing to load data from Azure Blob Storage.
INFO:pikepdf._core:pikepdf C++ to Python logger bridge initialized
INFO:root:Successfully loaded 39 documents from Azure Blob Storage.
INFO:root:Splitting the loaded documents into chunks.
INFO:root:Initializing text splitter with chunk size of 1000 and overlap of 100 characters.
WARNING:langchain.text_splitter:Created a chunk of size 1370, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1235, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 2133, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 6548, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1901, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 5381, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 2180, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1978, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 3180, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 3180, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 3180, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 6581, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 2482, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1266, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1266, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1424, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1353, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1264, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1782, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1285, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1317, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 6141, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1719, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 6119, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1025, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 3017, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1080, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1140, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1365, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1166, which is longer than the specified 1000
WARNING:langchain.text_splitter:Created a chunk of size 1006, which is longer than the specified 1000
INFO:root:Successfully split 39 documents.
INFO:root:Initializing the Azure Cognitive Search model for documents.
INFO:root:Initializing embeddings...
INFO:root:Connecting to Azure Cognitive Search...
INFO:httpx:HTTP Request: POST https://mobiz-gpt-4-deployment.openai.azure.com//openai/deployments/ada-002/embeddings?api-version=2023-05-15 "HTTP/1.1 200 OK"
Traceback (most recent call last):
File "/home/ayaz/Desktop/dev/env_sqllatest/lib/python3.10/site-packages/langchain_community/vectorstores/azuresearch.py", line 111, in _get_search_client
index_client.get_index(name=index_name)
File "/home/user/Desktop/dev/env_sqllatest/lib/python3.10/site-packages/azure/core/tracing/decorator.py", line 78, in wrapper_use_tracer
return func(*args, **kwargs)
File "/home/user/Desktop/dev/env_sqllatest/lib/python3.10/site-packages/azure/search/documents/indexes/_search_index_client.py", line 144, in get_index
result = self._client.indexes.get(name, **kwargs)
File "/home/user/Desktop/dev/env_sqllatest/lib/python3.10/site-packages/azure/core/tracing/decorator.py", line 78, in wrapper_use_tracer
return func(*args, **kwargs)
File "/home/user/Desktop/dev/env_sqllatest/lib/python3.10/site-packages/azure/search/documents/indexes/_generated/operations/_indexes_operations.py", line 864, in get
map_error(status_code=response.status_code, response=response, error_map=error_map)
File "/home/user/Desktop/dev/env_sqllatest/lib/python3.10/site-packages/azure/core/exceptions.py", line 164, in map_error
raise error
azure.core.exceptions.ResourceNotFoundError: () No index with the name 'apollo-knowledge-base' was found in the service 'knowledge-bot-basic-15'.
Code:
Message: No index with the name 'apollo-knowledge-base' was found in the service 'knowledge-bot-basic-15'.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/Desktop/dev/ALL DEV CODE/hybrid-sql-agent/backend/main.py", line 3, in <module>
from api.routes import hybrid_agent
File "/home/user/Desktop/dev/ALL DEV CODE/hybrid-sql-agent/backend/api/routes/hybrid_agent.py", line 7, in <module>
from ai.main import apollo_conversation_chain
File "/home/user/Desktop/dev/ALL DEV CODE/hybrid-sql-agent/backend/ai/main.py", line 28, in <module>
acs_documents, acs_fewshots = process_and_index_data_to_azure()
File "/home/user/Desktop/dev/ALL DEV CODE/hybrid-sql-agent/backend/ai/documents_processing.py", line 486, in process_and_index_data_to_azure
acs_documents = configure_azure_search_for_documents()
File "/home/user/Desktop/dev/ALL DEV CODE/hybrid-sql-agent/backend/ai/documents_processing.py", line 406, in configure_azure_search_for_documents
vector_store: AzureSearch = AzureSearch(
File "/home/user/Desktop/dev/env_sqllatest/lib/python3.10/site-packages/langchain_community/vectorstores/azuresearch.py", line 268, in __init__
self.client = _get_search_client(
File "/home/user/Desktop/dev/env_sqllatest/lib/python3.10/site-packages/langchain_community/vectorstores/azuresearch.py", line 144, in _get_search_client
vector_search = VectorSearch(
NameError: name 'VectorSearch' is not defined. Did you mean: 'vector_search'?
```
### Description
My environment spec: I have used the latest versions of LangChain and the Azure SDK.
Everything else works, but creating the index fails with the error above. Pinned requirements:
```
langchain==0.1.7
langchain-community==0.0.20
langchain-core==0.1.23
langchain-openai==0.0.6
azure-common==1.1.28
azure-core==1.30.0
azure-identity==1.15.0
azure-search-documents==11.4.0
azure-storage-blob==12.19.0
fastapi==0.109.2
uvicorn==0.27.1
python-dotenv==1.0.1
pandas==2.2.0
unstructured==0.12.4
python-docx==1.1.0
unstructured[pdf]
```
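A possible workaround, assuming the `NameError` comes from a mismatch between `langchain-community` 0.0.20 and the GA `azure-search-documents` 11.4.0 release (the vectorstore's version-gated imports only define `VectorSearch` for the beta SDK): pin the beta build, or upgrade `langchain-community` to a release that supports the GA SDK.
```
# requirements pin to try (an assumption, not a confirmed fix):
azure-search-documents==11.4.0b8
```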
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #18~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Feb 7 11:40:03 UTC 2
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.0.87
> langchain_openai: 0.0.6
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
```

**pip freeze output:**

```
aiohttp==3.9.3
aiosignal==1.3.1
annotated-types==0.6.0
antlr4-python3-runtime==4.9.3
anyio==4.3.0
async-timeout==4.0.3
attrs==23.2.0
azure-common==1.1.28
azure-core==1.30.0
azure-identity==1.15.0
azure-search-documents==11.4.0
azure-storage-blob==12.19.0
backoff==2.2.1
beautifulsoup4==4.12.3
certifi==2024.2.2
cffi==1.16.0
chardet==5.2.0
charset-normalizer==3.3.2
click==8.1.7
coloredlogs==15.0.1
contourpy==1.2.0
cryptography==42.0.3
cycler==0.12.1
dataclasses-json==0.6.4
dataclasses-json-speakeasy==0.5.11
Deprecated==1.2.14
distro==1.9.0
effdet==0.4.1
emoji==2.10.1
exceptiongroup==1.2.0
fastapi==0.109.2
filelock==3.13.1
filetype==1.2.0
flatbuffers==23.5.26
fonttools==4.49.0
frozenlist==1.4.1
fsspec==2024.2.0
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.3
httpx==0.26.0
huggingface-hub==0.20.3
humanfriendly==10.0
idna==3.6
iopath==0.1.10
isodate==0.6.1
Jinja2==3.1.3
joblib==1.3.2
jsonpatch==1.33
jsonpath-python==1.0.6
jsonpointer==2.4
kiwisolver==1.4.5
langchain==0.1.7
langchain-community==0.0.20
langchain-core==0.1.23
langchain-openai==0.0.6
langdetect==1.0.9
langsmith==0.0.87
layoutparser==0.3.4
lxml==5.1.0
MarkupSafe==2.1.5
marshmallow==3.20.2
matplotlib==3.8.3
mpmath==1.3.0
msal==1.26.0
msal-extensions==1.1.0
multidict==6.0.5
mypy-extensions==1.0.0
networkx==3.2.1
nltk==3.8.1
numpy==1.26.4
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.19.3
nvidia-nvjitlink-cu12==12.3.101
nvidia-nvtx-cu12==12.1.105
omegaconf==2.3.0
onnx==1.15.0
onnxruntime==1.15.1
openai==1.12.0
opencv-python==4.9.0.80
packaging==23.2
pandas==2.2.0
pdf2image==1.17.0
pdfminer.six==20221105
pdfplumber==0.10.4
pikepdf==8.13.0
pillow==10.2.0
pillow_heif==0.15.0
portalocker==2.8.2
protobuf==4.25.3
pycocotools==2.0.7
pycparser==2.21
pydantic==2.6.1
pydantic_core==2.16.2
PyJWT==2.8.0
pyparsing==3.1.1
pypdf==4.0.2
pypdfium2==4.27.0
pytesseract==0.3.10
python-dateutil==2.8.2
python-docx==1.1.0
python-dotenv==1.0.1
python-iso639==2024.2.7
python-magic==0.4.27
python-multipart==0.0.9
pytz==2024.1
PyYAML==6.0.1
rapidfuzz==3.6.1
regex==2023.12.25
requests==2.31.0
safetensors==0.4.2
scipy==1.12.0
six==1.16.0
sniffio==1.3.0
soupsieve==2.5
SQLAlchemy==2.0.27
starlette==0.36.3
sympy==1.12
tabulate==0.9.0
tenacity==8.2.3
tiktoken==0.6.0
timm==0.9.12
tokenizers==0.15.2
torch==2.2.0
torchvision==0.17.0
tqdm==4.66.2
transformers==4.37.2
triton==2.2.0
typing-inspect==0.9.0
typing_extensions==4.9.0
tzdata==2024.1
unstructured==0.12.4
unstructured-client==0.18.0
unstructured-inference==0.7.23
unstructured.pytesseract==0.3.12
urllib3==2.2.1
uvicorn==0.27.1
wrapt==1.16.0
yarl==1.9.4
```
| AzureSearch giving error during creation of index (NameError: name 'VectorSearch' is not defined. Did you mean: 'vector_search'? ) | https://api.github.com/repos/langchain-ai/langchain/issues/17725/comments | 3 | 2024-02-19T11:03:00Z | 2024-05-31T23:49:25Z | https://github.com/langchain-ai/langchain/issues/17725 | 2,142,104,102 | 17,725 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
memory = ConversationBufferWindowMemory(
memory_key="memory",
return_messages=True,
k=config.INTERACTIONS_IN_MEMORY,
chat_memory=PostgresChatMessageHistory(
connection_string=config.CONNECTION_STRING,
session_id=request.args.get("session"),
),
)
prompt = ChatPromptTemplate.from_template(GENERAL_SYSTEM_PROMPT)
chain = (
{
"context": get_collection(
request.args.get("collection"), embeddings
).as_retriever(),
"input": RunnablePassthrough(),
"history": RunnableLambda(memory.load_memory_variables)
}
| prompt
| llm
| StrOutputParser()
)
ai_response = await chain.ainvoke(request.form.get("input"))
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Memory is not referenced when invoking the chain: the chain should be able to reference previous messages, but every invocation behaves as if the history were empty.
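Two likely culprits, judging from the snippet (hedged, since `GENERAL_SYSTEM_PROMPT` is not shown): the prompt's `history` slot receives the raw dict returned by `load_memory_variables`, which nests everything under the configured `memory_key` (here `"memory"`), and nothing ever calls `save_context`, so the stored history stays empty. A minimal sketch reusing the names defined above:
```python
from operator import itemgetter

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableLambda, RunnablePassthrough

chain = (
    {
        "context": get_collection(request.args.get("collection"), embeddings).as_retriever(),
        "input": RunnablePassthrough(),
        # unwrap the value stored under memory_key="memory"
        "history": RunnableLambda(memory.load_memory_variables) | itemgetter("memory"),
    }
    | prompt
    | llm
    | StrOutputParser()
)

# inside the async request handler, as in the snippet above
user_input = request.form.get("input")
ai_response = await chain.ainvoke(user_input)
# write the turn back so the next invocation can see it
memory.save_context({"input": user_input}, {"output": ai_response})
```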
### System Info
aiofiles==23.2.1
aiohttp==3.9.2
aiosignal==1.3.1
aiosqlite==0.17.0
annotated-types==0.6.0
anyio==4.2.0
async-timeout==4.0.3
asyncpg==0.29.0
attrs==23.2.0
black==24.1.1
certifi==2023.11.17
charset-normalizer==3.3.2
click==8.1.7
colorama==0.4.6
dataclasses-json==0.6.3
distro==1.9.0
exceptiongroup==1.2.0
frozenlist==1.4.1
greenlet==3.0.3
h11==0.14.0
html5tagger==1.3.0
httpcore==1.0.2
httptools==0.6.1
httpx==0.26.0
idna==3.6
iso8601==1.1.0
jsonpatch==1.33
jsonpointer==2.4
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.17
langchain-openai==0.0.5
langsmith==0.0.84
marshmallow==3.20.2
multidict==6.0.4
mypy-extensions==1.0.0
numpy==1.26.3
openai==1.10.0
packaging==23.2
pathspec==0.12.1
pgvector==0.2.4
platformdirs==4.1.0
psycopg==3.1.18
psycopg-binary==3.1.18
psycopg-pool==3.2.1
psycopg2-binary==2.9.9
pydantic==2.6.0
pydantic_core==2.16.1
pypdf==4.0.1
pypika-tortoise==0.1.6
pytz==2023.4
PyYAML==6.0.1
regex==2023.12.25
requests==2.31.0
sanic==23.12.1
sanic-routing==23.12.0
sniffio==1.3.0
SQLAlchemy==2.0.25
tenacity==8.2.3
tiktoken==0.5.2
tomli==2.0.1
tortoise-orm==0.20.0
tqdm==4.66.1
tracerite==1.1.1
typing-inspect==0.9.0
typing_extensions==4.9.0
tzdata==2024.1
urllib3==2.1.0
websockets==12.0
yarl==1.9.4 | Having trouble implementing memory. | https://api.github.com/repos/langchain-ai/langchain/issues/17719/comments | 5 | 2024-02-19T09:03:23Z | 2024-02-19T10:12:10Z | https://github.com/langchain-ai/langchain/issues/17719 | 2,141,861,804 | 17,719 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
from langserve import RemoteRunnable
database_answer = RemoteRunnable("http://localhost:2031/database_answer/")
database_answer.invoke({"input": "nba2k23 email address"}, {"configurable": {"session_id": "6666"}})
# get return
{'input': 'nba2k23 email address',
'answer': 'nba2k23 email address is [email protected]。'}
```
Behind the `RemoteRunnable`, the server-side code is as below:
```
local_embedding = HuggingFaceEmbeddings(model_name="/root/autodl-tmp/bge-large-zh-v1.5", multi_process=True)
local_vdb = FAISS.load_local("/xg/packages/database/vector", local_embedding, "default")
llm = ChatOpenAI(
api_key="EMPTY",
base_url="http://127.0.0.1:8000/v1",
temperature=0,
max_tokens=600,
model="qwen-14b-chat"
)
prompt_1 = ChatPromptTemplate.from_messages(
[
("system", "you are a helpful assiant"),
MessagesPlaceholder(variable_name="history"),
("human", "请根据数据内容作答,若问题与数据无关,则自行作答。数据:{context} 问题:{input}")
]
)
retriever = local_vdb.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={"k": 3, 'score_threshold': 0.3}
)
database_answer = RunnableWithMessageHistory(
create_retrieval_chain(retriever, create_stuff_documents_chain(llm, prompt_1)),
RedisChatMessageHistory,
input_messages_key="input",
history_messages_key="history",
output_messages_key="answer"
)
database_answer.invoke({"input": "nba2k23 email address"}, {"configurable": {"session_id": "asdasd"}})
# get return
{'input': 'nba2k23 email address',
'history': [],
'context': [Document(page_content='# 【 NBA 2K23 】游戏官方联系方式是什么\n参考资料( https://too.st/80p )\n欲联系客服,请发邮件至 [email protected]。', metadata={'一级标题': '【 NBA 2K23 】游戏官方联系方式是什么'}),
Document(page_content='# 【 NBA 2K23 】注册教程( 怎么注册 )\n参考资料( https://too.st/80i )\nGoogle( 谷歌账号 ):登录【 谷歌官网 】> 点击【 注册谷歌账号 】> 输入所要求填写的信息( 注册时会有填空框 )> 按照指引操作即可。Xbox( 微软账号 ):百度搜索 "Xbox 官网" 进入官网( 港服网页后缀带 "HK",国服则带 "CN" )> 按照提示【 创建账户 】即可。PS( 索尼 PlayStation 账号 ):百度搜索 "PlayStation 官网" 进入官网( 港服网页后缀带 "HK" )> 按照提示【 创建账户 】即可。', metadata={'一级标题': '【 NBA 2K23 】注册教程( 怎么注册 )'}),
Document(page_content='# 【 NBA 2K23 】登不上怎么办( 怎么登录 )\n参考资料( https://too.st/80j )\n可能的原因:Xbox、PS、Google 平台服务器问题、自身网络环境不稳定、未安装 "谷歌套件"、使用【 Xbox账号 】 登录时未进行验证。方法一:使用【 网络加速器 】,如 biubiu 加速器 > 加速【 NBA 2K MyTEAM 】> 重启游戏 > 再次登录( 可多次尝试 )。方法二;在光环助手 APP 下载【 谷歌安装器 】,启动安装器安装谷歌套件并重启游戏。方法三;切换登录账号,如【 Google 账号 】无法登录,则换成【 Xbox 账号】或【 PS 账号 】再登录试试。方法四:WIFI 换成数据网络或数据网络换成 WIFI 之后再重启游戏( 可多次尝试 ),不行可再尝试切换加速游戏或节点。方法五:换个时间段再试( 如早上时间段无法登录,则等到下午或晚上再试 )。', metadata={'一级标题': '【 NBA 2K23 】登不上怎么办( 怎么登录 )'})],
'answer': 'nba2k23 email address is [email protected]。'}
```
### Error Message and Stack Trace (if applicable)
No error; the output is just not what I expected.
### Description
Invoking through `RemoteRunnable` and invoking the same chain locally return different amounts of information: the remote call returns only `input` and `answer`, while the local call also returns `history` and `context`.
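A diagnostic sketch (it assumes langserve's default `/invoke` REST endpoint and reuses the session id from the example): calling the endpoint directly shows whether the server already drops `history` and `context` during serialization, or whether the `RemoteRunnable` client does.
```python
import httpx

resp = httpx.post(
    "http://localhost:2031/database_answer/invoke",
    json={
        "input": {"input": "nba2k23 email address"},
        "config": {"configurable": {"session_id": "6666"}},
    },
)
print(resp.json()["output"].keys())  # which keys survived server-side serialization?
```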
### System Info
python 3.9.18
ubuntu 20.04 | use RemoteRunnable invoke & local invoke, the return information not the same amount | https://api.github.com/repos/langchain-ai/langchain/issues/17703/comments | 12 | 2024-02-18T15:42:58Z | 2024-04-24T10:15:20Z | https://github.com/langchain-ai/langchain/issues/17703 | 2,141,049,422 | 17,703 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
```
# Self file
import os
import sys
current_dir = os.path.dirname(os.path.abspath(__file__))  # absolute path of the current file
parent_dir = os.path.dirname(current_dir)  # path of the parent directory
sys.path.append(parent_dir)  # add the parent directory to sys.path
from packages.core.api_invoke import return_mode_choose, base_answer, database_answer, database_answer_nolimit, clean_history, better_query
# Basic
from fastapi import FastAPI
from fastapi.responses import RedirectResponse
from langserve import add_routes
app = FastAPI(
title="XiaoGuangLLM",
version="1.2.0"
)
add_routes(
app,
return_mode_choose,
path="/return_mode_choose",
disabled_endpoints=["playground"]
)
add_routes(
app,
base_answer,
path="/base_answer",
disabled_endpoints=["playground"]
)
add_routes(
app,
database_answer,
path="/database_answer",
disabled_endpoints=["playground"]
)
add_routes(
app,
database_answer_nolimit,
path="/database_answer_nolimit",
disabled_endpoints=["playground"]
)
add_routes(
app,
clean_history,
path="/clean_history",
disabled_endpoints=["playground"]
)
add_routes(
app,
better_query,
path="/better_query",
disabled_endpoints=["playground"]
)
@app.get("/")
async def redirect_root_to_docs():
return RedirectResponse("/docs")
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)
```
### Error Message and Stack Trace (if applicable)
```
INFO: 123.185.63.166:0 - "POST /base_answer/invoke HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
RuntimeError: super(): __class__ cell not found
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/starlette/routing.py", line 758, in __call__
await self.middleware_stack(scope, receive, send)
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/starlette/routing.py", line 778, in app
await route.handle(scope, receive, send)
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/starlette/routing.py", line 299, in handle
await self.app(scope, receive, send)
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/starlette/routing.py", line 79, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/starlette/routing.py", line 74, in app
response = await func(request)
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/fastapi/routing.py", line 299, in app
raise e
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/fastapi/routing.py", line 294, in app
raw_response = await run_endpoint_function(
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/langserve/server.py", line 464, in invoke
return await api_handler.invoke(request)
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/langserve/api_handler.py", line 720, in invoke
output=self._serializer.dumpd(output),
File "/root/miniconda3/envs/xg/lib/python3.9/site-packages/langserve/serialization.py", line 164, in dumpd
return orjson.loads(orjson.dumps(obj, default=default))
TypeError: Type is not JSON serializable: ModelMetaclass
```
### Description
Does anyone know how to fix this?
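A hedged guess at a workaround; the real cause depends on what `base_answer` returns, which is not shown here. langserve's serializer raises this `TypeError` when a chain's output contains a Pydantic model *class* (hence `ModelMetaclass`) rather than plain data, so coercing outputs to JSON-safe values before they reach `add_routes` may avoid the `orjson` failure:
```python
from langchain_core.runnables import RunnableLambda

def to_serializable(output):
    if isinstance(output, type):  # a class object leaked into the output
        return output.__name__
    if hasattr(output, "dict"):   # a Pydantic model instance
        return output.dict()
    return output

add_routes(
    app,
    base_answer | RunnableLambda(to_serializable),  # instead of the plain base_answer route
    path="/base_answer",
    disabled_endpoints=["playground"],
)
```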
### System Info
Ubuntu 20.04 LTS | langchain serve TypeError: Type is not JSON serializable: ModelMetaclass | https://api.github.com/repos/langchain-ai/langchain/issues/17700/comments | 10 | 2024-02-18T11:19:23Z | 2024-02-18T15:35:05Z | https://github.com/langchain-ai/langchain/issues/17700 | 2,140,935,720 | 17,700 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
### Example Code
```python
vectordb = Chroma.from_documents(documents=chunks, embedding=embeddings)
```
I am using this code, but I cannot get past the following `AttributeError`:
```
AttributeError                            Traceback (most recent call last)
<ipython-input-46-6f2359e6b5b4> in <cell line: 1>()
----> 1 vectordb = Chroma.from_documents(documents=chunks
      2                                  , embedding=embeddings)

1 frames
/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/chroma.py in <listcomp>(.0)
    774             Chroma: Chroma vectorstore.
    775         """
--> 776         texts = [doc.page_content for doc in documents]
    777         metadatas = [doc.metadata for doc in documents]
    778         return cls.from_texts(

AttributeError: 'str' object has no attribute 'page_content'
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am using LangChain to create chunks and compute vector embeddings with Chroma DB, but `Chroma.from_documents` rejects my chunks. How can I change the code so the chunks are accepted?
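The traceback shows that `chunks` contains plain strings, while `Chroma.from_documents` expects `Document` objects with a `page_content` attribute. A minimal sketch of two fixes, assuming `chunks` and `embeddings` are as in the snippet above:
```python
from langchain_core.documents import Document
from langchain_community.vectorstores import Chroma

# option 1: wrap each string in a Document
docs = [Document(page_content=chunk) for chunk in chunks]
vectordb = Chroma.from_documents(documents=docs, embedding=embeddings)

# option 2: Chroma.from_texts accepts raw strings directly
# vectordb = Chroma.from_texts(texts=chunks, embedding=embeddings)
```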
### System Info
I am using a Mac. | str' object has no attribute 'page_content' | https://api.github.com/repos/langchain-ai/langchain/issues/17699/comments | 1 | 2024-02-18T10:58:13Z | 2024-06-01T00:19:31Z | https://github.com/langchain-ai/langchain/issues/17699 | 2,140,924,488 | 17,699 |