issue_owner_repo (sequencelengths 2-2) | issue_body (stringlengths 0-261k, ⌀ nullable) | issue_title (stringlengths 1-925) | issue_comments_url (stringlengths 56-81) | issue_comments_count (int64 0-2.5k) | issue_created_at (stringlengths 20-20) | issue_updated_at (stringlengths 20-20) | issue_html_url (stringlengths 37-62) | issue_github_id (int64 387k-2.46B) | issue_number (int64 1-127k) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_pinecone import PineconeVectorStore
from langchain_community.embeddings.huggingface import HuggingFaceEmbeddings
index_name = "langchain-test-index"
embeddings = HuggingFaceEmbeddings()
docsearch = PineconeVectorStore(index_name, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
```
### Error Message and Stack Trace (if applicable)
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-9-87c0f7711524>](https://localhost:8080/#) in <cell line: 2>()
1 query = "What did the president say about Ketanji Brown Jackson"
----> 2 docs = docsearch.similarity_search(query)
3 print(docs[0].page_content)
2 frames
[/usr/local/lib/python3.10/dist-packages/langchain_pinecone/vectorstores.py](https://localhost:8080/#) in similarity_search_by_vector_with_score(self, embedding, k, filter, namespace)
207 namespace = self._namespace
208 docs = []
--> 209 results = self._index.query(
210 vector=embedding,
211 top_k=k,
AttributeError: 'str' object has no attribute 'query'
```
### Description
I was following the Pinecone section of the documentation. The loading part works just fine, but retrieving with any of the search functions gives the error above, while the same operation works fine with the raw pinecone-client. To reproduce the error, I used the code from the documentation.
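A hedged workaround sketch (an inference from the traceback, not something stated in the report): the first positional argument of `PineconeVectorStore` appears to be treated as the Pinecone index object rather than its name, so passing the arguments by keyword may avoid the `'str' object has no attribute 'query'` error:
```python
# Hedged sketch, assuming langchain-pinecone 0.1.0: pass index_name and embedding
# explicitly by keyword instead of positionally.
docsearch = PineconeVectorStore(index_name=index_name, embedding=embeddings)

# Or use the dedicated constructor for an existing index:
docsearch = PineconeVectorStore.from_existing_index(index_name, embeddings)
```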
### System Info
langchain==0.1.16
langchain-community==0.0.34
langchain-core==0.1.46
langchain-openai==0.1.4
langchain-pinecone==0.1.0
langchain-text-splitters==0.0.1 | langchain-pinecone retreive functions error : 'str' object has no attribute 'query' | https://api.github.com/repos/langchain-ai/langchain/issues/20993/comments | 3 | 2024-04-28T16:30:01Z | 2024-04-29T19:25:31Z | https://github.com/langchain-ai/langchain/issues/20993 | 2,267,669,257 | 20,993 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_openai import OpenAI
### Error Message and Stack Trace (if applicable)
from langchain_openai import OpenAI
2024-04-28T15:21:34.177321153Z File "/workspace/venv/lib/python3.10/site-packages/langchain_openai/__init__.py", line 1, in <module>
2024-04-28T15:21:34.177325586Z from langchain_openai.chat_models import (
2024-04-28T15:21:34.177330011Z File "/workspace/venv/lib/python3.10/site-packages/langchain_openai/chat_models/__init__.py", line 1, in <module>
2024-04-28T15:21:34.177335117Z from langchain_openai.chat_models.azure import AzureChatOpenAI
2024-04-28T15:21:34.177339632Z File "/workspace/venv/lib/python3.10/site-packages/langchain_openai/chat_models/azure.py", line 13, in <module>
2024-04-28T15:21:34.177344081Z from langchain_openai.chat_models.base import ChatOpenAI
2024-04-28T15:21:34.177349992Z File "/workspace/venv/lib/python3.10/site-packages/langchain_openai/chat_models/base.py", line 42, in <module>
2024-04-28T15:21:34.177354420Z from langchain_core.messages import (
2024-04-28T15:21:34.177358965Z ImportError: cannot import name 'InvalidToolCall' from 'langchain_core.messages' (/workspace/venv/lib/python3.10/site-packages/langchain_core/messages/__init__.py)
### Description
I am trying to import OpenAI from langchain_openai.
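A hedged note (an inference, not part of the original report): `InvalidToolCall` only exists in fairly recent `langchain-core` releases, so an older core package sitting next to a newer `langchain-openai` would explain this import failure. Upgrading both together, for example with `pip install -U langchain-core langchain-openai`, is a reasonable first thing to try.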
### System Info
langchain latest version | ImportError: cannot import name 'InvalidToolCall' from 'langchain_core.messages' (/workspace/venv/lib/python3.10/site-packages/langchain_core/messages/__init__.py | https://api.github.com/repos/langchain-ai/langchain/issues/20991/comments | 16 | 2024-04-28T15:26:34Z | 2024-08-07T01:19:21Z | https://github.com/langchain-ai/langchain/issues/20991 | 2,267,635,125 | 20,991 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
no at the moment
### Error Message and Stack Trace (if applicable)
_No response_
### Description
In LangGraph we use chain.ainvoke() in an inner node and app.astream_events() for the whole graph app, but we found that chain.ainvoke() outputs streaming tokens.
When I dived into the code, I found that this may be because of the following code:
https://github.com/langchain-ai/langchain/blob/804390ba4bcc306b90cb6d75b7f01a4231ab6463/libs/core/langchain_core/language_models/chat_models.py#L684-L701
`type(self)._astream` is `ChatOpenAI._astream`.
`kwargs` is an empty `{}`, which does not have the `stream` key from `params`.
The LangGraph app has a `LogStreamCallbackHandler` in `run_manager.handlers`.
So the `if` statement is `True` and the code generates streaming output, which is not expected.
Maybe you should add the `stream` key to `kwargs` from `params`. I have tried this and it solved my problem.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:34 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T8103
> Python Version: 3.11.2 (main, Feb 21 2024, 12:24:36) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.46
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.22
> langchain_openai: 0.0.6
> langchain_text_splitters: 0.0.1
> langgraph: 0.0.39
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | chain.ainvoke() will result in streaming output | https://api.github.com/repos/langchain-ai/langchain/issues/20980/comments | 5 | 2024-04-28T06:41:22Z | 2024-07-04T04:19:09Z | https://github.com/langchain-ai/langchain/issues/20980 | 2,267,382,986 | 20,980 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The documentation here: https://python.langchain.com/docs/integrations/llms/huggingface_endpoint/ has incorrect import code (not sure if a recent change caused it to stop working).
Currently states: from langchain_community.llms import HuggingFaceEndpoint
Correct: from langchain_community.llms**.huggingface_endpoint** import HuggingFaceEndpoint
Found while trying to follow the documentation
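Spelled out as code, the two imports being compared (taken directly from the report):
```python
# Import currently shown in the documentation (reported as failing):
# from langchain_community.llms import HuggingFaceEndpoint

# Import the reporter found to work:
from langchain_community.llms.huggingface_endpoint import HuggingFaceEndpoint
```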
### Idea or request for content:
A minor change needs to be made to the first code block on the page.
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```bash
cd langchain
poetry install --with lint,docs --no-root
make clean
make api_docs_build
```
### Error Message and Stack Trace (if applicable)
Running `make api_docs_build` as is results in this error
```bash
poetry run python docs/api_reference/create_api_rst.py
Starting to build API reference files.
Building package: community
Building package: text-splitters
Building package: core
Building package: experimental
Building package: standard-tests
pyproject.toml not found in /home/karim/Projects/langchain/libs/partners/standard-tests.
You are either attempting to build a directory which is not a package or the package is missing a pyproject.toml file which should be added.Aborting the build.
make: *** [Makefile:37: api_docs_build] Error 1
```
Excluding `standard-tests` and running the same command results in this error:
```bash
API reference files built.
cd docs/api_reference && poetry run make html
Running Sphinx v4.5.0
Extension error:
Could not import extension sphinxcontrib.autodoc_pydantic (exception: `BaseSettings` has been moved to the `pydantic-settings` package. See https://docs.pydantic.dev/2.6/migration/#basesettings-has-moved-to-pydantic-settings for more details.
For further information visit https://errors.pydantic.dev/2.6/u/import-error)
make[1]: *** [html] Error 2
make: *** [api_docs_build] Error 2
```
### Description
* When I run `make api_docs_build` I get an error regarding `lib/standard-tests` folder. This is likely causing the first problem.
* When I exclude `lib/standard-tests` folder, I get a migration error from pydantic
* Adding `standard-tests` to the exclusions list and upgrading `autodoc_pydantic` to `1.9.0` eliminates both issues
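A minimal sketch of the second part of that workaround, assuming the PyPI package name is `autodoc_pydantic` and the docs virtualenv is managed by poetry: `poetry run pip install -U "autodoc_pydantic>=1.9.0"`.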
### System Info
```bash
pip freeze | grep langchain
-e git+ssh://[email protected]/langchain-ai/langchain@f1a0614f3ba896b3168f0faad79ffb97df91ba6e#egg=langchain&subdirectory=libs/langchain
-e git+ssh://[email protected]/langchain-ai/langchain@f1a0614f3ba896b3168f0faad79ffb97df91ba6e#egg=langchain_community&subdirectory=libs/community
-e git+ssh://[email protected]/langchain-ai/langchain@f1a0614f3ba896b3168f0faad79ffb97df91ba6e#egg=langchain_core&subdirectory=libs/core
-e git+ssh://[email protected]/langchain-ai/langchain@f1a0614f3ba896b3168f0faad79ffb97df91ba6e#egg=langchain_experimental&subdirectory=libs/experimental
-e git+ssh://[email protected]/langchain-ai/langchain@f1a0614f3ba896b3168f0faad79ffb97df91ba6e#egg=langchain_openai&subdirectory=libs/partners/openai
-e git+ssh://[email protected]/langchain-ai/langchain@f1a0614f3ba896b3168f0faad79ffb97df91ba6e#egg=langchain_text_splitters&subdirectory=libs/text-splitters
```
platform mac and wsl
```bash
python --version
Python 3.10.12
``` | [build] make api_docs_build fails - standard-tests being pulled into api_docs_build and olded version of autodoc_pydantic being used | https://api.github.com/repos/langchain-ai/langchain/issues/20972/comments | 0 | 2024-04-27T19:44:08Z | 2024-08-03T16:08:31Z | https://github.com/langchain-ai/langchain/issues/20972 | 2,267,160,246 | 20,972 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from dotenv import load_dotenv
import duckdb
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import DuckDB
from langchain_core.documents import Document
from time import time
load_dotenv()
TABLE_NAME = "embeddings"
documents = [Document(page_content="Jacek is the best software engineer in the world", metadata={"id": "1"})]
db_conn = duckdb.connect('./test.DUCKDB')
try:
start_exists = time()
print("Checking table exists")
table = db_conn.table(TABLE_NAME)
table.show()
vector_store = DuckDB(connection=db_conn, table_name=TABLE_NAME, embedding=OpenAIEmbeddings(), vector_key="embedding")
print(f"Table exists check took {time() - start_exists} seconds")
except Exception as e:
start_not_exists = time()
print(f"Table does not exist, create it from documents")
vector_store = DuckDB.from_documents(documents, connection=db_conn, table_name=TABLE_NAME, embedding=OpenAIEmbeddings(), vector_key="embedding")
print(f"Table does not exist, took {time() - start_not_exists} seconds")
start_search = time()
query = "Who is the best software engineer in the world?"
docs = vector_store.similarity_search(query)
print(f"Search result: {docs}")
print(f"Search took {time() - start_search} seconds")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I use DuckDB as vector store.
When I execute `similarity_search`, I expect the `distance` property to be returned as part of the result documents' metadata.
I discussed this issue with the DuckDB community and we agreed that it is a bug and that the distance should be returned.
I am going to fix it.
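A hypothetical illustration (not the actual patch) of the behaviour being requested; the metadata key name is an assumption:
```python
# After the fix, each returned document would carry its similarity/distance score:
docs = vector_store.similarity_search(query)
for doc in docs:
    print(doc.page_content, doc.metadata.get("distance"))
```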
### System Info
- Ubuntu 22.04
- python 3.10.12
- langchain==0.1.16
- openai==1.13.3
- langchain_openai==0.1.1
- duckdb==0.10.2
| DuckDB: distance/similarity property not reported to documents returned by similarity_search | https://api.github.com/repos/langchain-ai/langchain/issues/20969/comments | 0 | 2024-04-27T15:16:26Z | 2024-05-24T22:17:53Z | https://github.com/langchain-ai/langchain/issues/20969 | 2,267,058,281 | 20,969 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.llms import Ollama
llm = Ollama(model=params['model'],num_ctx=2048)#, num_predict=1100)
with open("./prompt.txt", encoding="utf-8") as fd:
doc_text=fd.read()
result = llm(doc_text)
print(result)
''' Setup prompt chains '''
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, StuffDocumentsChain, ReduceDocumentsChain, MapReduceDocumentsChain
from langchain.chains.mapreduce import MapReduceChain
from langchain_text_splitters import CharacterTextSplitter, RecursiveCharacterTextSplitter
# Map
map_prompt = PromptTemplate.from_template(params['map_template'])
map_chain = LLMChain(llm=llm, prompt=map_prompt)
# Reduce
reduce_prompt = PromptTemplate.from_template(params['reduce_template'])
# Run chain
reduce_chain = LLMChain(llm=llm, prompt=reduce_prompt)
# Takes a list of documents, combines them into a single string, and passes this to an LLMChain
combine_documents_chain = StuffDocumentsChain(
llm_chain=reduce_chain, document_variable_name="doc_summaries"
)
# Combines and iteravely reduces the mapped documents
reduce_documents_chain = ReduceDocumentsChain(
# This is final chain that is called.
combine_documents_chain=combine_documents_chain,
# If documents exceed context for `StuffDocumentsChain`
collapse_documents_chain=combine_documents_chain,
# The maximum number of tokens to group documents into.
token_max=params['reduce_token_max'],
)
# Combining documents by mapping a chain over them, then combining results
map_reduce_chain = MapReduceDocumentsChain(
# Map chain
llm_chain=map_chain,
# Reduce chain
reduce_documents_chain=reduce_documents_chain,
# The variable name in the llm_chain to put the documents in
document_variable_name="docs",
# Return the results of the map steps in the output
return_intermediate_steps=True,
)
print("Running map reduce summarisation...")
result = map_reduce_chain.invoke(docs)
```
### Error Message and Stack Trace (if applicable)
Running map reduce summarisation...
Token indices sequence length is longer than the specified maximum sequence length for this model (1993 > 1024). Running this sequence through the model will result in indexing errors
### Description
I am running MapReduceDocumentsChain with Ollama and the "llama3" model.
Llama3 has a context window of 8k tokens. Ollama is given the argument "num_ctx = 4096".
However, the message produced indicates that the maximum sequence length being accepted is 1024. This must be happening with the ReduceDocumentsChain. Why is this the case? Any solutions? Thank you.
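A hedged check (my inference, not something stated in the report): that exact warning is emitted by the Hugging Face GPT-2 tokenizer that LangChain falls back to when counting tokens inside the reduce step, so the 1024 limit may belong to the token counter rather than to llama3's real 8k window. Calling the counter directly, with no Ollama request involved, should reproduce it:
```python
# If this prints the same warning, the 1024 limit comes from the fallback GPT-2
# tokenizer used for token counting, not from the llama3 model served by Ollama.
n_tokens = llm.get_num_tokens(doc_text)
print(n_tokens)
```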
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.12.2 | packaged by Anaconda, Inc. | (main, Feb 27 2024, 17:28:07) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.46
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.43
> langchain_openai: 0.1.2
> langchain_text_splitters: 0.0.1 | Llama3 context window is 8k but Langchain with Ollama shows "Token indices sequence length is longer than the specified maximum sequence length for this model (1916 > 1024). " | https://api.github.com/repos/langchain-ai/langchain/issues/20967/comments | 1 | 2024-04-27T14:19:09Z | 2024-05-16T15:55:55Z | https://github.com/langchain-ai/langchain/issues/20967 | 2,267,035,278 | 20,967 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain.sql_database import SQLDatabase
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder,PromptTemplate
from langchain.tools import BaseTool
from langchain.tools.render import format_tool_to_openai_function
from langchain.schema.runnable import Runnable,RunnableLambda,RunnableParallel
from langchain.chat_models import ChatOpenAI
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.agents import AgentExecutor
from pydantic import BaseModel, Field
import os
from secret_key import openapi_key
from sqlalchemy import create_engine
import constants
from datetime import datetime
import re
from typing import Optional
# os.environ['OPENAI_API_KEY'] = 'sk-xxxxxxxxxxxx'
os.environ['OPENAI_API_KEY'] = openapi_key
def chat5(question: str):
# Define the SQL DML Chain Prompt Template
SQL_DML_CHAIN_PROMPT = """You are an expert in SQLITE. Your main objective is to construct Data manipulation SQLITE query given the
user question: {user_question}.
You need to construct the Data manipulation SQLITE query for the following Database Schema:
{table_info}
Only Output the final SQL-Query and nothing else.
SQL-Query:"""
SQL_DML_CHAIN_PROMPT = """You are expert in MS SQL. Your main objective is to construct Data manipulation MS SQL query give the
user question: {user_question}.
You need to construct the Data manipulation SQLITE query for the following Database Schema:
{table_info}
Check user query for INSERT, UPDATE or DELETE operation, based on that perform the sql query.
Wrapped column names: All column names should be wrapped in square brackets [] as delimiters.
Use GETDATE() to get the current date and time instead of DATETIME('now').
Consider the column name carefully from the PAY_transaction_settingallow table where the modification is done.
Take 'euid' in PAY_transaction_settingallow table from employee_details table which is 'employeeEuid' in employee_details table based on the matching 'employeeName' or 'employeeId' from employee_details table and stores it as 'euid' in the PAY_transaction_settingallow table. Similarly, to get the allowance ID ('alw_id') from the pay_mst_allowance table based on the matching allowance description ('alw_desc') or short name ('short_name'), check both 'alw_desc' and 'short_name'.
Pay attention to the column names in user query and sql query.
Perform JOIN operation to fetch euid and alw_id from respective tables not INNER JOIN.
Selected table: Specify PAY_transaction_settingallow as the table to update.
Employee and allowance selection: Use the WHERE clause to filter employees based on employeeName and allowances based on alw_desc.
Date handling:
'createDate' should be the date on which the values are inserted, current date, it can not be NULL.
'effect_date' should be the start date of month.
'to_date' should be the end date of month.
'updatedDate' should be the date on which values are updated and it should be included only when UPDATE keywoed is used i query.
UPDATE query should not change the 'createdDate'
Currency: Assume the amount to be in rupees.
If the users gives 'given by' in query enter that value in 'givenBy' column in PAY_transaction_settingallow table
Modify the 'givenBy' column only when given by is specified by user, if not dont take givenBy column.
Removed newlines: Write the query as a single string without newlines (\n).
Ensure the query executes efficiently and without errors.
If data or Values are already present in the table, dont again run the sql query.
If no modifcation is made in table dont not display the message.
If no rows are modifyed output its as "0 rows affected"
Only Output the message after execution of SQL-Query
If duplicate value is encountred with check_duplicate function Output the message like "The SQL query has been skipped due to duplicate data.".
-- Replace DATETIME('now') with GETDATE() for SQL Server compatibility
REPLACE(CAST('' AS VARCHAR(MAX)), 'DATETIME(''now'')', 'GETDATE()')
SQL-Query:
{user_question}
"""
# Define the prompt template
prompt = PromptTemplate(template=SQL_DML_CHAIN_PROMPT, input_variables=['user_question', 'table_info'])
from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"
engine = create_engine(connection_uri)
model_name = "get-3.5-turbo-16k"
# db = SQLDatabase(engine, view_support=True, include_tables=['HR_Testing'])
# db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
db = SQLDatabase(engine, view_support=True, include_tables=['PAY_transaction_settingallow', 'PAY_mst_allowance','employee_details'],sample_rows_in_table_info=5)
def check_duplicate(query: str, db: SQLDatabase) -> Optional[str]:
"""
Check for duplicate data before executing an INSERT query.
Args:
query (str): The SQL query to be checked.
db (SQLDatabase): The SQLDatabase object.
Returns:
Optional[str]: The original query if no duplicate is found, or None if a duplicate is detected.
"""
# Define a regular expression pattern to match the INSERT statement
pattern = r'INSERT INTO (\w+) \((.*?)\) VALUES \((.*?)\)'
# Attempt to match the pattern in the query
match = re.match(pattern, query)
# Check if the pattern matched successfully
if match:
# Extract table name, columns, and values from the matched groups
table_name = match.group(1)
columns = [col.strip() for col in match.group(2).split(',')]
values = [val.strip() for val in match.group(3).split(',')]
# Check if the table name matches the expected table name (e.g., 'PAY_transaction_settingallow')
if table_name != 'PAY_transaction_settingallow':
# If the table name doesn't match, return the original query
return query
# Extract the values of euid, alw_id, amount, and effect_date from the values list
euid_index = columns.index('euid')
alw_id_index = columns.index('alw_id')
amount_index = columns.index('amount')
# effect_date_index = columns.index('effect_date')
euid = values[euid_index]
alw_id = values[alw_id_index]
amount = values[amount_index]
# effect_date = values[effect_date_index]
# Construct a SELECT query to check for duplicates based on euid, alw_id, and effect_date
check_query = f"SELECT 1 FROM {table_name} WHERE euid = ? AND alw_id = ? AND amount = ?"
# Execute the check query with the values of euid, alw_id, and effect_date
cursor = db.cursor()
cursor.execute(check_query, (euid, alw_id, amount))
row = cursor.fetchone()
# If a row is found, return None to indicate that the query should not be executed
if row:
return "The SQL query has been skipped due to duplicate data."
else:
# No duplicates found, return the original query
return query
# If the pattern didn't match or the table name wasn't recognized, return the original query
return query
# Define the SQL DML Chain with the modified check_duplicate function
sql_dml_chain = RunnableParallel({
"user_question": lambda x: question,
"table_info": lambda _: db.get_table_info()
}) | \
prompt | \
ChatOpenAI().bind(stop='SQL-Query:') | \
RunnableLambda(lambda x:
check_duplicate(x.content, db) if 'INSERT' in x.content else x.content)
# Define the Chat Prompt Template
agent_prompt = ChatPromptTemplate.from_messages(
[
("system", """
You are expert in SQL whose main objective is to mainpulate the Database for which you have
been given access. You can use the tool `sql_db_manipulation` to interact with Database and
mainpulate the database as per the user requirement.
Check user query for INSERT, UPDATE or DELETE operation, based on that perform the sql query.
Wrapped column names: All column names should be wrapped in square brackets [] as delimiters.
Use GETDATE() to get the current date and time instead of DATETIME('now').
Consider the column name carefully from the PAY_transaction_settingallow table where the modification is done.
Take 'euid' in PAY_transaction_settingallow table from employee_details table which is 'employeeEuid' in employee_details table based on the matching 'employeeName' or 'employeeId' from employee_details table and stores it as 'euid' in the PAY_transaction_settingallow table. Similarly, to get the allowance ID ('alw_id') from the pay_mst_allowance table based on the matching allowance description ('alw_desc') or short name ('short_name'), check both 'alw_desc' and 'short_name'.
Pay attention to the column names in user query and sql query.
Perform JOIN operation to fetch euid and alw_id from respective tables not INNER JOIN.
Selected table: Specify PAY_transaction_settingallow as the table to update.
Employee and allowance selection: Use the WHERE clause to filter employees based on employeeName and allowances based on alw_desc.
Date handling:
'createDate' should be the date on which the values are inserted, current date, it can not be NULL.
'effect_date' should be the start date of month.
'to_date' should be the end date of month.
'updatedDate' should be the date on which values are updated and it should be included only when UPDATE keywoed is used i query.
UPDATE query should not change the 'createdDate'
Currency: Assume the amount to be in rupees.
If the users gives 'given by' in query enter that value in 'givenBy' column in PAY_transaction_settingallow table
Modify the 'givenBy' column only when given by is specified by user, if not dont take givenBy column.
Removed newlines: Write the query as a single string without newlines (\n).
Ensure the query executes efficiently and without errors.
If data or Values are already present in the table, dont again run the sql query.
If no modifcation is made in table dont not display the message.
If no rows are modifyed output its as "0 rows affected"
Only Output the message after execution of SQL-Query
If duplicate value is encountred with check_duplicate function Output the message like "The SQL query has been skipped due to duplicate data.".
"""),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
# Define the data model for SQLDBMANIPULATION tool
class SQLDBMANIPULATION(BaseModel):
user_query: str = Field(description='User question which will be translated to a Data Manipulation SQL Query and will be executed on the underlying database')
class SQLDBMANIPULATIONTool(BaseTool):
name = "sql_db_manipulation"
description = "Use this tool to convert and execute DML queries given the user question, Use GETDATE() to get the current date and time instead of DATETIME('now')"
args_schema: type[SQLDBMANIPULATION] = SQLDBMANIPULATION
sql_dml_chain: Runnable
def _run(
self, user_query: str
) -> str:
"""Use the tool."""
try:
if "The SQL query has been skipped due to duplicate data." not in user_query:
# Execute the SQL query
affected_rows = db._execute(user_query)
if affected_rows == 0:
# If no rows were affected, return the appropriate message
return "0 rows affected"
else:
# Otherwise, return the success message
return "The SQL query has been executed successfully."
else:
# If the query was skipped due to duplicate data, return the message
return "The SQL query has been skipped due to duplicate data."
except Exception as e:
# If an error occurs during execution
error_message = f"Error executing SQL query: {str(e)}"
if "duplicate data" not in error_message.lower():
# If the error is not due to duplicate data, return the specific error message
return error_message
elif "Invalid column name" in error_message:
# If the error is due to an invalid column name, provide a more informative error message
return "Error: Invalid column name used in the SQL query."
else:
# If the error is due to duplicate data, return the standard message
return "The SQL query has been skipped due to duplicate data."
tools = [SQLDBMANIPULATIONTool(sql_dml_chain=sql_dml_chain)]
llm_with_tools = ChatOpenAI().bind(functions=[format_tool_to_openai_function(t) for t in tools])
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_to_openai_function_messages(
x["intermediate_steps"]
),
}
| agent_prompt
| llm_with_tools
| OpenAIFunctionsAgentOutputParser()
)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
answer = agent_executor.invoke({"input": question})
return answer["output"]
result= chat5("insert personal allowance of 6000 to employee id TA******* for feb 2024, given by 3003")
print(result)
### Error Message and Stack Trace (if applicable)
> Entering new AgentExecutor chain...
Invoking: `sql_db_manipulation` with `{'user_query': "INSERT INTO PAY_transaction_settingallow (euid, alw_id, amount, effect_date, to_date, createDate, givenBy) SELECT ed.employeeEuid, pma.alw_id, 6000, '2024-02-01', '2024-02-29', GETDATE(), '3003' FROM employee_details ed JOIN pay_mst_allowance pma ON ed.employeeName = 'TA*******' AND (pma.alw_desc = 'personal allowance' OR pma.short_name = 'personal allowance') WHERE NOT EXISTS (SELECT 1 FROM PAY_transaction_settingallow WHERE euid = ed.employeeEuid AND alw_id = pma.alw_id AND effect_date = '2024-02-01' AND to_date = '2024-02-29' AND amount = 6000)"}`
The SQL query has been executed successfully.The personal allowance of 6000 has been inserted for employee ID TA******* for February 2024, given by 3003.
### Description
In the output of the AgentExecutor, the result is given as "The personal allowance of 6000 has been inserted for employee ID TA******* for February 2024, given by 3003.", but there is an error in the SQL query written there: it filters on the employee name instead of the employeeId. In that case it should output something like "query is not executed" or "0 rows affected" instead. The same thing happens in a lot of cases: even when no change has been made to the table, the output still says "has been inserted" or "has been updated".
I have written logic to handle this externally, but that doesn't seem to work, so how can I resolve this issue?
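A hedged sketch of one way to ground the tool's reply in the database's own row count (my suggestion, not the reporter's code; it bypasses `db._execute` and uses SQLAlchemy directly), so the agent has something factual to echo instead of inventing a success message:
```python
from sqlalchemy import text

def run_dml(engine, query: str) -> str:
    """Execute a DML statement and report the driver's real row count."""
    with engine.begin() as conn:          # commits on success, rolls back on error
        result = conn.execute(text(query))
        affected = result.rowcount        # -1 if the driver cannot report a count
    return "0 rows affected" if affected <= 0 else f"{affected} row(s) affected"
```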
### System Info
python: 3.11
langchain: latest
os: windows | Error in AgentExecutor output result | https://api.github.com/repos/langchain-ai/langchain/issues/20965/comments | 2 | 2024-04-27T08:31:32Z | 2024-08-09T16:08:28Z | https://github.com/langchain-ai/langchain/issues/20965 | 2,266,912,926 | 20,965 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai.embeddings import OpenAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter
from langchain_community.vectorstores import DocArrayInMemorySearch
from langchain_community.document_loaders import TextLoader
import os
import tempfile
import whisper
from pytube import YouTube
# Let's do this only if we haven't created the transcription file yet.
if not os.path.exists("transcription.txt"):
youtube = YouTube(YOUTUBE_VIDEO)
audio = youtube.streams.filter(only_audio=True).first()
# Let's load the base model. This is not the most accurate model but it's fast.
whisper_model = whisper.load_model("base")
with tempfile.TemporaryDirectory() as tmpdir:
file = audio.download(output_path=tmpdir)
transcription = whisper_model.transcribe(file, fp16=False)["text"].strip()
with open("transcription.txt", "w") as file:
file.write(transcription)
documents = TextLoader("transcription.txt").load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = DocArrayInMemorySearch.from_documents(docs, embeddings)
```
### Error Message and Stack Trace (if applicable)
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[27], [line 11](vscode-notebook-cell:?execution_count=27&line=11)
[7](vscode-notebook-cell:?execution_count=27&line=7) docs = text_splitter.split_documents(documents)
[9](vscode-notebook-cell:?execution_count=27&line=9) embeddings = OpenAIEmbeddings()
---> [11](vscode-notebook-cell:?execution_count=27&line=11) db = DocArrayInMemorySearch.from_documents(docs, embeddings)
File [c:\Users\astec\OneDrive\Documents\RAG_PROJECT\.venv\Lib\site-packages\langchain_core\vectorstores.py:550](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_core/vectorstores.py:550), in VectorStore.from_documents(cls, documents, embedding, **kwargs)
[548](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_core/vectorstores.py:548) texts = [d.page_content for d in documents]
[549](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_core/vectorstores.py:549) metadatas = [d.metadata for d in documents]
--> [550](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_core/vectorstores.py:550) return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File [c:\Users\astec\OneDrive\Documents\RAG_PROJECT\.venv\Lib\site-packages\langchain_community\vectorstores\docarray\in_memory.py:68](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:68), in DocArrayInMemorySearch.from_texts(cls, texts, embedding, metadatas, **kwargs)
[46](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:46) @classmethod
[47](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:47) def from_texts(
[48](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:48) cls,
(...)
[52](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:52) **kwargs: Any,
[53](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:53) ) -> DocArrayInMemorySearch:
[54](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:54) """Create an DocArrayInMemorySearch store and insert data.
[55](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:55)
[56](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:56) Args:
(...)
[66](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:66) DocArrayInMemorySearch Vector Store
[67](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:67) """
---> [68](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:68) store = cls.from_params(embedding, **kwargs)
[69](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:69) store.add_texts(texts=texts, metadatas=metadatas)
[70](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:70) return store
File [c:\Users\astec\OneDrive\Documents\RAG_PROJECT\.venv\Lib\site-packages\langchain_community\vectorstores\docarray\in_memory.py:39](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:39), in DocArrayInMemorySearch.from_params(cls, embedding, metric, **kwargs)
[21](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:21) @classmethod
[22](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:22) def from_params(
[23](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:23) cls,
(...)
[28](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:28) **kwargs: Any,
[29](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:29) ) -> DocArrayInMemorySearch:
[30](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:30) """Initialize DocArrayInMemorySearch store.
[31](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:31)
[32](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:32) Args:
(...)
[37](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:37) **kwargs: Other keyword arguments to be passed to the get_doc_cls method.
[38](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:38) """
---> [39](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:39) _check_docarray_import()
[40](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:40) from docarray.index import InMemoryExactNNIndex
[42](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/in_memory.py:42) doc_cls = cls._get_doc_cls(space=metric, **kwargs)
File [c:\Users\astec\OneDrive\Documents\RAG_PROJECT\.venv\Lib\site-packages\langchain_community\vectorstores\docarray\base.py:19](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/base.py:19), in _check_docarray_import()
[17](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/base.py:17) def _check_docarray_import() -> None:
[18](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/base.py:18) try:
---> [19](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/base.py:19) import docarray
[21](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/base.py:21) da_version = docarray.__version__.split(".")
[22](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/langchain_community/vectorstores/docarray/base.py:22) if int(da_version[0]) == 0 and int(da_version[1]) <= 31:
File [c:\Users\astec\OneDrive\Documents\RAG_PROJECT\.venv\Lib\site-packages\docarray\__init__.py:5](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/__init__.py:5)
[1](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/__init__.py:1) __version__ = '0.32.1'
[3](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/__init__.py:3) import logging
----> [5](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/__init__.py:5) from docarray.array import DocList, DocVec
[6](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/__init__.py:6) from docarray.base_doc.doc import BaseDoc
[7](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/__init__.py:7) from docarray.utils._internal.misc import _get_path_from_docarray_root_level
File [c:\Users\astec\OneDrive\Documents\RAG_PROJECT\.venv\Lib\site-packages\docarray\array\__init__.py:2](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/__init__.py:2)
[1](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/__init__.py:1) from docarray.array.any_array import AnyDocArray
----> [2](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/__init__.py:2) from docarray.array.doc_list.doc_list import DocList
[3](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/__init__.py:3) from docarray.array.doc_vec.doc_vec import DocVec
[5](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/__init__.py:5) __all__ = ['DocList', 'DocVec', 'AnyDocArray']
File [c:\Users\astec\OneDrive\Documents\RAG_PROJECT\.venv\Lib\site-packages\docarray\array\doc_list\doc_list.py:44](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/doc_list/doc_list.py:44)
[36](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/doc_list/doc_list.py:36) T = TypeVar('T', bound='DocList')
[37](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/doc_list/doc_list.py:37) T_doc = TypeVar('T_doc', bound=BaseDoc)
[40](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/doc_list/doc_list.py:40) class DocList(
[41](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/doc_list/doc_list.py:41) ListAdvancedIndexing[T_doc],
[42](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/doc_list/doc_list.py:42) PushPullMixin,
[43](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/doc_list/doc_list.py:43) IOMixinArray,
---> [44](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/doc_list/doc_list.py:44) AnyDocArray[T_doc],
[45](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/doc_list/doc_list.py:45) ):
[46](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/doc_list/doc_list.py:46) """
[47](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/doc_list/doc_list.py:47) DocList is a container of Documents.
[48](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/doc_list/doc_list.py:48)
(...)
[114](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/doc_list/doc_list.py:114)
[115](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/doc_list/doc_list.py:115) """
[117](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/doc_list/doc_list.py:117) doc_type: Type[BaseDoc] = AnyDoc
File [c:\Users\astec\OneDrive\Documents\RAG_PROJECT\.venv\Lib\site-packages\docarray\array\any_array.py:46](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/any_array.py:46), in AnyDocArray.__class_getitem__(cls, item)
[43](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/any_array.py:43) @classmethod
[44](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/any_array.py:44) def __class_getitem__(cls, item: Union[Type[BaseDoc], TypeVar, str]):
[45](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/any_array.py:45) if not isinstance(item, type):
---> [46](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/any_array.py:46) return Generic.__class_getitem__.__func__(cls, item) # type: ignore
[47](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/any_array.py:47) # this do nothing that checking that item is valid type var or str
[48](file:///C:/Users/astec/OneDrive/Documents/RAG_PROJECT/.venv/Lib/site-packages/docarray/array/any_array.py:48) if not issubclass(item, BaseDoc):
AttributeError: 'builtin_function_or_method' object has no attribute '__func__'
```
### Description
I'm trying to use LangChain's DocArrayInMemorySearch to create a vector database for my transcription text file. I've written the code exactly as it is shown in the LangChain documentation, but it does not work.
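A hedged observation (inferred from the traceback, not confirmed in the report): the failure happens inside docarray 0.32.1 running on Python 3.12, so trying a newer docarray release, for example `pip install -U "docarray>=0.40.0"`, or a Python 3.11 environment, is a reasonable first step.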
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.12.3 (tags/v3.12.3:f6650f9, Apr 9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.46
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.51
> langchain_openai: 0.1.3
> langchain_pinecone: 0.1.0
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Langchain DocArrayInMemorySearch not working | https://api.github.com/repos/langchain-ai/langchain/issues/20957/comments | 1 | 2024-04-26T21:56:36Z | 2024-05-15T21:17:11Z | https://github.com/langchain-ai/langchain/issues/20957 | 2,266,553,770 | 20,957 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
This is a problem encountered while checking a piece of code before committing it.
### Error Message and Stack Trace (if applicable)
Before that, I created a new virtual environment, cloned the langchain repository with git, and installed all dependencies with `pip install -e .`.
### Description
This is a problem I encountered while checking a piece of code before committing it.
Before that, I created a new virtual environment, cloned the langchain repository with git, and installed all dependencies with `pip install -e .`.
An error occurred while executing `make lint_diff`:
<img width="436" alt="image" src="https://github.com/langchain-ai/langchain/assets/164149097/8faf80fe-f743-4666-9bbf-5367872126af">
<img width="577" alt="image" src="https://github.com/langchain-ai/langchain/assets/164149097/4623df19-0c82-45c9-8a31-4e62420c06ff">
<img width="408" alt="image" src="https://github.com/langchain-ai/langchain/assets/164149097/42a2857d-15b2-4d6d-b6a9-e7f5fbba36b9">
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:41 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T8103
> Python Version: 3.11.9 (main, Apr 19 2024, 11:44:45) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.46
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.51
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Command not found: ruff | https://api.github.com/repos/langchain-ai/langchain/issues/20934/comments | 2 | 2024-04-26T15:24:18Z | 2024-04-30T19:46:13Z | https://github.com/langchain-ai/langchain/issues/20934 | 2,266,018,752 | 20,934 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
retrieve_chain = self.cat_query_prompt | self.retriever_llm | StrOutputParser()
json_s = retrieve_chain.invoke({"question": query,"schema":self.cat_schema_json, "brand_list":list(df['brand_name'].unique()), "few_shot":self.cat_schema_fewshot})
```
### Error Message and Stack Trace (if applicable)
File "/workspace/selfquery_utils.py", line 151, in get_categorical_filters
json_s = retrieve_chain.invoke({"question": query,"schema":self.cat_schema_json, "brand_list":list(df['brand_name'].unique()), "few_shot":self.cat_schema_fewshot})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2499, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 276, in invoke
self.generate_prompt(
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 633, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 803, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 670, in _generate_helper
raise e
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 657, in _generate_helper
self._generate(
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/langchain_google_vertexai/llms.py", line 223, in _generate
res = _completion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/langchain_google_vertexai/llms.py", line 72, in _completion_with_retry
with telemetry.tool_context_manager(llm._user_agent):
File "/layers/google.python.runtime/python/lib/python3.11/contextlib.py", line 144, in __exit__
next(self.gen)
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/google/cloud/aiplatform/telemetry.py", line 48, in tool_context_manager
_pop_tool_name(tool_name)
File "/layers/google.python.pip/pip/lib/python3.11/site-packages/google/cloud/aiplatform/telemetry.py", line 57, in _pop_tool_name
raise RuntimeError(
RuntimeError: Tool context error detected. This can occur due to parallelization.
### Description
The chain generates a json that is goint to be used as a filter for a vector database. I've been using this chain for months and its the first time that I get this error. I tried to replicate it but nothing happened.
### System Info
google-cloud-discoveryengine==0.11.2
google-cloud-aiplatform
langchain
langchain-core
langchain-experimental
langchainplus-sdk
langchain-google-genai
ipywidgets==7.7.2
pandas==2.0.3
google-cloud-bigquery
db-dtypes
langchain-google-vertexai
shortuuid
google-cloud-storage
redis | Getting "RuntimeError: Tool context error detected. This can occur due to parallelization." while invoking a chain using langchain_google_vertexai | https://api.github.com/repos/langchain-ai/langchain/issues/20929/comments | 8 | 2024-04-26T13:23:22Z | 2024-06-10T15:37:33Z | https://github.com/langchain-ai/langchain/issues/20929 | 2,265,784,123 | 20,929 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_core.prompts import ChatPromptTemplate
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline

prompt = ChatPromptTemplate.from_template(prompt)  # `prompt` here is the template string
model = HuggingFacePipeline(pipeline=LLM)          # `LLM` is a pre-built transformers pipeline
chain = prompt | model
results = chain.batch(
df.to_dict(
orient="records"
),
config={
"max_concurrency": 2
},
)
for response in results:
print(response.content)
```
### Error Message and Stack Trace (if applicable)
No exception is raised; the returned str does contain the response (prefixed with the prompt, see description).
### Description
The chain.batch method does not return AIMessage objects as detailed in the documentation; instead it returns a list of strings. Each string contains the prompt + response, making it up to the user to separate the two.
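A hedged workaround sketch (an assumption about the setup, since the construction of `LLM` is not shown in the report): if `LLM` is a transformers text-generation pipeline, asking it not to echo the prompt removes the need to strip it from each batch result:
```python
from transformers import pipeline

# Model id is a placeholder. return_full_text=False makes the pipeline return only
# the newly generated text instead of prompt + response.
LLM = pipeline(
    "text-generation",
    model="your-model-id",
    return_full_text=False,
)
```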
### System Info
langchain 0.1.14
python 3.10.5
system: ubuntu 22 | chain called through the batch method return list[str] instead of list[AIMessage] when using the HuggingFacePipeline | https://api.github.com/repos/langchain-ai/langchain/issues/20926/comments | 0 | 2024-04-26T11:38:04Z | 2024-08-02T16:08:07Z | https://github.com/langchain-ai/langchain/issues/20926 | 2,265,599,031 | 20,926 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.tools import DuckDuckGoSearchResults
from langchain import hub
from langchain.agents import create_openai_functions_agent
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langgraph.prebuilt import create_agent_executor
tools = [DuckDuckGoSearchResults(max_results=3)]
# llm = OllamaFunctions(model="mixtral")
llm = OllamaFunctions(model="llama3:latest")
prompt = hub.pull("hwchase17/openai-functions-agent")
# Construct the OpenAI Functions agent
agent_runnable = create_openai_functions_agent(llm, tools, prompt)
agent_executor = create_agent_executor(agent_runnable, tools)
agent_executor.invoke(
{"input": "who is the winnner of the us open", "chat_history": []}
)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[1], line 21
     18 agent_runnable = create_openai_functions_agent(llm, tools, prompt)
     20 agent_executor = create_agent_executor(agent_runnable, tools)
---> 21 agent_executor.invoke(
     22     {"input": "who is the winner of the us open", "chat_history": []}
     23 )
...
File ~/Development/HeadingFWD/langchain-playground/.venv/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py:160, in ChatOllama._create_chat_stream(self, messages, stop, **kwargs)
    152 def _create_chat_stream(
    153     self,
    154     messages: List[BaseMessage],
    155     stop: Optional[List[str]] = None,
    156     **kwargs: Any,
    157 ) -> Iterator[str]:
    158     payload = {
    159         "model": self.model,
--> 160         "messages": self._convert_messages_to_ollama_messages(messages),
    161     }
    162     yield from self._create_stream(
    163         payload=payload, stop=stop, api_url=f"{self.base_url}/api/chat", **kwargs
    164     )

File ~/Development/HeadingFWD/langchain-playground/.venv/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py:112, in ChatOllama._convert_messages_to_ollama_messages(self, messages)
    110     role = "system"
    111 else:
--> 112     raise ValueError("Received unsupported message type for Ollama.")
    114 content = ""
    115 images = []

ValueError: Received unsupported message type for Ollama.
```
### Description
I'm trying to get function calling working with `OllamaFunctions`. I tried this with several different models (mistral, llama3, dolphincoder, mixtral:8x22b), and it always fails with:
```
ValueError: Received unsupported message type for Ollama.
```
I've found these issues as well that might be related:
https://github.com/langchain-ai/langchain/issues/14360
https://github.com/langchain-ai/langchain/pull/20881
I found that OllamaFunctions produces a `FunctionMessage`, but `_convert_messages_to_ollama_messages` in `ChatOllama` doesn't recognize that message type, so the function call never goes through.
https://github.com/langchain-ai/langchain/blob/4c437ebb9c2fb532ce655ac1e0c354c82a715df7/libs/community/langchain_community/chat_models/ollama.py#L99
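For what it's worth, here is a minimal sketch of the kind of branch that seems to be missing, assuming Ollama's chat endpoint accepts a `tool`-style role for function results (which I have not verified):
```python
from langchain_core.messages import (
    AIMessage,
    BaseMessage,
    FunctionMessage,
    HumanMessage,
    SystemMessage,
)


def role_for(message: BaseMessage) -> str:
    """Hypothetical role mapping including FunctionMessage; not the actual fix."""
    if isinstance(message, HumanMessage):
        return "user"
    if isinstance(message, AIMessage):
        return "assistant"
    if isinstance(message, SystemMessage):
        return "system"
    if isinstance(message, FunctionMessage):
        return "tool"  # assumption: Ollama accepts a tool-style role for function results
    raise ValueError("Received unsupported message type for Ollama.")
```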
Any help would be greatly appreciated.
### System Info
```
python3 -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:37 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6031
> Python Version: 3.11.8 (v3.11.8:db85d51d3e, Feb 6 2024, 18:02:37) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.1.46
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.49
> langchain_experimental: 0.0.57
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
> langgraph: 0.0.39
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve
``` | OllamaFunctions does not work - Received unsupported message type for Ollama | https://api.github.com/repos/langchain-ai/langchain/issues/20924/comments | 4 | 2024-04-26T10:02:21Z | 2024-06-13T09:55:39Z | https://github.com/langchain-ai/langchain/issues/20924 | 2,265,435,854 | 20,924 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-3.5-turbo-16k", temperature=0)
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You are a helpful assistant that translates {input_language} to {output_language}.",
),
("human", "{input}"),
]
)
agent = prompt | llm
response = ""
async for token in agent.astream({
"input_language": "English",
"output_language": "German",
"input": "I love programming.",
}):
response += token.content
```
### Error Message and Stack Trace (if applicable)
File "/home/rcaulk/projects/flowdapt_new_schema/chat-service/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2900, in astream
async for chunk in self.atransform(input_aiter(), config, **kwargs):
File "/home/rcaulk/projects/flowdapt_new_schema/chat-service/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2883, in atransform
async for chunk in self._atransform_stream_with_config(
File "/home/rcaulk/projects/flowdapt_new_schema/chat-service/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1980, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rcaulk/projects/flowdapt_new_schema/chat-service/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2853, in _atransform
async for output in final_pipeline:
File "/home/rcaulk/projects/flowdapt_new_schema/chat-service/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1334, in atransform
async for output in self.astream(final, config, **kwargs):
File "/home/rcaulk/projects/flowdapt_new_schema/chat-service/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 319, in astream
raise e
File "/home/rcaulk/projects/flowdapt_new_schema/chat-service/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 297, in astream
async for chunk in self._astream(
File "/home/rcaulk/projects/flowdapt_new_schema/chat-service/.venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 560, in _astream
async with response:
'async_generator' object does not support the asynchronous context manager protocol
### Description
Consuming the async stream (`agent.astream(...)`) fails with `'async_generator' object does not support the asynchronous context manager protocol` on langchain-openai>=0.1.2.
Everything is functional in langchain-openai<=0.1.1.
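A stripped-down sketch that should hit the same `_astream` code path without the prompt template (same `llm` as above):
```python
# Minimal repro sketch: stream from the chat model directly.
async def stream_direct() -> None:
    async for chunk in llm.astream("Say hello"):
        print(chunk.content, end="", flush=True)
```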
### System Info
langchain-openai==0.1.2+
Ubuntu 22.04
Python 3.11.4
| [BUG] langchain-openai 0.1.2+ breaks async generation | https://api.github.com/repos/langchain-ai/langchain/issues/20923/comments | 10 | 2024-04-26T09:42:50Z | 2024-06-23T17:20:22Z | https://github.com/langchain-ai/langchain/issues/20923 | 2,265,390,401 | 20,923 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
(venv)LeideMacBook-Pro community % poetry run pytest tests/integration_tests/document_loaders/test_recursive_url_loader.py
================================================================================ test session starts ================================================================================
platform darwin -- Python 3.11.4, pytest-7.4.4, pluggy-1.4.0 -- /Users/zhanglei/Work/github/langchain/venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/zhanglei/Work/github/langchain/libs/community
configfile: pyproject.toml
plugins: syrupy-4.6.1, asyncio-0.20.3, cov-4.1.0, vcr-1.0.2, mock-3.12.0, anyio-3.7.1, dotenv-0.5.2, requests-mock-1.11.0, socket-0.6.0
asyncio: mode=Mode.AUTO
collected 6 items
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader FAILED [ 16%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic PASSED [ 33%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader PASSED [ 50%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent PASSED [ 66%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_loading_invalid_url PASSED [ 83%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties PASSED [100%]
===================================================================================== FAILURES ======================================================================================
__________________________________________________________________________ test_async_recursive_url_loader __________________________________________________________________________
def test_async_recursive_url_loader() -> None:
url = "https://docs.python.org/3.9/"
loader = RecursiveUrlLoader(
url,
extractor=lambda _: "placeholder",
use_async=True,
max_depth=3,
timeout=None,
check_response_status=True,
)
docs = loader.load()
> assert len(docs) == 513
E AssertionError: assert 512 == 513
E + where 512 = len([Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/', 'content_type': 'text/html', 'title': '3.9.18 Documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/search.html', 'content_type': 'text/html', 'title': 'Search — Python 3.9.18 documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/index.html', 'content_type': 'text/html', 'title': '3.9.18 Documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/library/index.html', 'content_type': 'text/html', 'title': 'The Python Standard Library — Python 3.9.18 documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/library/xml.sax.reader.html', 'content_type': 'text/html', 'title': 'xml.sax.xmlreader — Interface for XML parsers — Python 3.9.18 documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/library/tkinter.colorchooser.html', 'content_type': 'text/html', 'title': 'tkinter.colorchooser — Color choosing dialog — Python 3.9.18 documentation', 'language': None}), ...])
tests/integration_tests/document_loaders/test_recursive_url_loader.py:15: AssertionError
================================================================================= warnings summary ==================================================================================
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties
/Users/zhanglei/.pyenv/versions/3.11.4/lib/python3.11/html/parser.py:170: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features="xml"` into the BeautifulSoup constructor.
k = self.parse_starttag(i)
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================================================ slowest 5 durations ================================================================================
48.34s call tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic
31.59s call tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent
30.15s call tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties
25.80s call tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader
15.13s call tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader
============================================================================== short test summary info ==============================================================================
FAILED tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader - AssertionError: assert 512 == 513
================================================================ 1 failed, 5 passed, 5 warnings in 151.33s (0:02:31) ================================================================
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The integration test `test_async_recursive_url_loader` in "test_recursive_url_loader.py" is failing: the loader now returns 512 documents, while the test asserts a hard-coded count of 513.
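A possible direction (just a sketch, not necessarily the intended fix) would be to stop asserting an exact page count, since the crawled site can change over time:
```python
from langchain_community.document_loaders import RecursiveUrlLoader


def test_async_recursive_url_loader() -> None:
    url = "https://docs.python.org/3.9/"
    loader = RecursiveUrlLoader(
        url,
        extractor=lambda _: "placeholder",
        use_async=True,
        max_depth=3,
        timeout=None,
        check_response_status=True,
    )
    docs = loader.load()
    # Sketch: assert a sane lower bound instead of a hard-coded, site-dependent 513.
    assert len(docs) > 400
    assert docs[0].page_content == "placeholder"
```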
### System Info
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.40
langchain-experimental==0.0.56
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
langchainhub==0.1.15
macOS 14.3.1
Python 3.11.4 | The integration tests for "test_async_recursive_url_loader.py" are failing | https://api.github.com/repos/langchain-ai/langchain/issues/20919/comments | 0 | 2024-04-26T08:52:32Z | 2024-07-09T08:15:46Z | https://github.com/langchain-ai/langchain/issues/20919 | 2,265,294,100 | 20,919 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain import hub
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_pinecone import PineconeVectorStore

# tracker comes from the Rasa custom action; llm, embeddings and format_docs are defined in setup code
user_question = tracker.latest_message['text']
# metadata = tracker.latest_message.get('metadata', {})
text_field = "text"

vectorstore = PineconeVectorStore(
    index_name="rasarag", embeddings, text_field
)
response = rag_chain(vectorstore, user_question)


def rag_chain(vectorstore, user_question):
    retriever = vectorstore.as_retriever()
    prompt = hub.pull("rlm/rag-prompt")
    rag_chain = (
        {"context": retriever | format_docs, "question": RunnablePassthrough()}
        | prompt
        | llm
        | StrOutputParser()
    )
    return rag_chain.invoke(user_question)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "handle_request", line 83, in handle_request
)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\RasaGPT\lib\site-packages\rasa_sdk\endpoint.py", line 113, in webhook
result = await executor.run(action_call)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\RasaGPT\lib\site-packages\rasa_sdk\executor.py", line 399, in run
action(dispatcher, tracker, domain)
File "C:\Users\prakotian\Desktop\Projects\GenAI Projects\RasaRAGGPT\actions\actions.py", line 30, in run
response = rag_chain(vectorstore,user_question)
File "C:\Users\prakotian\Desktop\Projects\GenAI Projects\RasaRAGGPT\actions\llm.py", line 62, in rag_chain
return rag_chain.invoke(user_question)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\RasaGPT\lib\site-packages\langchain_core\runnables\base.py", line 2499, in invoke
input = step.invoke(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\RasaGPT\lib\site-packages\langchain_core\runnables\base.py", line 3144, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\RasaGPT\lib\site-packages\langchain_core\runnables\base.py", line 3144, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\RasaGPT\lib\concurrent\futures\_base.py", line 458, in result
return self.__get_result()
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\RasaGPT\lib\concurrent\futures\_base.py", line 403, in __get_result
raise self._exception
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\RasaGPT\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\RasaGPT\lib\site-packages\langchain_core\runnables\base.py", line 2499, in invoke
input = step.invoke(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\RasaGPT\lib\site-packages\langchain_core\retrievers.py", line 193, in invoke
return self.get_relevant_documents(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\RasaGPT\lib\site-packages\langchain_core\retrievers.py", line 321, in get_relevant_documents
raise e
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\RasaGPT\lib\site-packages\langchain_core\retrievers.py", line 314, in get_relevant_documents
result = self._get_relevant_documents(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\RasaGPT\lib\site-packages\langchain_core\vectorstores.py", line 696, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\RasaGPT\lib\site-packages\langchain_pinecone\vectorstores.py", line 247, in similarity_search
docs_and_scores = self.similarity_search_with_score(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\RasaGPT\lib\site-packages\langchain_pinecone\vectorstores.py", line 192, in similarity_search_with_score
return self.similarity_search_by_vector_with_score(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\RasaGPT\lib\site-packages\langchain_pinecone\vectorstores.py", line 209, in similarity_search_by_vector_with_score
results = self._index.query(
AttributeError: 'str' object has no attribute 'query'
```
### Description
I am trying to use Rasa custom actions with LangChain, but the retrieval call fails with the error above.
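For reference, this is the constructor call I suspect should be used instead, with keyword arguments only (a sketch; argument names as I understand them from langchain-pinecone):
```python
# Sketch: passing the index name positionally appears to land in the `index` parameter,
# which would explain why self._index ends up as a plain string.
vectorstore = PineconeVectorStore(
    index_name="rasarag",
    embedding=embeddings,
    text_key="text",
)
```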
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.10.14 | packaged by Anaconda, Inc. | (main, Mar 21 2024, 16:20:14) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.45
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.50
> langchain_chroma: 0.1.0
> langchain_cohere: 0.1.4
> langchain_groq: 0.1.3
> langchain_openai: 0.1.3
> langchain_pinecone: 0.1.0
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15 | Pinecone RAG not working. | https://api.github.com/repos/langchain-ai/langchain/issues/20918/comments | 0 | 2024-04-26T08:26:29Z | 2024-08-02T16:08:02Z | https://github.com/langchain-ai/langchain/issues/20918 | 2,265,244,855 | 20,918 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.chat_models import BedrockChat
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langfuse.callback import CallbackHandler

langfuse_handler = CallbackHandler(
secret_key="sk-lf-...",
public_key="pk-lf-...",
host="host.."
...
)
prompt_template = ChatPromptTemplate.from_template(
template=prompt,
)
llm = BedrockChat(
credentials_profile_name=profile,
model_id=model_id,
client=client,
model_kwargs=model_kwargs,
verbose=verbose,
callbacks=[langfuse_handler],
)
output_parser = StrOutputParser()
chain = prompt_template | llm | output_parser
output = chain.invoke(
input={"color": "red"},
config={"callbacks": [langfuse_handler]},
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I've encountered a persistent issue with the latest versions of langchain, langchain-core, and langchain-community. The problem is that Langfuse fails to recognize the model used by `BedrockChat`, which results in the following warning message:
`WARNING:langfuse:Langfuse was not able to parse the LLM model. The LLM call will be recorded without model name. Please create an issue so we can fix your integration.`
This issue prevents the correct tracing of model usage and pricing on Langfuse.
Interestingly, when I revert to older versions of the packages (langchain v0.1.13, langchain-community v0.0.31, and langchain-core v0.1.39), the warning no longer appears and Langfuse operates as expected.
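To see what model information actually reaches callbacks on the newer versions, a small diagnostic handler can be registered next to the Langfuse one. A sketch, assuming `on_chat_model_start` carries the payload Langfuse parses the model name from:
```python
from langchain_core.callbacks import BaseCallbackHandler


class DumpModelInfo(BaseCallbackHandler):
    """Diagnostic only: print what the callback layer receives for each chat model call."""

    def on_chat_model_start(self, serialized, messages, **kwargs):
        print("serialized kwargs:", serialized.get("kwargs", {}))
        print("invocation params:", kwargs.get("invocation_params"))
```
Adding `DumpModelInfo()` alongside `langfuse_handler` in `callbacks` should show whether the Bedrock `model_id` still reaches the callback after the upgrade.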
### System Info
langchain-core==0.1.46
langchain-community==0.0.34
langchain==0.1.16
| Langfuse was not able to parse the LLM model. | https://api.github.com/repos/langchain-ai/langchain/issues/20915/comments | 2 | 2024-04-26T07:39:01Z | 2024-08-08T16:06:45Z | https://github.com/langchain-ai/langchain/issues/20915 | 2,265,159,454 | 20,915 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Other values like `DATABRICKS_TOKEN` and `DATABRICKS_HOST` are set in env vars.
```python
from langchain_community.utilities.sql_database import SQLDatabase
db = SQLDatabase.from_databricks(
catalog="samples", schema="nyctaxi", warehouse_id="<redacted>"
)
result = db.run("SELECT 1")
print(result)
```
### Error Message and Stack Trace (if applicable)
No error message; the call just hangs indefinitely, and I can't find a way to introspect what's going on.
### Description
I'm trying to create an instance of `SQLDatabase` from Databricks configs and would expect either an error or some kind of output. I've pared the code down from a more expansive agent workflow to just this, and pinned the problem down to the `from_databricks` call, which simply hangs. I'd really appreciate any insight. I don't see anybody else reporting this, but I also can't find anything wrong in my configs after hours of troubleshooting. Thanks in advance 🙏🏻.
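To narrow down whether the hang is in LangChain or below it, the same warehouse can be queried through `databricks-sql-connector` directly, as a sketch with placeholder host/path/token values (my real values come from DATABRICKS_HOST / DATABRICKS_TOKEN):
```python
from databricks import sql

# Placeholder values; if this also hangs, the problem is below LangChain.
with sql.connect(
    server_hostname="<workspace-host>",
    http_path="/sql/1.0/warehouses/<redacted>",
    access_token="<personal-access-token>",
) as conn:
    with conn.cursor() as cursor:
        cursor.execute("SELECT 1")
        print(cursor.fetchall())
```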
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:19:
22 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T8112
> Python Version: 3.11.8 (main, Apr 7 2024, 16:30:13) [Clang
15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.1.46
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.51
> langchain_experimental: 0.0.57
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | `SQLDatabase.from_databricks` seems to hang indefinitely | https://api.github.com/repos/langchain-ai/langchain/issues/20910/comments | 1 | 2024-04-25T21:58:34Z | 2024-05-15T06:13:15Z | https://github.com/langchain-ai/langchain/issues/20910 | 2,264,565,012 | 20,910 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.document_loaders import ArxivLoader

docs = ArxivLoader(query="Advanced regression models for real estate price prediction", load_max_docs=100, load_all_available_meta=True).load()
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
HTTPError                                 Traceback (most recent call last)
Cell In[9], line 7
      1 from langchain_community.document_loaders import ArxivLoader
      3 #ArxivLoader
      4 #HTTPError: HTTP Error 404: Not Found
      5
      6 # Load documents using ArxivLoader with the concise_query
----> 7 docs = ArxivLoader(query = "Advanced regression models for real estate price prediction", load_max_docs=100, load_all_available_meta=True).load()

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain_community/document_loaders/arxiv.py:27, in ArxivLoader.load(self)
     26 def load(self) -> List[Document]:
---> 27     return self.client.load(self.query)

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain_community/utilities/arxiv.py:208, in ArxivAPIWrapper.load(self, query)
    206 for result in results:
    207     try:
--> 208         doc_file_name: str = result.download_pdf()
    209         with fitz.open(doc_file_name) as doc_file:
    210             text: str = "".join(page.get_text() for page in doc_file)

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/arxiv/__init__.py:214, in Result.download_pdf(self, dirpath, filename)
    212 filename = self._get_default_filename()
    213 path = os.path.join(dirpath, filename)
--> 214 written_path, _ = urlretrieve(self.pdf_url, path)
    215 return written_path

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/urllib/request.py:241, in urlretrieve(url, filename, reporthook, data)
    224 """
    225 Retrieve a URL into a temporary location on disk.
    226
    (...)
    237 data file as well as the resulting HTTPMessage object.
    238 """
    239 url_type, path = _splittype(url)
--> 241 with contextlib.closing(urlopen(url, data)) as fp:
    242     headers = fp.info()
    244     # Just return the local path and the "headers" for file://
    245     # URLs. No sense in performing a copy unless requested.

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/urllib/request.py:216, in urlopen(url, data, timeout, cafile, capath, cadefault, context)
    214 else:
    215     opener = _opener
--> 216 return opener.open(url, data, timeout)

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/urllib/request.py:525, in OpenerDirector.open(self, fullurl, data, timeout)
    523 for processor in self.process_response.get(protocol, []):
    524     meth = getattr(processor, meth_name)
--> 525     response = meth(req, response)
    527 return response

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/urllib/request.py:634, in HTTPErrorProcessor.http_response(self, request, response)
    631 # According to RFC 2616, "2xx" code indicates that the client's
    632 # request was successfully received, understood, and accepted.
    633 if not (200 <= code < 300):
--> 634     response = self.parent.error(
    635         'http', request, response, code, msg, hdrs)
    637 return response

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/urllib/request.py:563, in OpenerDirector.error(self, proto, *args)
    561 if http_err:
    562     args = (dict, 'default', 'http_error_default') + orig_args
--> 563     return self._call_chain(*args)

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/urllib/request.py:496, in OpenerDirector._call_chain(self, chain, kind, meth_name, *args)
    494 for handler in handlers:
    495     func = getattr(handler, meth_name)
--> 496     result = func(*args)
    497     if result is not None:
    498         return result

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/urllib/request.py:643, in HTTPDefaultErrorHandler.http_error_default(self, req, fp, code, msg, hdrs)
    642 def http_error_default(self, req, fp, code, msg, hdrs):
--> 643     raise HTTPError(req.full_url, code, msg, hdrs, fp)

HTTPError: HTTP Error 404: Not Found
```
### Description
I am trying to use ArxivLoader from langchain_community.document_loaders to load 100 academic articles with metadata from the arXiv database for certain keywords, but I am getting this error. I have run the same code successfully before; this time it fails with a 404 error while downloading one of the PDFs.
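To narrow it down, the PDF URLs that the query resolves to can be checked one by one. A sketch using the `arxiv` package directly, assuming the 1.x-style API that appears in the traceback:
```python
import arxiv
import urllib.error
import urllib.request

# Sketch: find which result's PDF link returns the 404.
search = arxiv.Search(query="Advanced regression models for real estate price prediction", max_results=100)
for result in search.results():
    try:
        urllib.request.urlopen(result.pdf_url).close()
    except urllib.error.HTTPError as e:
        print(e.code, result.pdf_url)
```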
### System Info
MacOS, Python 3.10.13
pip freeze | grep langchain
langchain==0.1.16
langchain-community==0.0.34
langchain-core==0.1.46
langchain-openai==0.0.6
langchain-text-splitters==0.0.1
langchainhub==0.1.14 | Getting "HTTP Error 404: Not Found" Error Using ArxivLoader | https://api.github.com/repos/langchain-ai/langchain/issues/20909/comments | 2 | 2024-04-25T21:06:55Z | 2024-08-06T16:08:37Z | https://github.com/langchain-ai/langchain/issues/20909 | 2,264,497,053 | 20,909 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.text_splitter import CharacterTextSplitter
splitter = CharacterTextSplitter(
chunk_size=10,
chunk_overlap=0,
separator=". ",
keep_separator=True,
)
for d in splitter.create_documents(["Text 1. Text 2. Text 3. Text 4."]):
print(d.page_content)
print("---")
```
### Error Message and Stack Trace (if applicable)
```
Text 1
---
. Text 2
---
. Text 3
---
. Text 4.
---
```
### Description
I'm trying to split text by sentence, while keeping end-of-sentence punctuation. Instead of putting the punctuation back at the end of the corresponding chunk, the library adds it to the front of the following chunk.
This problem is quite critical if the output is used for text-to-speech input.
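From a quick look, the cause seems to be how `_split_text_with_regex` re-attaches separators when `keep_separator=True`: each separator is concatenated with the split that *follows* it. Roughly (a simplified sketch, not the exact library code):
```python
import re

text = "Text 1. Text 2. Text 3. Text 4."
parts = re.split(r"(\. )", text)  # ['Text 1', '. ', 'Text 2', '. ', 'Text 3', '. ', 'Text 4.']
chunks = [parts[0]] + [parts[i] + parts[i + 1] for i in range(1, len(parts), 2)]
print(chunks)  # ['Text 1', '. Text 2', '. Text 3', '. Text 4.'] (separator lands at the front)
```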
### System Info
```
langchain==0.1.16
langchain-community==0.0.34
langchain-core==0.1.46
langchain-text-splitters==0.0.1
```
MacOS, Python 3.10.13 | `CharacterTextSplitter` with `keep_separator=True` sets the separator to the beginning of each chunk instead of an end | https://api.github.com/repos/langchain-ai/langchain/issues/20908/comments | 1 | 2024-04-25T21:04:55Z | 2024-05-22T20:17:47Z | https://github.com/langchain-ai/langchain/issues/20908 | 2,264,494,166 | 20,908 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import duckdb as db
from langchain.vectorstores import DuckDB
from langchain_community.embeddings import HuggingFaceEmbeddings
embeddings= HuggingFaceEmbeddings(model_name= "avsolatorio/GIST-small-Embedding-v0")
example_text = embeddings.embed_query("this is example text")
# new database
conn = db.connect(database='data/example.db')
conn.sql("""
CREATE OR REPLACE TABLE t1 (
id VARCHAR PRIMARY KEY,
content VARCHAR,
embedding FLOAT[],
);
""")
conn.execute("""INSERT INTO t1 VALUES (
'id1',
'this is example text',
$1,
)""", [example_text])
vector_store = DuckDB(
connection= conn,
embedding= embeddings,
vector_key= 'embedding',
id_key = 'id',
text_key= 'content',
table_name = 't1'
)
vector_store.add_texts(['more example text', 'yet more again'])
# Results in --> BinderException: Binder Error: table t1 has 3 columns but 4 values were supplied
vector_store.similarity_search('more example text')
# Results in --> KeyError: 'metadata'
conn.sql("""
CREATE OR REPLACE TABLE t1 (
id VARCHAR PRIMARY KEY,
content VARCHAR,
embedding FLOAT[],
additional_field1 VARCHAR, -- not named metadata and works to add but not to search
);
""")
conn.execute("""INSERT INTO t1 VALUES (
'id1',
'this is example text',
$1,
'random thing',
)""", [example_text])
vector_store = DuckDB(
connection= conn,
embedding= embeddings,
vector_key= 'embedding',
id_key = 'id',
text_key= 'content',
table_name = 't1'
)
vector_store.add_texts(['more example text', 'yet more again'])
# Results in --> Works
vector_store.similarity_search('more example text')
# Results in --> KeyError: 'metadata'
conn.sql("""
CREATE OR REPLACE TABLE t1 (
id VARCHAR PRIMARY KEY,
content VARCHAR,
embedding FLOAT[],
metadata VARCHAR, --
);
""")
conn.execute("""INSERT INTO t1 VALUES (
'id1',
'this is example text',
$1,
'',
)""", [example_text])
vector_store = DuckDB(
connection= conn,
embedding= embeddings,
vector_key= 'embedding',
id_key = 'id',
text_key= 'content',
table_name = 't1'
)
vector_store.add_texts(['more example text', 'yet more again'])
# Results in --> works
vector_store.similarity_search('more example text')
# Results in --> works
conn.close()
```
### Error Message and Stack Trace (if applicable)
```
File index.pyx:196, in pandas._libs.index.IndexEngine.get_loc()
File pandas/_libs/hashtable_class_helper.pxi:7081, in pandas._libs.hashtable.PyObjectHashTable.get_item()
File pandas/_libs/hashtable_class_helper.pxi:7089, in pandas._libs.hashtable.PyObjectHashTable.get_item()

KeyError: 'metadata'

The above exception was the direct cause of the following exception:

KeyError                                  Traceback (most recent call last)
File /Users/user/Documents/project/task/run.py:1
----> 1 vector_store.similarity_search('more example text')

File ~/Documents/project Playground/.venv/lib/python3.12/site-packages/langchain_community/vectorstores/duckdb.py:198, in DuckDB.similarity_search(self, query, k, **kwargs)
    175 list_cosine_similarity = self.duckdb.FunctionExpression(
...
   3815 # InvalidIndexError. Otherwise we fall through and re-raise
   3816 # the TypeError.
   3817 self._check_indexing_error(key)

KeyError: 'metadata'

---------------------------------------------------------------------------
BinderException                           Traceback (most recent call last)
File /Users/user/Documents/project/task/run.py:24
      8 conn.execute("""INSERT INTO t1 VALUES (
      9     'id1',
     10     'this is example text',
     11     $1,
     12     )""", [example_text])
     15 vector_store = DuckDB(
     16     connection= conn,
     17     embedding= embeddings,
    (...)
     21     table_name = 't1'
     22 )
---> 24 vector_store.add_texts(['more example text', 'yet more again'])

File ~/Documents/project Playground/.venv/lib/python3.12/site-packages/langchain_community/vectorstores/duckdb.py:156, in DuckDB.add_texts(self, texts, metadatas, **kwargs)
    150 # Serialize metadata if present, else default to None
    151 metadata = (
    152     json.dumps(metadatas[idx])
    153     if metadatas and idx < len(metadatas)
    154     else None
    155 )
--> 156 self._connection.execute(
    157     f"INSERT INTO {self._table_name} VALUES (?,?,?,?)",
    158     [ids[idx], text, embedding, metadata],
    159 )
    160 return ids

BinderException: Binder Error: table t1 has 3 columns but 4 values were supplied
```
### Description
I'm trying to use the langchain.vectorstores `DuckDB` object to create a vector store for embeddings. I already had an existing table from a colleague, and I thought I could map its existing fields in the `DuckDB` initialization. Unfortunately, there is no argument for the `metadata` field, and if the table does *not* have a column named `metadata`, nothing works: adding texts and similarity search either raise an error that stops the program, or fail silently (which I was not able to reproduce with the minimal example above), so that new text never reaches the database or similarity search returns an empty list `[]`.
I only discovered the reason for the problem when I started from a blank database and let the DuckDB vectorstore object create the table itself. I realized then that it also creates a `metadata` column, which makes sense considering the pattern across other vectorstores.
I propose exposing this as a mapping argument for users who are adapting an existing table. That would at least make it obvious that a metadata column is expected, and would let the table be mapped if the column already exists under a different name. Additionally, creating the column when it is missing would also be an improvement, in my opinion.
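Something along these lines is what I have in mind (a purely hypothetical signature; `metadata_key` does not exist today):
```python
# Hypothetical: a metadata_key mapping argument mirroring id_key / text_key / vector_key.
vector_store = DuckDB(
    connection=conn,
    embedding=embeddings,
    vector_key="embedding",
    id_key="id",
    text_key="content",
    metadata_key="additional_field1",  # map an existing column, or create "metadata" when absent
    table_name="t1",
)
```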
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:54:51 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6030
> Python Version: 3.12.1 (main, Jan 25 2024, 14:02:10) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.44
> langchain: 0.1.16
> langchain_community: 0.0.33
> langsmith: 0.1.36
> langchain_openai: 0.1.1
> langchain_postgres: 0.0.3
> langchain_text_splitters: 0.0.1
> langgraph: 0.0.32
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | duckdb vector store missing metadata field during initialization causing failure when connecting to existing tables | https://api.github.com/repos/langchain-ai/langchain/issues/20906/comments | 0 | 2024-04-25T20:48:56Z | 2024-08-01T16:07:14Z | https://github.com/langchain-ai/langchain/issues/20906 | 2,264,470,541 | 20,906 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.pydantic_v1 import BaseModel
from langchain_experimental.tabular_synthetic_data.base import SyntheticDataGenerator
from langchain_experimental.tabular_synthetic_data.openai import create_openai_data_generator, OPENAI_TEMPLATE
from langchain_experimental.tabular_synthetic_data.prompts import SYNTHETIC_FEW_SHOT_SUFFIX, SYNTHETIC_FEW_SHOT_PREFIX
class MedicalBilling(BaseModel):
patient_id: int
patient_name: str
diagnosis_code: str
procedure_code: str
total_charge: float
insurance_claim_amount: float
examples = [
{"example": """Patient ID: 123456, Patient Name: John Doe, Diagnosis Code:
J20.9, Procedure Code: 99203, Total Charge: $500, Insurance Claim Amount: $350"""},
{"example": """Patient ID: 789012, Patient Name: Johnson Smith, Diagnosis
Code: M54.5, Procedure Code: 99213, Total Charge: $150, Insurance Claim Amount: $120"""},
{"example": """Patient ID: 345678, Patient Name: Emily Stone, Diagnosis Code:
E11.9, Procedure Code: 99214, Total Charge: $300, Insurance Claim Amount: $250"""},
]
OPENAI_TEMPLATE = PromptTemplate(input_variables=["example"], template="{example}")
prompt_template = FewShotPromptTemplate(
prefix=SYNTHETIC_FEW_SHOT_PREFIX,
examples=examples,
suffix=SYNTHETIC_FEW_SHOT_SUFFIX,
input_variables=["subject", "extra"],
example_prompt=OPENAI_TEMPLATE,
)
from langchain_community.llms import VLLMOpenAI
llm = VLLMOpenAI(
openai_api_key="EMPTY",
openai_api_base="http://localhost:8000/v1",
model_name="meta-llama/meta-llama-3-8B-Instruct",
)
synthetic_data_generator = create_openai_data_generator(
output_schema=MedicalBilling,
llm=llm,
prompt=prompt_template,
)
synthetic_results = synthetic_data_generator.generate(
subject="medical_billing",
extra="the name must be chosen at random. Make it something you wouldn't normally choose.",
runs=1,
)
```
### Error Message and Stack Trace (if applicable)
```
{
"name": "TypeError",
"message": "Completions.create() got an unexpected keyword argument 'functions'",
"stack": "---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[10], line 21
13 # os.environ['OPENAI_API_KEY'] = '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'
15 synthetic_data_generator = create_openai_data_generator(
16 output_schema=MedicalBilling,
17 llm=llm,
18 prompt=prompt_template,
19 )
---> 21 synthetic_results = synthetic_data_generator.generate(
22 subject=\"medical_billing\",
23 extra=\"the name must be chosen at random. Make it something you wouldn't normally choose.\",
24 runs=1,
25 )
File ~/home/.venv/lib/python3.10/site-packages/langchain_experimental/tabular_synthetic_data/base.py:96, in SyntheticDataGenerator.generate(self, subject, runs, *args, **kwargs)
91 raise ValueError(
92 \"llm_chain is none, either set either llm_chain or llm at generator \"
93 \"construction\"
94 )
95 for _ in range(runs):
---> 96 result = self.llm_chain.run(subject=subject, *args, **kwargs)
97 self.results.append(result)
98 self._update_examples(result)
File ~/home/.venv/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:148, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
146 warned = True
147 emit_warning()
--> 148 return wrapped(*args, **kwargs)
File ~/home/.venv/lib/python3.10/site-packages/langchain/chains/base.py:574, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
569 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
570 _output_key
571 ]
573 if kwargs and not args:
--> 574 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
575 _output_key
576 ]
578 if not kwargs and not args:
579 raise ValueError(
580 \"`run` supported with either positional arguments or keyword arguments,\"
581 \" but none were provided.\"
582 )
File ~/home/.venv/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:148, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
146 warned = True
147 emit_warning()
--> 148 return wrapped(*args, **kwargs)
File ~/home/.venv/lib/python3.10/site-packages/langchain/chains/base.py:378, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
346 \"\"\"Execute the chain.
347
348 Args:
(...)
369 `Chain.output_keys`.
370 \"\"\"
371 config = {
372 \"callbacks\": callbacks,
373 \"tags\": tags,
374 \"metadata\": metadata,
375 \"run_name\": run_name,
376 }
--> 378 return self.invoke(
379 inputs,
380 cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
381 return_only_outputs=return_only_outputs,
382 include_run_info=include_run_info,
383 )
File ~/home/.venv/lib/python3.10/site-packages/langchain/chains/base.py:163, in Chain.invoke(self, input, config, **kwargs)
161 except BaseException as e:
162 run_manager.on_chain_error(e)
--> 163 raise e
164 run_manager.on_chain_end(outputs)
166 if include_run_info:
File ~/home/.venv/lib/python3.10/site-packages/langchain/chains/base.py:153, in Chain.invoke(self, input, config, **kwargs)
150 try:
151 self._validate_inputs(inputs)
152 outputs = (
--> 153 self._call(inputs, run_manager=run_manager)
154 if new_arg_supported
155 else self._call(inputs)
156 )
158 final_outputs: Dict[str, Any] = self.prep_outputs(
159 inputs, outputs, return_only_outputs
160 )
161 except BaseException as e:
File ~/home/.venv/lib/python3.10/site-packages/langchain/chains/llm.py:103, in LLMChain._call(self, inputs, run_manager)
98 def _call(
99 self,
100 inputs: Dict[str, Any],
101 run_manager: Optional[CallbackManagerForChainRun] = None,
102 ) -> Dict[str, str]:
--> 103 response = self.generate([inputs], run_manager=run_manager)
104 return self.create_outputs(response)[0]
File ~/home/.venv/lib/python3.10/site-packages/langchain/chains/llm.py:115, in LLMChain.generate(self, input_list, run_manager)
113 callbacks = run_manager.get_child() if run_manager else None
114 if isinstance(self.llm, BaseLanguageModel):
--> 115 return self.llm.generate_prompt(
116 prompts,
117 stop,
118 callbacks=callbacks,
119 **self.llm_kwargs,
120 )
121 else:
122 results = self.llm.bind(stop=stop, **self.llm_kwargs).batch(
123 cast(List, prompts), {\"callbacks\": callbacks}
124 )
File ~/home/.venv/lib/python3.10/site-packages/langchain_core/language_models/llms.py:633, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
625 def generate_prompt(
626 self,
627 prompts: List[PromptValue],
(...)
630 **kwargs: Any,
631 ) -> LLMResult:
632 prompt_strings = [p.to_string() for p in prompts]
--> 633 return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File ~/home/.venv/lib/python3.10/site-packages/langchain_core/language_models/llms.py:803, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
788 if (self.cache is None and get_llm_cache() is None) or self.cache is False:
789 run_managers = [
790 callback_manager.on_llm_start(
791 dumpd(self),
(...)
801 )
802 ]
--> 803 output = self._generate_helper(
804 prompts, stop, run_managers, bool(new_arg_supported), **kwargs
805 )
806 return output
807 if len(missing_prompts) > 0:
File ~/home/.venv/lib/python3.10/site-packages/langchain_core/language_models/llms.py:670, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
668 for run_manager in run_managers:
669 run_manager.on_llm_error(e, response=LLMResult(generations=[]))
--> 670 raise e
671 flattened_outputs = output.flatten()
672 for manager, flattened_output in zip(run_managers, flattened_outputs):
File ~/home/.venv/lib/python3.10/site-packages/langchain_core/language_models/llms.py:657, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
647 def _generate_helper(
648 self,
649 prompts: List[str],
(...)
653 **kwargs: Any,
654 ) -> LLMResult:
655 try:
656 output = (
--> 657 self._generate(
658 prompts,
659 stop=stop,
660 # TODO: support multiple run managers
661 run_manager=run_managers[0] if run_managers else None,
662 **kwargs,
663 )
664 if new_arg_supported
665 else self._generate(prompts, stop=stop)
666 )
667 except BaseException as e:
668 for run_manager in run_managers:
File ~/home/.venv/lib/python3.10/site-packages/langchain_community/llms/openai.py:460, in BaseOpenAI._generate(self, prompts, stop, run_manager, **kwargs)
448 choices.append(
449 {
450 \"text\": generation.text,
(...)
457 }
458 )
459 else:
--> 460 response = completion_with_retry(
461 self, prompt=_prompts, run_manager=run_manager, **params
462 )
463 if not isinstance(response, dict):
464 # V1 client returns the response in an PyDantic object instead of
465 # dict. For the transition period, we deep convert it to dict.
466 response = response.dict()
File ~/home/.venv/lib/python3.10/site-packages/langchain_community/llms/openai.py:115, in completion_with_retry(llm, run_manager, **kwargs)
113 \"\"\"Use tenacity to retry the completion call.\"\"\"
114 if is_openai_v1():
--> 115 return llm.client.create(**kwargs)
117 retry_decorator = _create_retry_decorator(llm, run_manager=run_manager)
119 @retry_decorator
120 def _completion_with_retry(**kwargs: Any) -> Any:
File ~/home/.venv/lib/python3.10/site-packages/openai/_utils/_utils.py:275, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
273 msg = f\"Missing required argument: {quote(missing[0])}\"
274 raise TypeError(msg)
--> 275 return func(*args, **kwargs)
TypeError: Completions.create() got an unexpected keyword argument 'functions'"
}
```
### Description
The synthetic data generator only seems to work when `llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)`. It starts to fail as soon as I switch it to point to the vLLM API server, as shown in the code above.
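If it helps, the workaround I am experimenting with (rough sketch only; the output comes back as plain text rather than validated `MedicalBilling` objects) is to build the generator directly and skip the OpenAI function-calling wrapper, since that wrapper is what sends the unsupported `functions` argument to the vLLM completions endpoint:

```python
from langchain_experimental.tabular_synthetic_data.base import SyntheticDataGenerator

# Plain LLMChain under the hood, no function calling, so VLLMOpenAI's
# Completions.create() never receives the `functions` kwarg.
generator = SyntheticDataGenerator(template=prompt_template, llm=llm)

raw_rows = generator.generate(
    subject="medical_billing",
    extra="the name must be chosen at random. Make it something you wouldn't normally choose.",
    runs=1,
)
print(raw_rows)
```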
### System Info
**Platform:** Linux
**Python:** 3.10
**Package versions:**
```langchain==0.1.16
langchain-community==0.0.34
langchain-core==0.1.45
langchain-experimental==0.0.57
langchain-openai==0.0.5
langchain-text-splitters==0.0.1``` | TypeError: Completions.create() got an unexpected keyword argument 'functions' when running Synthetic data generator over vLLM | https://api.github.com/repos/langchain-ai/langchain/issues/20895/comments | 1 | 2024-04-25T16:38:32Z | 2024-08-04T16:08:01Z | https://github.com/langchain-ai/langchain/issues/20895 | 2,264,034,401 | 20,895 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code doesn't register the `cited_answer` citations object:
```python
import os
from operator import itemgetter
from typing import List, Optional
from uuid import UUID
from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings.ollama import OllamaEmbeddings
from langchain.llms.base import BaseLLM
from langchain.prompts import HumanMessagePromptTemplate, SystemMessagePromptTemplate
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import FlashrankRerank
from langchain.schema import format_document
from langchain_cohere import CohereRerank
from langchain_community.chat_models import ChatLiteLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, PromptTemplate
from langchain_core.pydantic_v1 import Field as FieldV1
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from logger import get_logger
from models import BrainSettings # Importing settings related to the 'brain'
from modules.brain.service.brain_service import BrainService
from modules.chat.service.chat_service import ChatService
from modules.prompt.service.get_prompt_to_use import get_prompt_to_use
from pydantic import BaseModel, ConfigDict
from pydantic_settings import BaseSettings
from supabase.client import Client, create_client
from vectorstore.supabase import CustomSupabaseVectorStore
logger = get_logger(__name__)
class cited_answer(BaseModel):
"""Answer the user question based only on the given sources, and cite the sources used."""
answer: str = FieldV1(
...,
description="The answer to the user question, which is based only on the given sources.",
)
citations: List[int] = FieldV1(
...,
description="The integer IDs of the SPECIFIC sources which justify the answer.",
)
```
### Error Message and Stack Trace (if applicable)
Here is the LangSmith trace:
<img width="1330" alt="image" src="https://github.com/langchain-ai/langchain/assets/19614572/ff05012b-64b8-4d43-a33c-f1446c1c6386">
And if I don't use `List` but just `int`, it works:
<img width="939" alt="image" src="https://github.com/langchain-ai/langchain/assets/19614572/0397cac7-7aac-40f4-89b1-1cb3f4ac70d2">
### Description
I'm trying to attach function calling for citations, but `List[int]` doesn't work.
I tried with Pydantic v1 and v2; neither works.
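For context, this is roughly how I bind the schema, following the citations guide (minimal sketch; the real chain also injects the formatted sources into the prompt):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
# Force the model to call the cited_answer tool, as in the citations docs.
llm_with_tool = llm.bind_tools([cited_answer], tool_choice="cited_answer")

msg = llm_with_tool.invoke("question plus numbered sources go here")
print(msg.tool_calls)  # citations should come back as a list of ints
```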
### System Info
langchain==0.1.16
langchain-community==0.0.34
langchain-core==0.1.45
langchain-openai==0.1.3
langchain-text-splitters==0.0.1
### Documentation Link
I'm basing this on this documentation: https://python.langchain.com/docs/use_cases/question_answering/citations/#cite-documents
| List of Int doesn't work in Function Calling | https://api.github.com/repos/langchain-ai/langchain/issues/20890/comments | 1 | 2024-04-25T15:30:43Z | 2024-04-27T14:39:28Z | https://github.com/langchain-ai/langchain/issues/20890 | 2,263,907,640 | 20,890 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import asyncio
from semantic_kernel import Kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion, AzureChatCompletion
from semantic_kernel.prompt_template import PromptTemplateConfig
from semantic_kernel.utils.settings import openai_settings_from_dot_env, azure_openai_settings_from_dot_env
### Error Message and Stack Trace (if applicable)
Exception has occurred: ImportError
cannot import name 'PromptTemplateConfig' from 'semantic_kernel.prompt_template' (unknown location)
File "C:\Users\TINGLE\Downloads\1\import asyncio.py", line 4, in <module>
from semantic_kernel.prompt_template import PromptTemplateConfig
ImportError: cannot import name 'PromptTemplateConfig' from 'semantic_kernel.prompt_template' (unknown location)
### Description
Getting this error while trying to import the libraries shown in the example code above:
### System Info
NA | cannot import name 'PromptTemplateConfig' from 'semantic_kernel.prompt_template' (unknown location) | https://api.github.com/repos/langchain-ai/langchain/issues/20885/comments | 1 | 2024-04-25T10:20:35Z | 2024-04-25T14:23:18Z | https://github.com/langchain-ai/langchain/issues/20885 | 2,263,221,771 | 20,885 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
llm = ChatOpenAI(temperature=0, model="gpt-4-turbo")
document_content_description = "Various documents"
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True, enable_limit=True, search_kwargs={
"k": 10
}
)
question = "NVIDIA"
context = retriever.invoke(question, verbose=True)
print(context)
### Error Message and Stack Trace (if applicable)
[Document(page_content='Source: NVIDIA', metadata={'title': 'NVIDIA 1'}),
Document(page_content='Source: NVIDIA', metadata={'title': 'NVIDIA 2'}),
Document(page_content='Source: NVIDIA', metadata={'title': 'NVIDIA 3'}),
Document(page_content='Source: NVIDIA', metadata={'title': 'NVIDIA 4'}),
Document(page_content='Source: NVIDIA', metadata={'title': 'NVIDIA 5'}),
Document(page_content='Source: NVIDIA', metadata={'title': 'NVIDIA 6'}),
Document(page_content='Source: NVIDIA', metadata={'title': 'NVIDIA 7'}),
Document(page_content='Source: NVIDIA', metadata={'title': 'NVIDIA 8'}),
Document(page_content='Source: NVIDIA', metadata={'title': 'NVIDIA 9'}),
Document(page_content='Source: NVIDIA', metadata={'title': 'NVIDIA 10'})
]
### Description
I am trying to retrieve relevant documents about NVIDIA, but I am getting duplicates. I am trying to find a way to receive only unique documents from Chroma. I am also open to suggestions like "use this vector store since it supports this".
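As a stopgap I am de-duplicating after retrieval (rough sketch below), but I would much prefer the retriever to return unique documents in the first place:

```python
def unique_docs(docs):
    # Keep only the first occurrence of each (content, title) pair.
    seen = set()
    deduped = []
    for doc in docs:
        key = (doc.page_content, doc.metadata.get("title"))
        if key not in seen:
            seen.add(key)
            deduped.append(doc)
    return deduped

context = unique_docs(retriever.invoke(question))
```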
### System Info
langchain==0.1.16
langchain-anthropic==0.1.11
langchain-chroma==0.1.0
langchain-cohere==0.1.4
langchain-community==0.0.33
langchain-core==0.1.45
langchain-experimental==0.0.57
langchain-google-genai==1.0.2
langchain-groq==0.1.3
langchain-mistralai==0.1.2
langchain-openai==0.1.3
langchain-text-splitters==0.0.1
langchainhub==0.1.15 | SelfQueryRetriever returning duplicate documents | https://api.github.com/repos/langchain-ai/langchain/issues/20884/comments | 0 | 2024-04-25T09:31:54Z | 2024-08-05T16:09:01Z | https://github.com/langchain-ai/langchain/issues/20884 | 2,263,124,611 | 20,884 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Example is taken from [this cookbook](https://github.com/langchain-ai/langchain/blob/master/cookbook/mongodb-langchain-cache-memory.ipynb)
```
import time
from langchain_mongodb import MongoDBAtlasVectorSearch
from pymongo import MongoClient
from langchain_openai import OpenAIEmbeddings
from langchain.callbacks.manager import get_openai_callback
from langchain_core.globals import set_llm_cache
from langchain_mongodb.cache import MongoDBAtlasSemanticCache
DB_NAME = "vector-search-db"
COLLECTION_NAME = "vector-search"
ATLAS_VECTOR_SEARCH_INDEX_NAME = "test-vector-index"
MONGODB_URI = ""
client = MongoClient(MONGODB_URI, appname="devrel.content.python")
collection = client[DB_NAME][COLLECTION_NAME]
OPENAI_API_KEY = ''
# Using the text-embedding-ada-002 since that's what was used to create embeddings in the movies dataset
embeddings = OpenAIEmbeddings(
openai_api_key=OPENAI_API_KEY, model="text-embedding-ada-002"
)
vector_store = MongoDBAtlasVectorSearch.from_connection_string(
connection_string=MONGODB_URI,
namespace=DB_NAME + "." + COLLECTION_NAME,
embedding=embeddings,
index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,
)
retriever = vector_store.as_retriever(search_type="similarity", search_kwargs={"k": 5})
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
# Generate context using the retriever, and pass the user question through
retrieve = {
"context": retriever | (lambda docs: "\n\n".join([d.page_content for d in docs])),
"question": RunnablePassthrough(),
}
template = """Answer the question based only on the following context: \
{context}
Question: {question}
"""
# Defining the chat prompt
prompt = ChatPromptTemplate.from_template(template)
# Defining the model to be used for chat completion
model = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)
# Parse output as a string
parse_output = StrOutputParser()
# Naive RAG chain
naive_rag_chain = retrieve | prompt | model | parse_output
set_llm_cache(
MongoDBAtlasSemanticCache(
connection_string=MONGODB_URI,
embedding=embeddings,
collection_name="semantic_cache",
database_name=DB_NAME,
index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,
wait_until_ready=True,
)
)
question = "When formally independence was declared?"
with get_openai_callback() as cb:
start = time.time()
res = naive_rag_chain.invoke(question)
end = time.time()
if naive_rag_chain:
print(res)
print("--- cb")
print(str(cb) + f"({end - start:.2f} seconds)")
else:
print('no compressed_docs')
with get_openai_callback() as cb:
start = time.time()
res = naive_rag_chain.invoke(question)
end = time.time()
if naive_rag_chain:
print(res)
print("--- cb")
print(str(cb) + f"({end - start:.2f} seconds)")
else:
print('no compressed_docs')
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/home/dmitry/Desktop/vector-search-from-cookbook.py", line 77, in <module>
res = naive_rag_chain.invoke(question)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dmitry/.pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2499, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "/home/dmitry/.pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 158, in invoke
self.generate_prompt(
File "/home/dmitry/.pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 560, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dmitry/.pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 421, in generate
raise e
File "/home/dmitry/.pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 411, in generate
self._generate_with_cache(
File "/home/dmitry/.pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 651, in _generate_with_cache
llm_cache.update(prompt, llm_string, result.generations)
File "/home/dmitry/.pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_mongodb/cache.py", line 293, in update
_wait_until(is_indexed, return_val)
File "/home/dmitry/.pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_mongodb/cache.py", line 119, in _wait_until
raise TimeoutError("Didn't ever %s" % success_description)
TimeoutError: Didn't ever [ChatGeneration(text='Independence was formally declared on July 4, 1776.', generation_info={'finish_reason': 'stop', 'logprobs': None}, message=AIMessage(content='Independence was formally declared on July 4, 1776.', response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 639, 'total_tokens': 653}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-0295bd5f-b95e-4b85-98ee-f3bcc8a1cdec-0'))]
```
### Description
I was following the example from [this cookbook](https://github.com/langchain-ai/langchain/blob/master/cookbook/mongodb-langchain-cache-memory.ipynb).
I loaded my own data (split the text by paragraphs and uploaded it to MongoDB with this code, using the same index):
```
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=0,
separators=["\n\n", "\n", "(?<=\. )", " "],
length_function=len
)
# split by \n
docs = text_splitter.split_text(data)
print(docs)
client = MongoClient(mongodb_conn_string)
collection = client[db_name][collection_name]
# Insert the documents in MongoDB Atlas with their embedding
docsearch = MongoDBAtlasVectorSearch.from_texts(
docs,
embeddings,
collection=collection,
index_name=vector_index_name
)
```
So data is uploaded like this:

The problem is that when I don't use the `wait_until_ready` param in `set_llm_cache`, it returns relevant results, BUT without taking them from the cache; the execution time stays the same:
```
--- cb
Tokens Used: 653
Prompt Tokens: 639
Completion Tokens: 14
Successful Requests: 1
Total Cost (USD): $0.0009865(5.72 seconds)
Independence was formally declared on July 4, 1776.
--- cb
Tokens Used: 653
Prompt Tokens: 639
Completion Tokens: 14
Successful Requests: 1
Total Cost (USD): $0.0009865(3.00 seconds)
```
So I ran it multiple times; it stays at 5-6 seconds.
I assume the cache indexing is never awaited, but with `wait_until_ready` it raises the error above.
Something is being written to the cache collection, though:

My vector settings

### System Info
```
langchain==0.1.16
langchain-anthropic==0.1.6
langchain-community==0.0.34
langchain-core==0.1.45
langchain-mongodb==0.1.3
langchain-openai==0.1.3
langchain-text-splitters==0.0.1
``` | set_llm_cache doesn't work. Returns TimeoutError: Didn't ever... | https://api.github.com/repos/langchain-ai/langchain/issues/20882/comments | 0 | 2024-04-25T09:19:37Z | 2024-08-01T16:06:59Z | https://github.com/langchain-ai/langchain/issues/20882 | 2,263,100,253 | 20,882 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
# imports (paths per langchain 0.1.x; exact locations may vary by version)
from pathlib import Path
from langchain_community.utilities.openapi import OpenAPISpec
from langchain_openai import ChatOpenAI
from langchain.chains.openai_functions.openapi import get_openapi_chain

spec_json = "../..../.json"
spec_path = Path(spec_json)
openapi_path = OpenAPISpec.from_file(spec_path)
llm = ChatOpenAI(model="gpt-4-32k")
spec_chain = get_openapi_chain(spec=openapi_path, llm=llm, headers=headers)
queryStr = "......"
spec_chain.invoke(queryStr)
### Error Message and Stack Trace (if applicable)
SSLError: HTTPSConnectionPool(host={internal host}, port=443): Max retries exceeded with url : {/..../..../} (Caused by SSLError(SSLCertVerificationError (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed : unable to get the local issuer certificate (_ssl.c:1007)')))
### Description
As a developer, I am using the `get_openapi_chain` function to invoke an OpenAPI spec endpoint depending on the consumer query. The internal OpenAPI business endpoint is hosted on HTTPS and requires SSL certificate verification.
**Expected Behavior:**
The consumer submits a query, and `get_openapi_chain` uses this query to interact with the model along with a predefined set of functions specified in the functions parameter.
The model may choose to call one of these functions; if it does, the output will be a stringified JSON object that conforms to a custom schema (note: the model may include unexpected parameters).
`get_openapi_chain` should then parse the string into JSON and execute the function with the provided arguments using SimpleRequestChain.
**Current Scenario:**
The consumer submits a query, and `get_openapi_chain` uses this query to interact with the model along with a predefined set of functions specified in the functions parameter. -- Passed
The model may choose to call one of these functions; if it does, the output will be a stringified JSON object that conforms to a custom schema (note: the model may include unexpected parameters). -- Passed
`get_openapi_chain` should then parse the string into JSON and execute the function with the provided arguments using SimpleRequestChain. -- **Failing**
**Issue:**
SimpleRequestChain is failing to establish a connection with the OpenAPI spec endpoint, as there is no way to specify the certificate path for SSL verification.
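The only workaround I have found so far, assuming SimpleRequestChain ultimately goes through the `requests` library (which honors the standard CA-bundle environment variables), is to point the process at the internal CA certificate before invoking the chain:

```python
import os

# Placeholder path - wherever the internal CA bundle actually lives.
os.environ["REQUESTS_CA_BUNDLE"] = "/path/to/internal-ca-bundle.pem"

spec_chain = get_openapi_chain(spec=openapi_path, llm=llm, headers=headers)
spec_chain.invoke(queryStr)
```

A proper fix would be a way to pass `verify`/certificate options through `get_openapi_chain` into SimpleRequestChain.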
### System Info
langchain==0.1.16
langchain-chroma==0.1.0
langchain-community==0.0.33
langchain-core==0.1.43
langchain-openai==0.1.3
langchain-text-splitters==0.0.1
OS: Windows
Python Version: 3.10.11 | SSL Certificate Verification Issue with get_openai_chain Function in SimpleRequestChain | https://api.github.com/repos/langchain-ai/langchain/issues/20870/comments | 2 | 2024-04-25T02:18:30Z | 2024-08-09T16:08:18Z | https://github.com/langchain-ai/langchain/issues/20870 | 2,262,499,287 | 20,870 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
When using FAISS, each vector store newly created with `from_texts()` is initialized with zero documents:
```python
import langchain_community.embeddings
embeddings = langchain_community.embeddings.FakeEmbeddings(size=4)
import langchain_community.vectorstores
def make_vs(texts):
return langchain_community.vectorstores.FAISS.from_texts(
texts=texts,
embedding=embeddings)
vs = make_vs(['a', 'b', 'c'])
print([d.page_content for d in vs.similarity_search('z', k=100)])
# >>> returns ['a', 'b', 'c']
vs = make_vs(['d', 'e', 'f'])
print([d.page_content for d in vs.similarity_search('z', k=100)])
# >>> returns ['d', 'e', 'f']
```
But when using Chroma, subsequent vector stores newly created with `from_texts()` still have documents from previous vector stores:
```python
import langchain_community.embeddings
embeddings = langchain_community.embeddings.FakeEmbeddings(size=4)
import langchain_community.vectorstores
def make_vs(texts):
return langchain_community.vectorstores.Chroma.from_texts(
texts=texts,
embedding=embeddings)
vs = make_vs(['a', 'b', 'c'])
print([d.page_content for d in vs.similarity_search('z', k=100)])
# >>> returns ['a', 'b', 'c']
vs = make_vs(['d', 'e', 'f'])
print([d.page_content for d in vs.similarity_search('z', k=100)])
# >>> returns ['a', 'b', 'c', 'd', 'e', 'f'] - INCORRECT
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When I create a new Chroma vector store object with `from_texts()`, documents from previous vector stores are not deleted. The code above shows an example.
This occurs regardless of whether I assign to the same variable:
```python
vs = make_vs(['a', 'b', 'c'])
vs = make_vs(['d', 'e', 'f'])
```
or to different variables:
```python
vs1 = make_vs(['a', 'b', 'c'])
vs2 = make_vs(['d', 'e', 'f'])
```
or if I try to manually force object destruction:
```python
vs = make_vs(['a', 'b', 'c'])
del vs
vs = make_vs(['d', 'e', 'f'])
```
Only an explicit `delete_collection()` will delete the documents:
```python
vs = make_vs(['a', 'b', 'c'])
vs.delete_collection()
vs = make_vs(['d', 'e', 'f'])
```
but this is a workaround - the vector store is still incorrectly "sticky"; we're just deleting the documents. In addition, this is not an easy workaround for complex real-world cases where vector store operations are decentralized and called in different scopes.
If I want to add content to a vector store, I would use `add_texts()`. If I want to create a new vector store, then I would use `from_texts()` and any previous vector store content should be disregarded by construction.
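The least bad stopgap I have found (sketch below) is to give every `from_texts()` call its own `collection_name`, since by default all instances appear to share the same default collection, which is presumably why documents accumulate:

```python
import uuid

def make_vs(texts):
    # A fresh collection per call, so documents from earlier stores cannot leak in.
    return langchain_community.vectorstores.Chroma.from_texts(
        texts=texts,
        embedding=embeddings,
        collection_name=f"vs-{uuid.uuid4().hex}",
    )
```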
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Wed Aug 10 16:21:17 UTC 2022
> Python Version: 3.11.0 (main, Nov 10 2022, 08:24:18) [GCC 8.2.0]
Package Information
-------------------
> langchain_core: 0.1.45
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.50
> langchain_openai: 0.0.6
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | When using Chroma, vector stores newly created with `from_texts()` do not delete previous documents | https://api.github.com/repos/langchain-ai/langchain/issues/20866/comments | 5 | 2024-04-24T23:23:46Z | 2024-08-02T16:07:42Z | https://github.com/langchain-ai/langchain/issues/20866 | 2,262,342,270 | 20,866 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Examples: https://github.com/search?q=repo%3Alangchain-ai%2Flangchain+%22model+%3D+chatAnthropic%22&type=code
In several places, we start using an Anthropic model likes this: `model = ChatAnthropic(model='claude-3-opus-20240229')`
This is confusing for someone reading the code/docs for 2 reasons:
1. The pattern we typically use elsewhere (sometimes even for using `ChatAnthropic`, is by assigning it to "llm" instead of "model"
2. "model" is a parameter that the downstream user often assigns a particular argument (the specific Anthropic model). This is the primary opportunity for confusion IMO. We are working with a "model" parameter, and also creating an object that is called "model".
This is by no means a major issue, but it's something that I came across, and it might be worth addressing for consistency and communication reasons.
I am happy to open a PR to address this throughout the repo.
### Idea or request for content:
Where possible with reasonable effort (so not messing with too many downstream stuff), change `model = ChatAnthropic(model='...')` to `llm = ChatAnthropic(model='...')` or similar. | DOC: example code, docs, and other spots involving invoking an anthropic model object can be confusing | https://api.github.com/repos/langchain-ai/langchain/issues/20865/comments | 1 | 2024-04-24T23:13:02Z | 2024-08-01T16:06:49Z | https://github.com/langchain-ai/langchain/issues/20865 | 2,262,333,908 | 20,865 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_chroma import Chroma
db = Chroma()
db.persist()
```
### Error Message and Stack Trace (if applicable)
```python
AttributeError: 'Chroma' object has no attribute 'persist'
```
### Description
I saw in a langchain-related issue on the Chroma GitHub page that they mention that in Chroma 0.4.x, the persist method no longer exists: https://github.com/chroma-core/chroma/issues/2012#issuecomment-2053917062
Therefore, I believe it should also be removed here:
https://github.com/langchain-ai/langchain/blob/87d31a3ec0d4aeb7fe3af90f00511677c38f3a3b/libs/community/langchain_community/vectorstores/chroma.py#L613
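For anyone else hitting the AttributeError in the meantime: as far as I can tell (based on the linked Chroma issue, so treat this as unverified), persistence in Chroma 0.4.x happens automatically whenever a `persist_directory` is supplied, so the explicit call can simply be dropped:

```python
from langchain_chroma import Chroma

db = Chroma(
    persist_directory="./chroma_db",   # hypothetical path
    embedding_function=my_embeddings,  # whatever embeddings the app already uses
)
db.add_texts(["hello world"])
# No db.persist() needed - the data is already written to disk.
```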
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:10:42 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6000
> Python Version: 3.9.11 (v3.9.11:2de452f8bf, Mar 16 2022, 10:44:40)
[Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.1.45
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.49
> langchain_chroma: 0.1.0
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Persist method in Chroma no longer exists in Chroma 0.4.x | https://api.github.com/repos/langchain-ai/langchain/issues/20851/comments | 1 | 2024-04-24T18:57:51Z | 2024-04-26T07:36:19Z | https://github.com/langchain-ai/langchain/issues/20851 | 2,261,971,874 | 20,851 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
llm = ChatOpenAI(
model='gpt-4-0125-preview',
temperature=0,
max_tokens=4000
)
tools = [self.vector_db_tool]
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
### Error Message and Stack Trace (if applicable)
peer closed connection without sending complete message body (incomplete chunked read)
### Description
I am using the latest langchain version.
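There isn't much to go on in the error, but since it only shows up once the chat history gets long, the stopgap I am trying (sketch only; `chat_history` and `user_input` are placeholders for however the app tracks them) is to cap the history passed into the executor:

```python
MAX_HISTORY_MESSAGES = 20  # arbitrary cap, tune as needed

trimmed_history = chat_history[-MAX_HISTORY_MESSAGES:]
result = agent_executor.invoke(
    {"input": user_input, "chat_history": trimmed_history}
)
```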
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:22 PDT 2022; root:xnu-8020.121.3~4/RELEASE_X86_64
> Python Version: 3.12.2 (main, Feb 20 2024, 04:30:04) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.1.42
> langchain: 0.1.16
> langchain_community: 0.0.32
> langsmith: 0.1.47
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | When chat history is long i am getting this error. peer closed connection without sending complete message body (incomplete chunked read) | https://api.github.com/repos/langchain-ai/langchain/issues/20846/comments | 0 | 2024-04-24T17:42:09Z | 2024-07-31T16:08:25Z | https://github.com/langchain-ai/langchain/issues/20846 | 2,261,844,431 | 20,846 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents.openai_assistant import OpenAIAssistantRunnable
assistant = OpenAIAssistantRunnable.create_assistant(
name="langchain assistant",
instructions="You are a personal math tutor. Write and run code to answer math questions.",
tools=[{"type": "code_interpreter"}],
model="gpt-4-1106-preview",
)
result = assistant.invoke({"content": "What's 10 - 4 raised to the 2.7"})
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/starlette/routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/starlette/routing.py", line 776, in app
await route.handle(scope, receive, send)
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/starlette/routing.py", line 297, in handle
await self.app(scope, receive, send)
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/starlette/routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/starlette/routing.py", line 72, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/fastapi/routing.py", line 278, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/home/Repo/ai/main.py", line 16, in read_root
return await interact(message.text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/home/Repo/ai/bot/bot.py", line 16, in interact
assistant = OpenAIAssistantRunnable.create_assistant(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/home/Library/Caches/pypoetry/virtualenvs/A5OhnBxC-py3.12/lib/python3.12/site-packages/langchain/agents/openai_assistant/base.py", line 247, in create_assistant
assistant = client.beta.assistants.create(
### Description
I am trying to create the OpenAI Assistant and I see the error: "TypeError: Assistants.create() got an unexpected keyword argument 'file_ids'"
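My working theory (not verified) is that this started when the `openai` SDK switched to the Assistants v2 API, where `file_ids` was replaced by `tool_resources`, while `OpenAIAssistantRunnable.create_assistant` still passes `file_ids`. Pinning the SDK back to a v1-era release works around it for me; I believe 1.21 is roughly where the v2 switch happened:

```
pip install "openai<1.21"
```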
### System Info
macos
> Python Version: 3.12.3 (main, Apr 9 2024, 08:09:14) [Clang 15.0.0 (clang-1500.3.9.4)]
> langchain_core: 0.1.45
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.50
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
fastapi: 0.110.2
uvicorn: 0.29.0 | Assistants.create() got an unexpected keyword argument 'file_ids' | https://api.github.com/repos/langchain-ai/langchain/issues/20842/comments | 3 | 2024-04-24T17:11:28Z | 2024-07-18T12:43:29Z | https://github.com/langchain-ai/langchain/issues/20842 | 2,261,782,985 | 20,842 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I took [this code](https://github.com/langchain-ai/langgraph/blob/470718381407f1d8a7d23ab684eac370e138f2c6/examples/multi_agent/agent_supervisor.ipynb) and replaced OpenAI with anthropic [here](https://github.com/ChristianSch/agents/blob/main/multi-agent/agent_supervisor.py).
### Error Message and Stack Trace (if applicable)
This causes 'System message must be at beginning of message list' to be thrown in the `_format_messages` of `chat_models.py`. Full stacktrace:
```
Exception has occurred: ValueError (note: full exception trace is shown but execution is paused at: _run_module_as_main)
System message must be at beginning of message list.
File "/opt/homebrew/Caskroom/miniconda/base/envs/assistant/lib/python3.11/site-packages/langchain_anthropic/chat_models.py", line 149, in _format_messages
raise ValueError("System message must be at beginning of message list.")
File "/opt/homebrew/Caskroom/miniconda/base/envs/assistant/lib/python3.11/site-packages/langchain_anthropic/chat_models.py", line 321, in _format_params
system, formatted_messages = _format_messages(messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/assistant/lib/python3.11/site-packages/langchain_anthropic/chat_models.py", line 444, in _generate
params = self._format_params(messages=messages, stop=stop, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/assistant/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 626, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/assistant/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 417, in generate
raise e
File "/opt/homebrew/Caskroom/miniconda/base/envs/assistant/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 417, in generate
raise e
File "/opt/homebrew/Caskroom/miniconda/base/envs/assistant/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 556, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/assistant/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 154, in invoke
self.generate_prompt(
File "/opt/homebrew/Caskroom/miniconda/base/envs/assistant/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4511, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/assistant/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2499, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/assistant/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Caskroom/miniconda/base/envs/assistant/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1102, in _panic_or_proceed
raise exc
File "/opt/homebrew/Caskroom/miniconda/base/envs/assistant/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 700, in stream
_panic_or_proceed(done, inflight, step)
File "/Users/christian/dev/agents/multi-agent/agent_supervisor.py", line 154, in <module>
for s in graph.stream(
File "/opt/homebrew/Caskroom/miniconda/base/envs/assistant/lib/python3.11/runpy.py", line 88, in _run_code
exec(code, run_globals)
File "/opt/homebrew/Caskroom/miniconda/base/envs/assistant/lib/python3.11/runpy.py", line 198, in _run_module_as_main (Current frame)
return _run_code(code, main_globals, None,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: System message must be at beginning of message list.
```
### Description
When using a router, for example in a multi-agent scenario, or in any other scenario where ChatAnthropic is fed more than one system message, an error is thrown. I consider this a valid scenario though:

### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:59 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6030
> Python Version: 3.11.5 | packaged by conda-forge | (main, Aug 27 2023, 03:33:12) [Clang 15.0.7 ]
Package Information
-------------------
> langchain_core: 0.1.42
> langchain: 0.1.16
> langchain_community: 0.0.32
> langsmith: 0.1.47
> langchain_anthropic: 0.1.8
> langchain_experimental: 0.0.57
> langchain_google_genai: 0.0.6
> langchain_openai: 0.0.6
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
> langgraph: 0.0.38
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | ChatAnthropic can't handle mutliple system messages (i.e. for multi-agent scenarios, routers) | https://api.github.com/repos/langchain-ai/langchain/issues/20835/comments | 1 | 2024-04-24T15:02:27Z | 2024-07-31T16:08:20Z | https://github.com/langchain-ai/langchain/issues/20835 | 2,261,519,531 | 20,835 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
embedding = embed_with_retry(
self, input=text, text_type="query", model=self.model
)[0]["embedding"]
```
It should be :
```python
embedding = embed_with_retry(
self, input=[text], text_type="query", model=self.model
)[0]["embedding"]
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
If the parameter 'text' passed in is not a list, the string itself gets iterated over inside the method `_embed_with_retry`, and finally only the first embedding result is returned. That result seems meaningless, and when 'text' is long it also triggers the request limit, so I think this is a bug.
### System Info
not related | The embed_query func of DashScopeEmbeddings class has bug | https://api.github.com/repos/langchain-ai/langchain/issues/20830/comments | 0 | 2024-04-24T11:41:46Z | 2024-07-31T16:08:15Z | https://github.com/langchain-ai/langchain/issues/20830 | 2,261,092,145 | 20,830 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_text_splitters import MarkdownHeaderTextSplitter

headers_to_split_on = [("#", "Header 1"), ("##", "Header 2")]  # example config
markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
doc = """
```yaml
my:
foo: bar
a: b
```
"""
md_header_splits = markdown_splitter.split_text(doc)
```
expected should be
```yaml
my:
foo: bar
a: b
```
actual is
```yaml
my:
foo: bar
a: b
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Here is a fix that works for whitespace indents; I'm not sure it covers tabs though.
markdown.py: split_text
```python
in_code_block = False
opening_fence = ""
code_block_indent = -1
for line in lines:
    stripped_line = line.strip()

    if not in_code_block:
        # Exclude inline code spans
        if stripped_line.startswith("```") and stripped_line.count("```") == 1:
            in_code_block = True
            opening_fence = "```"
            code_block_indent = len(line) - len(stripped_line)
        elif stripped_line.startswith("~~~"):
            in_code_block = True
            opening_fence = "~~~"
            code_block_indent = len(line) - len(stripped_line)
    else:
        if stripped_line.startswith(opening_fence):
            in_code_block = False
            opening_fence = ""

    if in_code_block:
        total_line_indent = len(line) - len(stripped_line)
        stripped_line = " " * max(0, total_line_indent - code_block_indent) + stripped_line
        current_content.append(stripped_line)
        continue
```
### System Info
- Installing typing-extensions (4.11.0)
- Downgrading urllib3 (2.2.1 -> 1.26.18)
- Installing exceptiongroup (1.2.1)
- Installing frozenlist (1.4.1)
- Installing h11 (0.14.0)
- Installing jsonpointer (2.4)
- Installing multidict (6.0.5)
- Installing mypy-extensions (1.0.0)
- Installing orjson (3.10.1)
- Downgrading packaging (24.0 -> 23.2)
- Installing pydantic (1.10.15)
- Installing six (1.16.0)
- Installing sniffio (1.3.1)
- Installing aiosignal (1.3.1)
- Installing anyio (4.3.0)
- Installing asttokens (2.4.1)
- Installing async-timeout (4.0.3)
- Installing attrs (23.2.0)
- Installing executing (2.0.1)
- Installing greenlet (3.0.3)
- Installing hpack (4.0.0)
- Installing httpcore (1.0.5)
- Installing hyperframe (6.0.1)
- Installing jsonpatch (1.33)
- Installing langsmith (0.1.50)
- Installing marshmallow (3.21.1)
- Installing parso (0.8.4)
- Installing pure-eval (0.2.2)
- Installing pyyaml (6.0.1)
- Installing tenacity (8.2.3)
- Installing traitlets (5.14.3)
- Installing typing-inspect (0.9.0)
- Installing wcwidth (0.2.13)
- Installing yarl (1.9.4)
- Installing aiohttp (3.9.5)
- Installing dataclasses-json (0.6.4)
- Installing distro (1.9.0)
- Installing decorator (5.1.1)
- Installing grpcio (1.62.2)
- Installing h2 (4.1.0)
- Installing httpx (0.27.0)
- Installing jedi (0.19.1)
- Installing jupyter-core (5.7.2)
- Installing langchain-core (0.1.45)
- Installing matplotlib-inline (0.1.7)
- Installing numpy (1.26.4)
- Installing prompt-toolkit (3.0.43)
- Installing protobuf (4.25.3)
- Installing pygments (2.17.2)
- Installing python-dateutil (2.9.0.post0)
- Installing pyzmq (26.0.2)
- Installing regex ([202](https://gitlab.com/neural-concept/product/llms/llmops/-/jobs/6703816413#L202)4.4.16)
- Updating setuptools (65.5.1 -> 69.5.1)
- Installing sqlalchemy (2.0.29)
- Installing stack-data (0.6.3)
- Installing tornado (6.4)
- Installing tqdm (4.66.2)
- Installing comm (0.2.2)
- Installing debugpy (1.8.1)
- Installing grpcio-tools (1.62.2)
- Installing ipython (8.23.0)
- Installing jupyter-client (8.6.1)
- Installing langchain-community (0.0.34)
- Installing langchain-text-splitters (0.0.1)
- Installing nest-asyncio (1.6.0)
- Installing openai (1.23.3)
- Installing portalocker (2.8.2)
- Installing psutil (5.9.8)
- Installing tiktoken (0.6.0)
- Installing click (8.1.7)
- Installing ipykernel (6.29.4)
- Installing langchain (0.1.11)
- Installing langchain-openai (0.0.8)
- Installing qdrant-client (1.6.4) | Markdown text splitter will remove code block indentation | https://api.github.com/repos/langchain-ai/langchain/issues/20823/comments | 0 | 2024-04-24T09:54:40Z | 2024-07-31T16:08:11Z | https://github.com/langchain-ai/langchain/issues/20823 | 2,260,891,406 | 20,823 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
ChatLiteLLM
max_tokens: int = 256
ChatOpenAI
max_tokens: Optional[int] = None
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
My current code cannot detect number of tokens so I don't want to set a specific number for max_tokens value. But in ChatLiteLLM I cannot set max_tokens = None
### System Info
langchain==0.1.0
langchain-community==0.0.10
langchain-core==0.1.10
langchain-openai==0.0.2.post1 | The max_tokens type of ChatLiteLLM is not consistency with ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/20816/comments | 0 | 2024-04-24T07:23:41Z | 2024-07-31T16:08:05Z | https://github.com/langchain-ai/langchain/issues/20816 | 2,260,571,804 | 20,816 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
memory = ConversationBufferMemory(
memory_key="chat_history", return_messages=True
)
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You are a helpful assistant, designed to use tools tell me the distance between cities.",
),
("placeholder", "{chat_history}"),
("human", "{input}"),
("placeholder", "{agent_scratchpad}"),
]
)
tools=load_tools(['wolfram-alpha'])
agent = create_tool_calling_agent(
llm = ChatGroq(temperature=0,
model_name="llama3-8b-8192"),
tools=tools,
prompt=prompt,
)
agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory,verbose=True)
agent_executor.invoke({"input": "How far is New York City to Tokyo?"})
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/jay/pythonProject/Wolfram-lc-Groq-llama3.py", line 33, in <module>
agent_executor.invoke({"input": "How far is New York City to Tokyo?"})
File "/Users/jay/opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/Users/jay/opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/jay/opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain/agents/agent.py", line 1432, in _call
next_step_output = self._take_next_step(
File "/Users/jay/opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain/agents/agent.py", line 1138, in _take_next_step
[
File "/Users/jay/opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain/agents/agent.py", line 1138, in <listcomp>
[
File "/Users/jay/opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain/agents/agent.py", line 1166, in _iter_next_step
output = self.agent.plan(
File "/Users/jay/opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain/agents/agent.py", line 514, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "/Users/jay/opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2875, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/Users/jay/opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2862, in transform
yield from self._transform_stream_with_config(
File "/Users/jay/opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1880, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/Users/jay/opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2826, in _transform
for output in final_pipeline:
File "/Users/jay/opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1283, in transform
for chunk in input:
File "/Users/jay/opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4728, in transform
yield from self.bound.transform(
File "/Users/jay/opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1300, in transform
yield from self.stream(final, config, **kwargs)
File "/Users/jay/opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 249, in stream
raise e
File "/Users/jay/opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 229, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "/Users/jay/opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain_groq/chat_models.py", line 290, in _stream
for rtc in message.additional_kwargs["tool_calls"]
KeyError: 'tool_calls'
### Description
The tool cannot be called correctly when calling the Groq model through LangChain.
Switching the model to gpt-3.5 works fine, but I would like to use other models such as llama3, gemini, etc. It seems that only OpenAI models handle multi-tool calls through agents reliably.
I suspect it may be an agent problem, but I don't know how to customize a suitable agent to solve it.
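A possible (untested) workaround sketch, assuming the KeyError only happens on the streaming code path: force the agent to call the model with `invoke()` instead of `stream()`.

```python
from langchain.agents.agent import RunnableMultiActionAgent

# Skip the streaming branch in langchain_groq that assumes
# additional_kwargs["tool_calls"] is always present.
if isinstance(agent_executor.agent, RunnableMultiActionAgent):
    agent_executor.agent.stream_runnable = False

agent_executor.invoke({"input": "How far is New York City to Tokyo?"})
```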
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:41 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T8103
> Python Version: 3.10.12 (main, Jul 5 2023, 15:34:07) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.45
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.38
> langchain_cli: 0.0.21
> langchain_decorators: 0.5.4
> langchain_experimental: 0.0.56
> langchain_google_genai: 1.0.1
> langchain_google_vertexai: 0.1.2
> langchain_groq: 0.1.2
> langchain_openai: 0.1.1
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
> langserve: 0.1.0 | The tool cannot be called correctly when calling the Groq model through langchain——KeyError: 'tool_calls' | https://api.github.com/repos/langchain-ai/langchain/issues/20811/comments | 9 | 2024-04-24T04:42:47Z | 2024-04-25T13:41:07Z | https://github.com/langchain-ai/langchain/issues/20811 | 2,260,303,319 | 20,811 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.llms import Ollama
from langchain_community.vectorstores import OpenSearchVectorSearch
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains import RetrievalQA

# embeddings is defined earlier in my script (any embedding model).
vectorstore = OpenSearchVectorSearch(
    embedding_function=embeddings,
    index_name="sample-index",
    opensearch_url="https://localhost:9200",
    http_auth=("admin", "admin"),
    use_ssl=False,
    verify_certs=False,
)

## Load Ollama LLAMA2 LLM model
llm = Ollama(model="llama2")

prompt = ChatPromptTemplate.from_template("""
Answer the following question based only on the provided context.
Think step by step before providing a detailed answer.
I will tip you $1000 if the user finds the answer helpful.
<context>
{context}
</context>
""")

qa_chain = RetrievalQA.from_chain_type(
    llm, retriever=vectorstore.as_retriever(), chain_type_kwargs={"prompt": prompt}
)

question = "Hi"
result = qa_chain({"query": question})
print(result["result"])
### Error Message and Stack Trace (if applicable)
result = qa_chain({"query": question})
File "/usr/local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 378, in __call__
return self.invoke(
File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py", line 144, in _call
answer = self.combine_documents_chain.run(
File "/usr/local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 550, in run
return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
File "/usr/local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 378, in __call__
return self.invoke(
File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py", line 137, in _call
output, extra_return_dict = self.combine_docs(
File "/usr/local/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 244, in combine_docs
return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
File "/usr/local/lib/python3.10/site-packages/langchain/chains/llm.py", line 293, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
File "/usr/local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 378, in __call__
return self.invoke(
File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.10/site-packages/langchain/chains/llm.py", line 103, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/usr/local/lib/python3.10/site-packages/langchain/chains/llm.py", line 115, in generate
return self.llm.generate_prompt(
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 569, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 748, in generate
output = self._generate_helper(
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 606, in _generate_helper
raise e
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 593, in _generate_helper
self._generate(
File "/usr/local/lib/python3.10/site-packages/langchain_community/llms/ollama.py", line 421, in _generate
final_chunk = super()._stream_with_aggregation(
File "/usr/local/lib/python3.10/site-packages/langchain_community/llms/ollama.py", line 330, in _stream_with_aggregation
for stream_resp in self._create_generate_stream(prompt, stop, **kwargs):
File "/usr/local/lib/python3.10/site-packages/langchain_community/llms/ollama.py", line 172, in _create_generate_stream
yield from self._create_stream(
File "/usr/local/lib/python3.10/site-packages/langchain_community/llms/ollama.py", line 233, in _create_stream
response = requests.post(
File "/usr/local/lib/python3.10/site-packages/requests/api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2aa40c85b0>: Failed to establish a new connection: [Errno 111] Connection refused'))
### Description
If I do a similarity search, it works, but with RetrievalQA it gives this error.
Seems bizarre.
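Note that the traceback points at the Ollama HTTP endpoint (port 11434), not OpenSearch. In case the chain runs somewhere that cannot reach `localhost` (e.g. inside a container), here is a sketch of pointing Ollama at an explicit host — the URL below is a placeholder:

```python
from langchain_community.llms import Ollama

# Replace the placeholder host with wherever the Ollama server actually runs;
# the default base_url is http://localhost:11434.
llm = Ollama(model="llama2", base_url="http://host.docker.internal:11434")
```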
### System Info
langchain==0.1.13
langchain-community==0.0.29
langchain-core==0.1.33
langchain-text-splitters==0.0.1
llama-index-embeddings-langchain==0.1.2
llama-index-llms-langchain==0.1.3 | Opensearch with Retrieval QA doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/20809/comments | 0 | 2024-04-24T00:56:19Z | 2024-07-31T16:08:00Z | https://github.com/langchain-ai/langchain/issues/20809 | 2,260,042,490 | 20,809 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
model_id = "meta.llama2-70b-chat-v1"
llm = BedrockChat(model_id=model_id, model_kwargs={"temperature": 0.5})
llm.get_num_tokens("this is a text")
```
### Error Message and Stack Trace (if applicable)
'(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /gpt2/resolve/main/tokenizer_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f55346aa800>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: *****')' thrown while requesting HEAD https://huggingface.co/gpt2/resolve/main/tokenizer_config.json
### Description
I noticed BedrockChat uses the GPT-2 tokenizer rather than the Llama tokenizer for the meta.llama2-70b-chat-v1 model.
I first found this out when using LangChain in a network-isolated environment and got the error above, where it tries to download the gpt2 tokenizer_config.
The network error itself is not the problem; rather, BedrockChat is not using the right tokenizer when counting tokens. The code snippet I was executing is the one in the Example Code section above.
I traced it back to the parent class [BaseLanguageModel](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/language_models/base.py) and confirmed it is in fact using the GPT-2 tokenizer.
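As a stopgap in the network-isolated environment, supplying a custom token counter avoids the GPT-2 download entirely — a sketch, assuming `custom_get_token_ids` is honored by `get_num_tokens` and that a Llama tokenizer is available locally (the path below is a placeholder):

```python
from transformers import AutoTokenizer
from langchain_community.chat_models import BedrockChat

# Placeholder path to a locally cached Llama-family tokenizer.
tokenizer = AutoTokenizer.from_pretrained("/local/models/llama2-tokenizer")

llm = BedrockChat(
    model_id="meta.llama2-70b-chat-v1",
    model_kwargs={"temperature": 0.5},
    custom_get_token_ids=lambda text: tokenizer.encode(text),
)
llm.get_num_tokens("this is a text")  # no request to huggingface.co for gpt2
```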
### System Info
`!pip freeze | grep langchain`
```
langchain==0.1.16
langchain-community==0.0.33
langchain-core==0.1.45
langchain-text-splitters==0.0.1
langchainplus-sdk==0.0.20
``` | BedrockChat is using GPT2 Tokenzier rather than LlamaTokenizer | https://api.github.com/repos/langchain-ai/langchain/issues/20807/comments | 1 | 2024-04-23T23:11:22Z | 2024-07-31T16:07:55Z | https://github.com/langchain-ai/langchain/issues/20807 | 2,259,927,888 | 20,807 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.callbacks import get_openai_callback
from langchain_openai import OpenAIEmbeddings
model = OpenAIEmbeddings(model="text-embedding-3-small")
with get_openai_callback() as cb:
model.embed_documents(["Hello world"])
print(cb)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The `get_openai_callback` does not record token usage and cost when OpenAIEmbeddings is used, reporting:
```text
Tokens Used: 0
Prompt Tokens: 0
Completion Tokens: 0
Successful Requests: 0
Total Cost (USD): $0.0
```
instead of
```
Tokens Used: 2
Prompt Tokens: 2
Completion Tokens: 0
Successful Requests: 1
Total Cost (USD): $2e-07
```
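Until the callback supports embeddings, a rough manual workaround sketch (the per-token rate is illustrative only, not an authoritative price):

```python
import tiktoken

enc = tiktoken.encoding_for_model("text-embedding-3-small")  # cl100k_base under the hood
texts = ["Hello world"]
prompt_tokens = sum(len(enc.encode(t)) for t in texts)

# Illustrative rate only — verify against OpenAI's pricing page.
estimated_cost = prompt_tokens * 0.02 / 1_000_000
print(prompt_tokens, estimated_cost)
```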
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Wed, 17 Apr 2024 10:11:09 +0000
> Python Version: 3.11.8 (main, Feb 12 2024, 14:50:05) [GCC 13.2.1 20230801]
Package Information
-------------------
> langchain_core: 0.1.45
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.40
> langchain_chroma: 0.1.0
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
> langchainplus_sdk: 0.0.7
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | get_open_ai_callback broken on OpenAIEmbeddings | https://api.github.com/repos/langchain-ai/langchain/issues/20799/comments | 0 | 2024-04-23T18:15:03Z | 2024-07-30T16:07:46Z | https://github.com/langchain-ai/langchain/issues/20799 | 2,259,486,717 | 20,799 |
[
"hwchase17",
"langchain"
] | Hi | Issue with passing two prompts as input to `conversationchain` | https://api.github.com/repos/langchain-ai/langchain/issues/20797/comments | 5 | 2024-04-23T17:40:53Z | 2024-04-26T13:08:00Z | https://github.com/langchain-ai/langchain/issues/20797 | 2,259,419,820 | 20,797
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The code is basically https://python.langchain.com/docs/use_cases/question_answering/chat_history/
but uses AzureSearch with the async API.
```python
from langchain_community.vectorstores.azuresearch import AzureSearch
retriver = AzureSearch().as_retriever()
#...
astream = pipeline.astream({"input": user_question})
async for chunk in astream:
print(chunk)
```
It results in `NotImplementedError: AzureSearchVectorStoreRetriever does not support async`
but it works just fine in the previous version.
### Error Message and Stack Trace (if applicable)
```
2024-04-23 11:58:15,788 ERROR Exception inside application: 'async_generator' object is not iterable
Traceback (most recent call last):
File "/app/gpt_exploration/chat.py", line 132, in run_async_query
async for chunk in astream:
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4704, in astream
async for item in self.bound.astream(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4704, in astream
async for item in self.bound.astream(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2900, in astream
async for chunk in self.atransform(input_aiter(), config, **kwargs):
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2883, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1979, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2853, in _atransform
async for output in final_pipeline:
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4740, in atransform
async for item in self.bound.atransform(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2883, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1979, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2853, in _atransform
async for output in final_pipeline:
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/passthrough.py", line 587, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1979, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/passthrough.py", line 577, in _atransform
yield await first_map_chunk_task
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/utils/aiter.py", line 62, in anext_impl
return await __anext__(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3317, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1979, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3304, in _atransform
chunk = AddableDict({step_name: task.result()})
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3287, in get_next_chunk
return await py_anext(generator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4740, in atransform
async for item in self.bound.atransform(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2883, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1979, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2853, in _atransform
async for output in final_pipeline:
File "/usr/local/lib/python3.12/site-packages/langchain_core/output_parsers/transform.py", line 60, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1943, in _atransform_stream_with_config
final_input: Optional[Input] = await py_anext(input_for_tracing, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/utils/aiter.py", line 62, in anext_impl
return await __anext__(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer
item = await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1316, in atransform
async for chunk in input:
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1316, in atransform
async for chunk in input:
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4740, in atransform
async for item in self.bound.atransform(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/passthrough.py", line 587, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1979, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/passthrough.py", line 577, in _atransform
yield await first_map_chunk_task
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/utils/aiter.py", line 62, in anext_impl
return await __anext__(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3317, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1979, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3304, in _atransform
chunk = AddableDict({step_name: task.result()})
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3287, in get_next_chunk
return await py_anext(generator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4172, in atransform
async for output in self._atransform_stream_with_config(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1979, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4086, in _atransform
async for ichunk in input:
File "/usr/local/lib/python3.12/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer
item = await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer
item = await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer
item = await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
[Previous line repeated 7 more times]
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/passthrough.py", line 587, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1979, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/passthrough.py", line 577, in _atransform
yield await first_map_chunk_task
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/utils/aiter.py", line 62, in anext_impl
return await __anext__(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3317, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1979, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3304, in _atransform
chunk = AddableDict({step_name: task.result()})
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3287, in get_next_chunk
return await py_anext(generator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4740, in atransform
async for item in self.bound.atransform(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1333, in atransform
async for output in self.astream(final, config, **kwargs):
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/branch.py", line 380, in astream
async for chunk in runnable.astream(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2900, in astream
async for chunk in self.atransform(input_aiter(), config, **kwargs):
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2883, in atransform
async for chunk in self._atransform_stream_with_config(
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1979, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2853, in _atransform
async for output in final_pipeline:
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1333, in atransform
async for output in self.astream(final, config, **kwargs):
File "/usr/local/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 820, in astream
yield await self.ainvoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/retrievers.py", line 227, in ainvoke
return await self.aget_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/retrievers.py", line 384, in aget_relevant_documents
raise e
File "/usr/local/lib/python3.12/site-packages/langchain_core/retrievers.py", line 377, in aget_relevant_documents
result = await self._aget_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_community/vectorstores/azuresearch.py", line 735, in _aget_relevant_documents
raise NotImplementedError(
NotImplementedError: AzureSearchVectorStoreRetriever does not support async
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/asgiref/sync.py", line 518, in thread_handler
raise exc_info[1]
File "/usr/local/lib/python3.12/site-packages/django/http/response.py", line 514, in __aiter__
async for part in self.streaming_content:
File "/usr/local/lib/python3.12/site-packages/django/http/response.py", line 471, in awrapper
async for part in _iterator:
File "/app/gpt_exploration/chat.py", line 152, in run_async_query
store_data(session_id, answer_id, user_question, answer, answer_time, doc_context['context'], configuration_id)
~~~~~~~~~~~^^^^^^^^^^^
TypeError: 'NoneType' object is not subscriptable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/django/core/handlers/asgi.py", line 170, in __call__
await self.handle(scope, receive, send)
File "/usr/local/lib/python3.12/site-packages/django/core/handlers/asgi.py", line 209, in handle
task.result()
File "/usr/local/lib/python3.12/site-packages/django/core/handlers/asgi.py", line 193, in process_request
await self.send_response(response, send)
File "/usr/local/lib/python3.12/site-packages/django/core/handlers/asgi.py", line 325, in send_response
async for part in content:
File "/usr/local/lib/python3.12/site-packages/django/http/response.py", line 524, in __aiter__
for part in await sync_to_async(list)(self.streaming_content):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/asgiref/sync.py", line 468, in __call__
ret = await asyncio.shield(exec_coro)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/asgiref/sync.py", line 520, in thread_handler
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
TypeError: 'async_generator' object is not iterable
```
### Description
This is a regression: installing `langchain-community==0.0.33` fixes the issue, while `langchain-community==0.0.34` breaks it.
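Besides pinning `langchain-community==0.0.33`, a possible stopgap is to wrap the sync retriever in a `RunnableLambda` (a sketch, assuming a sync lambda is delegated to a worker thread when awaited, so the missing async implementation is never hit):

```python
from langchain_core.runnables import RunnableLambda

# `retriever` is the AzureSearch retriever constructed as in the example above.
retriever_runnable = RunnableLambda(lambda query: retriever.invoke(query))

# Use retriever_runnable in place of retriever when assembling the chain;
# astream() then runs the sync search in a worker thread instead of raising.
```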
### System Info
langchain==0.1.8
# 0.0.33 works
langchain-community==0.0.34
langchain-core==0.1.44
langchain-openai==0.0.6 | langchain community 0.0.34 NotImplementedError: AzureSearchVectorStoreRetriever does not support async - working in 0.0.33 | https://api.github.com/repos/langchain-ai/langchain/issues/20787/comments | 1 | 2024-04-23T12:38:16Z | 2024-08-05T16:08:51Z | https://github.com/langchain-ai/langchain/issues/20787 | 2,258,773,796 | 20,787 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import os
from common import constants
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
chat = ChatOpenAI(
base_url=constants.BASE_URL,
openai_api_key=constants.API_KEY
)
# 创建 Memory
memory = ConversationBufferMemory(return_messages=True)
# 将 Memory 封装到 ConversationChain
conversation = ConversationChain(llm=chat, memory=memory)
print(conversation.prompt.template)
# response = conversation.predict(input="请问中国的首都是哪里?")
# print(response)
response = conversation.apply([{"input": "请问中国的首都是哪里?"}])
print(response)
### Error Message and Stack Trace (if applicable)
Current conversation:
{history}
Human: {input}
AI:
Traceback (most recent call last):
File "/opt/anaconda3/envs/syxy/lib/python3.8/site-packages/langchain/chains/llm.py", line 157, in apply
raise e
File "/opt/anaconda3/envs/syxy/lib/python3.8/site-packages/langchain/chains/llm.py", line 154, in apply
response = self.generate(input_list, run_manager=run_manager)
File "/opt/anaconda3/envs/syxy/lib/python3.8/site-packages/langchain/chains/llm.py", line 78, in generate
prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
File "/opt/anaconda3/envs/syxy/lib/python3.8/site-packages/langchain/chains/llm.py", line 105, in prep_prompts
selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}
File "/opt/anaconda3/envs/syxy/lib/python3.8/site-packages/langchain/chains/llm.py", line 105, in <dictcomp>
selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}
KeyError: 'history'
python-BaseException
### Description
调用predict方法可以正常执行
调用apply方法报错,没有history key
history key 是memory 的key
apply方法 没有经过chain的 __call__方法
直接调用llm 的generate_prompt方法
导致了报错
### System Info
langchain @ file:///home/conda/feedstock_root/build_artifacts/langchain_1685957913682/work
langchain-core @ file:///home/conda/feedstock_root/build_artifacts/langchain-core_1703029179104/work
macos
python 3.8.5 | Langchain ConversationChain.apply 没有加载memory 中的key导致了key error | https://api.github.com/repos/langchain-ai/langchain/issues/20785/comments | 3 | 2024-04-23T09:52:20Z | 2024-04-25T04:18:47Z | https://github.com/langchain-ai/langchain/issues/20785 | 2,258,441,588 | 20,785 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def load_chat_template() -> ChatPromptTemplate:
"""
Loads and returns a Langchain ChatPromptTemplate, initiated with system & user messages.
Expects to find the templates within a subdirectory named 'prompts' and the system prompt
template is named 'system.jinja2'.
Returns:
ChatPromptTemplate: Configured chat prompt template with system and user
messages, including a placeholder for agent messages.
"""
cwd = Path(__file__).parent
system_prompt = PromptTemplate.from_file(cwd / "prompts" / "system.jinja2", template_format="jinja2")
return ChatPromptTemplate.from_messages(
[
("system", system_prompt.format()),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
llm = AzureChatOpenAI(
api_key=config.api_key,
api_version=config.azure_openai_api_version,
azure_endpoint=config.azure_openai_endpoint,
azure_deployment=config.azure_openai_deployment_name,
)
chat_template = load_chat_template()
# Create a list of custom tools to be used by the agent
# Replace this with some sample tools
# Output: List[BaseTool]
tools = prepare_tools(self._search_service)
# Reference https://python.langchain.com/docs/modules/agents/agent_types/openai_tools/
agent = create_openai_tools_agent(llm, tools, chat_template)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=init_config().rag_agent_verbose)
if isinstance(agent_executor.agent, RunnableMultiActionAgent):
# BUG: This prints true always
print(f"Agent is set to stream: {agent_executor.agent.stream_runnable}")
return agent_executor.invoke({"input": input_prompt})
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
In the `AzureChatOpenAI` model streaming is set to false, but `agent_executor.agent.stream_runnable` is always set to true and never inherits the value from `AzureChatOpenAI`, which I'd expect it to. Maybe that expectation is incorrect? Is this intended behavior?
The way I've found to disable streaming is by setting it manually false after the `AgentExecutor` is returned, like so:
```python
if isinstance(agent_executor.agent, RunnableMultiActionAgent):
agent_executor.agent.stream_runnable = False
```
This actually disables streaming. Is there a better way to do this?
In my use case, I wanted to disable streaming so that I could enable caching for tools as OpenAI streaming implementation doesn't implement caching.
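For context, the end goal looks roughly like this (a sketch; `InMemoryCache` is just the simplest backend and stands in for whatever cache is actually used):

```python
from langchain.globals import set_llm_cache
from langchain_community.cache import InMemoryCache

set_llm_cache(InMemoryCache())

# Repeated runs only hit the cache once the agent stops streaming the LLM calls.
agent_executor.agent.stream_runnable = False
result = agent_executor.invoke({"input": input_prompt})
```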
### System Info
```sh
$ pip freeze | grep langchain
langchain==0.1.16
langchain-community==0.0.34
langchain-core==0.1.45
langchain-openai==0.1.3
langchain-text-splitters==0.0.1
langchainhub==0.1.15
opentelemetry-instrumentation-langchain==0.14.5
```
---
Discovered the issue with @quovadim
| RunnableMultiActionAgent enable streaming by default even if disabled in the LLM | https://api.github.com/repos/langchain-ai/langchain/issues/20782/comments | 0 | 2024-04-23T09:22:59Z | 2024-07-30T16:07:36Z | https://github.com/langchain-ai/langchain/issues/20782 | 2,258,380,545 | 20,782 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
interpreter_assistant = OpenAIAssistantRunnable.create_assistant(
name="langchain assistant",
instructions="You are a personal math tutor. Write and run code to answer math questions.",
tools=[{"type": "code_interpreter"}],
model="gpt-4-1106-preview",
)
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[24], line 1
----> 1 interpreter_assistant = OpenAIAssistantRunnable.create_assistant(
2 name="langchain assistant",
3 instructions="You are a personal math tutor. Write and run code to answer math questions.",
4 tools=[{"type": "code_interpreter"}],
5 model="gpt-4-1106-preview",
6 )
File [~/python3.11/lib/python3.11/site-packages/langchain/agents/openai_assistant/base.py:247](https://notebook.aratech.cloud/lab/tree/LLM/python3.11/lib/python3.11/site-packages/langchain/agents/openai_assistant/base.py#line=246), in OpenAIAssistantRunnable.create_assistant(cls, name, instructions, tools, model, client, **kwargs)
233 """Create an OpenAI Assistant and instantiate the Runnable.
234
235 Args:
(...)
244 OpenAIAssistantRunnable configured to run using the created assistant.
245 """
246 client = client or _get_openai_client()
--> 247 assistant = client.beta.assistants.create(
248 name=name,
249 instructions=instructions,
250 tools=[_get_assistants_tool(tool) for tool in tools], # type: ignore
251 model=model,
252 file_ids=kwargs.get("file_ids"),
253 )
254 return cls(assistant_id=assistant.id, client=client, **kwargs)
TypeError: Assistants.create() got an unexpected keyword argument 'file_ids'
### Description
Probably the new OpenAI Assistants V2 API no longer accepts the file_ids argument.
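A workaround sketch that sidesteps `create_assistant()` entirely, assuming the assistant is created in the OpenAI dashboard first (the ID below is a placeholder):

```python
from langchain.agents.openai_assistant import OpenAIAssistantRunnable

# "asst_abc123" is a placeholder for an assistant created directly under the v2 API,
# so the create() call with file_ids is never made.
interpreter_assistant = OpenAIAssistantRunnable(assistant_id="asst_abc123", as_agent=False)
output = interpreter_assistant.invoke({"content": "What's 10 - 4 raised to the 2.7"})
```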
### System Info
System Information
------------------
> OS: Linux
> OS Version: #112~20.04.1-Ubuntu SMP Thu Mar 14 14:28:24 UTC 2024
> Python Version: 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.1.44
> langchain: 0.1.16
> langchain_community: 0.0.33
> langsmith: 0.1.48
> langchain_experimental: 0.0.57
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | OpenAI assistant new version seems not compatible | https://api.github.com/repos/langchain-ai/langchain/issues/20780/comments | 3 | 2024-04-23T08:21:07Z | 2024-05-14T16:22:45Z | https://github.com/langchain-ai/langchain/issues/20780 | 2,258,255,673 | 20,780 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
--> 248 result = chain({"question": question_with_region, "chat_history": chat_history})
249 sources = "\n".join(set(map(lambda x: x.metadata["source"], result['source_documents'])))
251 container_sas = self.blob_client.get_container_sas()
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\_api\deprecation.py:148, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
146 warned = True
147 emit_warning()
--> 148 return wrapped(*args, **kwargs)
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:378, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
346 """Execute the chain.
347
348 Args:
(...)
369 `Chain.output_keys`.
370 """
371 config = {
372 "callbacks": callbacks,
373 "tags": tags,
374 "metadata": metadata,
375 "run_name": run_name,
376 }
--> 378 return self.invoke(
379 inputs,
380 cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
381 return_only_outputs=return_only_outputs,
382 include_run_info=include_run_info,
383 )
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:163, in Chain.invoke(self, input, config, **kwargs)
161 except BaseException as e:
162 run_manager.on_chain_error(e)
--> 163 raise e
164 run_manager.on_chain_end(outputs)
166 if include_run_info:
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:153, in Chain.invoke(self, input, config, **kwargs)
150 try:
151 self._validate_inputs(inputs)
152 outputs = (
--> 153 self._call(inputs, run_manager=run_manager)
154 if new_arg_supported
155 else self._call(inputs)
156 )
158 final_outputs: Dict[str, Any] = self.prep_outputs(
159 inputs, outputs, return_only_outputs
160 )
161 except BaseException as e:
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\conversational_retrieval\base.py:155, in BaseConversationalRetrievalChain._call(self, inputs, run_manager)
151 accepts_run_manager = (
152 "run_manager" in inspect.signature(self._get_docs).parameters
153 )
154 if accepts_run_manager:
--> 155 docs = self._get_docs(new_question, inputs, run_manager=_run_manager)
156 else:
157 docs = self._get_docs(new_question, inputs) # type: ignore[call-arg]
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\conversational_retrieval\base.py:317, in ConversationalRetrievalChain._get_docs(self, question, inputs, run_manager)
309 def _get_docs(
310 self,
311 question: str,
(...)
314 run_manager: CallbackManagerForChainRun,
315 ) -> List[Document]:
316 """Get docs."""
--> 317 docs = self.retriever.get_relevant_documents(
318 question, callbacks=run_manager.get_child()
319 )
320 return self._reduce_tokens_below_limit(docs)
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\retrievers.py:321, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
319 except Exception as e:
320 run_manager.on_retriever_error(e)
--> 321 raise e
322 else:
323 run_manager.on_retriever_end(
324 result,
325 )
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\retrievers.py:314, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
312 _kwargs = kwargs if self._expects_other_args else {}
313 if self._new_arg_supported:
--> 314 result = self._get_relevant_documents(
315 query, run_manager=run_manager, **_kwargs
316 )
317 else:
318 result = self._get_relevant_documents(query, **_kwargs)
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\vectorstores.py:696, in VectorStoreRetriever._get_relevant_documents(self, query, run_manager)
692 def _get_relevant_documents(
693 self, query: str, *, run_manager: CallbackManagerForRetrieverRun
694 ) -> List[Document]:
695 if self.search_type == "similarity":
--> 696 docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
697 elif self.search_type == "similarity_score_threshold":
698 docs_and_similarities = (
699 self.vectorstore.similarity_search_with_relevance_scores(
700 query, **self.search_kwargs
701 )
702 )
File C:\codespaces\chatbot\stable-code\portal-api\utilities\azuresearch.py:353, in AzureSearch.similarity_search(self, query, k, **kwargs)
340 def similarity_search(
341 self, query: str, k: int = 4, **kwargs: Any
342 ) -> List[Document]:
343 """
344 Returns the most similar indexed documents to the query text.
345
(...)
351 List[Document]: A list of documents that are most similar to the query text.
352 """
--> 353 docs_and_scores = self.similarity_search_with_score(
354 query, k=k, filters=self.filters)
355 return [doc for doc, _ in docs_and_scores]
File C:\codespaces\chatbot\stable-code\portal-api\utilities\azuresearch.py:372, in AzureSearch.similarity_search_with_score(self, query, k, filters)
360 """Return docs most similar to query.
361
362 Args:
(...)
367 List of Documents most similar to the query and score for each
368 """
369 if self.index_name == "embeddings":
370 results = self.client.search(
371 search_text="",
--> 372 vector=Vector(value=np.array(self.embedding_function(
373 query), dtype=np.float32).tolist(), k=k, fields=FIELDS_CONTENT_VECTOR),
374 select=[f"{FIELDS_TITLE},{FIELDS_CONTENT},{FIELDS_METADATA}"],
375 filter=filters
376 )
377 # Convert results to Document objects
378 docs = [
379 (
380 Document(
(...)
386 for result in results
387 ]
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_community\embeddings\openai.py:697, in OpenAIEmbeddings.embed_query(self, text)
688 def embed_query(self, text: str) -> List[float]:
689 """Call out to OpenAI's embedding endpoint for embedding query text.
690
691 Args:
(...)
695 Embedding for the text.
696 """
--> 697 return self.embed_documents([text])[0]
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_community\embeddings\openai.py:668, in OpenAIEmbeddings.embed_documents(self, texts, chunk_size)
665 # NOTE: to keep things simple, we assume the list may contain texts longer
666 # than the maximum context and use length-safe embedding function.
667 engine = cast(str, self.deployment)
--> 668 return self._get_len_safe_embeddings(texts, engine=engine)
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_community\embeddings\openai.py:494, in OpenAIEmbeddings._get_len_safe_embeddings(self, texts, engine, chunk_size)
492 batched_embeddings: List[List[float]] = []
493 for i in _iter:
--> 494 response = embed_with_retry(
495 self,
496 input=tokens[i : i + _chunk_size],
497 **self._invocation_params,
498 )
499 if not isinstance(response, dict):
500 response = response.dict()
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_community\embeddings\openai.py:116, in embed_with_retry(embeddings, **kwargs)
114 """Use tenacity to retry the embedding call."""
115 if is_openai_v1():
--> 116 return embeddings.client.create(**kwargs)
117 retry_decorator = _create_retry_decorator(embeddings)
119 @retry_decorator
120 def _embed_with_retry(**kwargs: Any) -> Any:
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\resources\embeddings.py:113, in Embeddings.create(self, input, model, dimensions, encoding_format, user, extra_headers, extra_query, extra_body, timeout)
107 embedding.embedding = np.frombuffer( # type: ignore[no-untyped-call]
108 base64.b64decode(data), dtype="float32"
109 ).tolist()
111 return obj
--> 113 return self._post(
114 "/embeddings",
115 body=maybe_transform(params, embedding_create_params.EmbeddingCreateParams),
116 options=make_request_options(
117 extra_headers=extra_headers,
118 extra_query=extra_query,
119 extra_body=extra_body,
120 timeout=timeout,
121 post_parser=parser,
122 ),
123 cast_to=CreateEmbeddingResponse,
124 )
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\_base_client.py:1233, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1219 def post(
1220 self,
1221 path: str,
(...)
1228 stream_cls: type[_StreamT] | None = None,
1229 ) -> ResponseT | _StreamT:
1230 opts = FinalRequestOptions.construct(
1231 method="post", url=path, json_data=body, files=to_httpx_files(files), **options
1232 )
-> 1233 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\_base_client.py:922, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
913 def request(
914 self,
915 cast_to: Type[ResponseT],
(...)
920 stream_cls: type[_StreamT] | None = None,
921 ) -> ResponseT | _StreamT:
--> 922 return self._request(
923 cast_to=cast_to,
924 options=options,
925 stream=stream,
926 stream_cls=stream_cls,
927 remaining_retries=remaining_retries,
928 )
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\_base_client.py:1013, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
1010 err.response.read()
1012 log.debug("Re-raising status error")
-> 1013 raise self._make_status_error_from_response(err.response) from None
1015 return self._process_response(
1016 cast_to=cast_to,
1017 options=options,
(...)
1020 stream_cls=stream_cls,
1021 )
NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}} | Getting the Issue with the New Langchain version | https://api.github.com/repos/langchain-ai/langchain/issues/20778/comments | 2 | 2024-04-23T08:07:47Z | 2024-08-09T15:29:50Z | https://github.com/langchain-ai/langchain/issues/20778 | 2,258,230,169 | 20,778 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain.prompts import PromptTemplate
from langchain_community.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser

# llama3_prompt_template, local_llm and retriever are defined earlier in the
# notebook (see the linked LangGraph demo).

system_message = '''You are an assistant for question-answering tasks.
Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know.
Use three sentences maximum and keep the answer concise.'''

user_message = '''Question: {question}
Context: {context}
Answer:'''

prompt = PromptTemplate(
    template=llama3_prompt_template.format(system_message=system_message, user_message=user_message),
    input_variables=['question', 'context']
)

llm = ChatOllama(model=local_llm, temperature=0)

def format_docs(docs):
    return "\n\n".join([doc.page_content for doc in docs])

rag_chain = prompt | llm | StrOutputParser()

question = 'agent memory'
docs = retriever.invoke(question)
generation = rag_chain.invoke({'question': question, 'context': docs})
print(generation)
### Error Message and Stack Trace (if applicable)
ValueError('Ollama call failed with status code 400. Details: {"error":"unexpected server status: 1"}')Traceback (most recent call last):
File "/home/darthcoder/miniconda3/envs/LangChain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 411, in generate
self._generate_with_cache(
File "/home/darthcoder/miniconda3/envs/LangChain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 632, in _generate_with_cache
result = self._generate(
File "/home/darthcoder/miniconda3/envs/LangChain/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py", line 259, in _generate
final_chunk = self._chat_stream_with_aggregation(
File "/home/darthcoder/miniconda3/envs/LangChain/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py", line 190, in _chat_stream_with_aggregation
for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
File "/home/darthcoder/miniconda3/envs/LangChain/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py", line 162, in _create_chat_stream
yield from self._create_stream(
File "/home/darthcoder/miniconda3/envs/LangChain/lib/python3.10/site-packages/langchain_community/llms/ollama.py", line 251, in _create_stream
raise ValueError(
ValueError: Ollama call failed with status code 400. Details: {"error":"unexpected server status: 1"}
### Description
I am implementing this demo - https://github.com/langchain-ai/langgraph/blob/main/examples/rag/langgraph_rag_agent_llama3_local.ipynb - from LangChain's YouTube video.
The same generation cell sometimes runs fine and sometimes fails with the Error 400 above.
This happens for other code cells too, at random.
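While debugging, the intermittent failures can be papered over with the built-in retry helper — a sketch, with arbitrary parameters:

```python
# Retry the whole chain a few times when the Ollama server returns a transient error.
rag_chain_with_retry = (prompt | llm | StrOutputParser()).with_retry(
    stop_after_attempt=3,
    wait_exponential_jitter=True,
)
generation = rag_chain_with_retry.invoke({'question': question, 'context': docs})
```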
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Jan 11 04:09:03 UTC 2024
> Python Version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.1.45
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.49
> langchain_chroma: 0.1.0
> langchain_cli: 0.0.21
> langchain_experimental: 0.0.53
> langchain_groq: 0.1.2
> langchain_nomic: 0.0.2
> langchain_pinecone: 0.0.3
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
> langgraph: 0.0.38
> langserve: 0.1.0 | Error 400 from Ollama while generation at random cell runs | https://api.github.com/repos/langchain-ai/langchain/issues/20773/comments | 9 | 2024-04-23T06:33:10Z | 2024-05-17T04:21:54Z | https://github.com/langchain-ai/langchain/issues/20773 | 2,258,061,163 | 20,773 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
PROMPT_TEMPLATE = """Respond to the human as helpfully and accurately as possible. You have access to the following tools:
{tools}
Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).
Valid "action" values: "Final Answer" or {tool_names}
Provide only ONE action per $JSON_BLOB, as shown:
```
{{
"action": $TOOL_NAME,
"action_input": $INPUT
}}
```
Follow this format:
Question: input question to answer
Thought: consider previous and subsequent steps
Action:
```
$JSON_BLOB
```
Observation: action result
... (repeat Thought/Action/Observation N times)
Thought: I know what to respond
Action:
```
{{
"action": "Final Answer",
"action_input": "Final response to human"
}}
Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation'''
human = '''{input}
'''
"""
def create_agent(
llm: BaseLanguageModel,
tools: list,
output_parser: WebCrawleRegexParser,
tools_renderer: ToolsRenderer = render_text_description_and_args,
**kwargs
):
prompt_template = PromptTemplate.from_template(
PROMPT_TEMPLATE
)
missing_vars = {"tools", "tool_names", "agent_scratchpad"}.difference(
prompt_template.input_variables
)
if missing_vars:
raise ValueError(f"Prompt missing required variables: {missing_vars}")
from langchain.agents.format_scratchpad import format_log_to_str
prompt = prompt_template.format(
tools=tools_renderer(list(tools)),
tool_names=", ".join([t.name for t in tools]),
)
print(f"prompt={prompt}")
stop = ["\nObservation"]
llm_with_stop = llm.bind(stop=stop)
agent = (
prompt_template|
llm_with_stop
)
return agent
agent = create_agent(llm=llm, tools=tools, output_parser=None)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
ret = agent_executor.invoke({"input": "hi"})
### Error Message and Stack Trace (if applicable)
File "D:\ProgramData\anaconda3\lib\site-packages\langchain\chains\base.py", line 163, in invoke
raise e
File "D:\ProgramData\anaconda3\lib\site-packages\langchain\chains\base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "D:\ProgramData\anaconda3\lib\site-packages\langchain\agents\agent.py", line 1432, in _call
next_step_output = self._take_next_step(
File "D:\ProgramData\anaconda3\lib\site-packages\langchain\agents\agent.py", line 1138, in _take_next_step
[
File "D:\ProgramData\anaconda3\lib\site-packages\langchain\agents\agent.py", line 1138, in <listcomp>
[
File "D:\ProgramData\anaconda3\lib\site-packages\langchain\agents\agent.py", line 1166, in _iter_next_step
output = self.agent.plan(
File "D:\ProgramData\anaconda3\lib\site-packages\langchain\agents\agent.py", line 397, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 2875, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 2862, in transform
yield from self._transform_stream_with_config(
File "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 1880, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 2826, in _transform
for output in final_pipeline:
File "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 4722, in transform
yield from self.bound.transform(
File "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 1283, in transform
for chunk in input:
File "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 1300, in transform
yield from self.stream(final, config, **kwargs)
File "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 808, in stream
yield self.invoke(input, config, **kwargs)
File "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\prompts\base.py", line 128, in invoke
return self._call_with_config(
File "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 1625, in _call_with_config
context.run(
File "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\runnables\config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\prompts\base.py", line 111, in _format_prompt_with_error_handling
_inner_input = self._validate_input(inner_input)
File "D:\ProgramData\anaconda3\lib\site-packages\langchain_core\prompts\base.py", line 103, in _validate_input
raise KeyError(
KeyError: "Input to PromptTemplate is missing variables {'tools', 'tool_names'}. Expected: ['input', 'tool_names', 'tools'] Received: ['input', 'intermediate_steps']"
### Description
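From the traceback, the chain pipes the unformatted `prompt_template` into the LLM, so `tools` and `tool_names` are still expected at invoke time even though they were only used to build the throwaway `prompt` string. A minimal sketch of what I assume the intended fix looks like (reusing the names from the snippet above) is to pre-fill them with `partial()`:

```python
# Assumed fix (sketch): pre-fill tools/tool_names once via partial(), so the
# prompt only needs "input" (plus whatever the agent adds) at run time.
prompt = prompt_template.partial(
    tools=tools_renderer(list(tools)),
    tool_names=", ".join(t.name for t in tools),
)
agent = prompt | llm_with_stop
```

Even so, it would help a lot if the error pointed toward a missing `partial()` call instead of only listing the variable names.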
### System Info
 | LangChain's Prompt is like a holy shit design. So many errors when I used it in AgentExecutor. Could you improve it? | https://api.github.com/repos/langchain-ai/langchain/issues/20769/comments | 3 | 2024-04-23T04:14:52Z | 2024-07-12T03:09:30Z | https://github.com/langchain-ai/langchain/issues/20769 | 2,257,904,037 | 20,769
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code endlessly prints multiple poems and revisions of poems about fish:
```python
from langchain_community.chat_models.ollama import ChatOllama
llm = ChatOllama(model='llama3:70b')
for chunk in llm.stream('Write a poem about fish'):
print(chunk.content)
```
This can be solved by adding an explicit stop condition:
```python
llm = ChatOllama(model='llama3:70b', stop=["<|eot_id|>"])
```
Can you please update the `Ollama` and `ChatOllama` classes to include this stop token for Llama 3 by default?
### Error Message and Stack Trace (if applicable)
_No response_
### Description
See example code. Generations under Llama3 never terminate
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:49 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6020
> Python Version: 3.11.5 (main, Sep 11 2023, 08:31:25) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.45
> langchain: 0.1.16
> langchain_community: 0.0.32
> langsmith: 0.1.43
> langchain_google_vertexai: 0.0.5
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.14
> langserve: 0.0.34
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | Ollama running Llama3 does not terminate. | https://api.github.com/repos/langchain-ai/langchain/issues/20765/comments | 8 | 2024-04-22T23:31:08Z | 2024-08-06T16:08:27Z | https://github.com/langchain-ai/langchain/issues/20765 | 2,257,628,343 | 20,765 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.llms import Replicate
model = Replicate(
model="meta/meta-llama-3-70b-instruct:" + version,
model_kwargs={"temperature": 0.2, "max_length": 1024, "top_p": 1},
)
```
This contrasts with calling Replicate's API directly in Python, which works without specifying a version:
```python
import replicate
replicate.run(
"meta/meta-llama-3-70b-instruct",
input={
"top_p": 0.9,
"prompt": prompt,
"max_tokens": 512,
"min_tokens": 0,
"temperature": 0.6,
"prompt_template": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
"presence_penalty": 1.15,
"frequency_penalty": 0.2
}
)
```
### Error Message and Stack Trace (if applicable)
replicate.exceptions.ReplicateError: ReplicateError Details:
title: Invalid version or not permitted
status: 422
detail: The specified version does not exist (or perhaps you don't have permission to use it?)
### Description
I am trying to use LangChain to run Llama 3. However, no version number is required when calling Replicate's API directly, and there is no obvious way on the Replicate website to find out which specific version of a model is currently being served.
To identify the version number, I sent a GET request to `https://api.replicate.com/v1/models/meta/meta-llama-3-70b-instruct` and read the `latest_version` field from the response. Feeding this value into the `version` variable still produces the error above.
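For reference, a sketch of that lookup (the auth header scheme and the `latest_version` response shape are assumptions based on Replicate's public API docs):

```python
import os

import requests

resp = requests.get(
    "https://api.replicate.com/v1/models/meta/meta-llama-3-70b-instruct",
    headers={"Authorization": f"Token {os.environ['REPLICATE_API_TOKEN']}"},
)
version = resp.json()["latest_version"]["id"]  # assumed response shape
print(version)
```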
Two questions:
1. Am I doing something wrong here when invoking the Replicate model using Langchain?
2. Can we get rid of the version number requirement, when Replicate's own API does not require a version number in most scenarios? It could be an optional parameter perhaps.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:25 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6030
> Python Version: 3.9.18 | packaged by conda-forge | (main, Dec 23 2023, 16:35:41)
[Clang 16.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.42
> langchain: 0.1.16
> langchain_community: 0.0.32
> langsmith: 0.1.46
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Replicate Version Numbers and issues with running Llama 3 using Langchain's Replicate class | https://api.github.com/repos/langchain-ai/langchain/issues/20757/comments | 7 | 2024-04-22T20:49:46Z | 2024-07-31T13:50:12Z | https://github.com/langchain-ai/langchain/issues/20757 | 2,257,439,971 | 20,757 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# main.py
import os
from typing import Any
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import ChatResult, LLMResult
from langchain_core.outputs.chat_generation import ChatGeneration
from langchain_openai import ChatOpenAI
# bug in langchain.schema.ChatResult
class CorrectChatResult(ChatResult):
generations: list[list[ChatGeneration]] # type: ignore
class WrongTypedAgentCallbackHandler(BaseCallbackHandler):
def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:
print(response.generations[0][0].message)
class CorrectAgentCallbackHandler(BaseCallbackHandler):
def on_llm_end(self, response: CorrectChatResult, **kwargs: Any) -> Any:
print(response.generations[0][0].message)
llm = ChatOpenAI(
api_key=os.getenv("OPENAI_API_KEY", ""),
max_tokens=5,
callbacks=[CorrectAgentCallbackHandler(), WrongTypedAgentCallbackHandler()],
)
llm.invoke("is mypy helpfull tool?")
```
```sh
mypy main.py
```
### Error Message and Stack Trace (if applicable)
main.py:17: error: "Generation" has no attribute "message" [attr-defined]
main.py:21: error: Signature of "on_llm_end" incompatible with supertype "LLMManagerMixin" [override]
main.py:21: note: Superclass:
main.py:21: note: def on_llm_end(self, response: LLMResult, *, run_id: UUID, parent_run_id: UUID | None = ..., **kwargs: Any) -> Any
main.py:21: note: Subclass:
main.py:21: note: def on_llm_end(self, response: CorrectChatResult, **kwargs: Any) -> Any
main.py:26: error: Argument "api_key" to "ChatOpenAI" has incompatible type "str | SecretStr | None"; expected "SecretStr | None" [arg-type]
main.py:26: error: Argument 2 to "getenv" has incompatible type "str"; expected "SecretStr | None" [arg-type]
### Description
If I change https://github.com/langchain-ai/langchain/blob/c010ec8b71771dc3f54dc148475c90070b6a7c0b/libs/core/langchain_core/outputs/chat_result.py#L10
to the correct type, `List[List[ChatGeneration]]`, the error is resolved.
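A possible workaround (sketch) until the annotation is fixed: keep the supertype signature, which mypy accepts, and narrow to `ChatGeneration` at runtime.

```python
from typing import Any

from langchain.callbacks.base import BaseCallbackHandler
from langchain_core.outputs import ChatGeneration, LLMResult


class NarrowingCallbackHandler(BaseCallbackHandler):
    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:
        gen = response.generations[0][0]
        # For chat models this is a ChatGeneration, so the narrowing succeeds.
        if isinstance(gen, ChatGeneration):
            print(gen.message)
```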
### System Info
*langchain==0.1.16
langchain-community==0.0.34
langchain-core==0.1.45
langchain-openai==0.0.8
langchain-text-splitters==0.0.1*
platform macos
python 3.11.7
| LangChain.outputs.chat_result.py has wrong type hint | https://api.github.com/repos/langchain-ai/langchain/issues/20744/comments | 0 | 2024-04-22T15:50:43Z | 2024-07-29T16:08:32Z | https://github.com/langchain-ai/langchain/issues/20744 | 2,256,899,035 | 20,744 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_openai import ChatOpenAI  # or langchain_community.chat_models
from langchain.memory import ConversationBufferMemory, ReadOnlySharedMemory
# LLMCypherGraphChain, LLMKeywordGraphChain, LLMNeo4jVectorChain and
# osteosarcoma_graph are project-specific Neo4j knowledge-graph helpers.

llm = ChatOpenAI(
    model_name="ChatGLM3",
    # model_path="/dssg/home/acct-medhyq/medhyq-zll/chatGLM3/chatglm3-6b",
    openai_api_base="http://localhost:8000/v1",
    openai_api_key="EMPTY",
    streaming=True,
)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
readonlymemory = ReadOnlySharedMemory(memory=memory)
cypher_tool = LLMCypherGraphChain(
    llm=llm, graph=osteosarcoma_graph, verbose=True, memory=readonlymemory)
fulltext_tool = LLMKeywordGraphChain(
    llm=llm, graph=osteosarcoma_graph, verbose=True)
vector_tool = LLMNeo4jVectorChain(
    llm=llm, verbose=True, graph=osteosarcoma_graph
)
### Error Message and Stack Trace (if applicable)
2024-04-22 21:47:15,296 - ERROR - Error: variable agent_scratchpad should be a list of base messages, got Could not parse LLM output:
**Action:** Cypher search
**Action Input:** "What genes promote the growth of osteosarcoma?"
```json
tool_call(action='cypher', action_input='What genes promote the growth of osteosarcoma?')
```
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
I apologize for the inconvenience. To answer your question, the genes that promote the growth of osteosarcoma are not fully understood. However, some studies have identified certain genetic mutations that may contribute to the development and progression of osteosarcoma. These include mutations in the CDKN4A and TP53 genes, as well as others. It is important to note that while these genetic changes can increase the risk of developing osteosarcoma, they do not guarantee the disease will occur. Other factors, such as environmental factors and lifestyle choices, may also play a role in the development of osteosarcoma.
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
I apologize for the confusion earlier. To answer your question, there isn't a specific tool that can determine the response to your question. The response to your question is a complex issue involving genetics and cancer research. A team of experts in the field would need to conduct extensive research and experiments to come to a conclusion.
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
I'm sorry, but as an AI language model, I don't have access to external information and cannot perform actions to look up information. The best course of action would be for you to consult with a medical professional or academic expert in the field of osteosarcoma. They will have access to current research and can provide you with the most accurate and up-to-date information.
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
I apologize for any confusion. To clarify, there is no one specific tool that can provide a direct answer to your question. However, I can offer some general information on the topic. Osteosarcoma is a type of cancer that affects bones and can arise from several different genetic mutations. Some of these genetic mutations, such as those in the CDKN4A and TP53 genes, have been associated with an increased risk of developing osteosarcoma. Additionally, certain lifestyle factors, such as smoking and excessive alcohol consumption, may also contribute to the development of osteosarcoma. It is important to remember, however, that genetics alone do not determine the development of osteosarcoma and that other factors may also play a role. If you have any further questions, I recommend consulting with a healthcare professional or academic expert in the field of osteosarcoma.
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
I'm sorry, but I cannot perform a tool call as I am not connected to a tool. To answer your question, the response depends on the specific genetic mutations involved in osteosarcoma and the underlying causes of the disease. While there are several known genetic mutations that increase the risk of osteosarcoma, the precise mechanisms by which these mutations contribute to the disease are not yet fully understood. Additionally, there are many other factors that may contribute to the development of osteosarcoma, including environmental and lifestyle factors. Therefore, it is recommended that individuals seeking information on osteosarcoma consult with a medical professional or academic expert for the most accurate and up-to-date information.
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
I'm sorry, but I cannot perform a tool call as I am not connected to a tool. To answer your question, the response to osteosarcoma growth promotion genes varies depending on the specific genetic mutations involved. Some genetic mutations like CDKN4A and TP53 gene mutations can lead to an increased risk of osteosarcoma. However, the exact mechanism by which these mutations contribute to osteosarcoma growth and development is still under investigation. It is recommended to consult with a medical professional or academic expert for the most accurate and up-to-date information on this topic.
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
I'm sorry, but as an AI language model, I don't have access to current research on specific cases and cannot provide a personalized response. The best way to get an accurate answer would be to consult with a medical professional or academic expert who specializes in the field of osteosarcoma. They can provide you with the most up-to-date information and give you personalized advice based on your specific situation.
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
I'm sorry, but I am unable to perform a tool call to retrieve information on the genes that promote osteosarcoma growth. Please try a different question or seek help from a professional to obtain a more accurate answer.
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
I'm sorry, but I am unable to provide a response to your question as I do not have access to any relevant tools or information. Can you please try again with a different question or seek assistance from a different source?
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
I'm sorry, but as an AI language model, I don't have access to current research on specific cases and cannot provide a personalized response. The best way to get an accurate answer would be to consult with a medical professional or academic expert who specializes in the field of osteosarcoma. They can provide you with the most up-to-date information and give you personalized advice based on your specific situation.
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
I'm sorry, but as an AI language model, I don't have access to current research on specific cases and cannot provide a personalized response. The best way to get an accurate answer would be to consult with a medical professional or academic expert who specializes in the field of osteosarcoma. They can provide you with the most up-to-date information and give you personalized advice based on your specific situation.
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
I'm sorry, but I cannot provide a response to your question as I am not connected to any relevant tools or information. Please try again with a different question or seek assistance from a different source.
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
I'm sorry, but as an AI language model, I don't have access to current research on specific cases and cannot provide a personalized response. The best way to get an accurate answer would be to consult with a medical professional or academic expert who specializes in the field of osteosarcoma. They can provide you with the most up-to-date information and give you personalized advice based on your specific situation.
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
I'm sorry, but as an AI language model, I don't have access to current research on specific cases and cannot provide a personalized response. The best way to get an accurate answer would be to consult with a medical professional or academic expert who specializes in the field of osteosarcoma. They can provide you with the most up-to-date information and give you personalized advice based on your specific situation.
Observation: Invalid or incomplete response
Thought:
### Description
When I ask a question, he keeps repeating
Observation: Invalid or incomplete response
Thought:Could not parse LLM output:
### System Info
linux
python 3.10
langchain latest
 | Could not parse LLM output | https://api.github.com/repos/langchain-ai/langchain/issues/20743/comments | 0 | 2024-04-22T15:39:40Z | 2024-07-29T16:08:27Z | https://github.com/langchain-ai/langchain/issues/20743 | 2,256,871,777 | 20,743
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.llms import Ollama
llm = Ollama(model="llama3")
llm.invoke("Tell me a joke")
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[6], line 1
----> 1 llm.invoke("Tell me a joke")

File /mnt/nfs/wangyu/Miniconda/envs/RAG/lib/python3.10/site-packages/langchain_core/language_models/llms.py:276, in BaseLLM.invoke(self, input, config, stop, **kwargs)
    266 def invoke(
    267     self,
    268     input: LanguageModelInput,
   (...)
    272     **kwargs: Any,
    273 ) -> str:
    274     config = ensure_config(config)
    275     return (
--> 276     self.generate_prompt(
    277         [self._convert_input(input)],
    278         stop=stop,
    279         callbacks=config.get("callbacks"),
    280         tags=config.get("tags"),
    281         metadata=config.get("metadata"),
    282         run_name=config.get("run_name"),
    283         run_id=config.pop("run_id", None),
    284         **kwargs,
    285     )
    286     .generations[0][0]
    287     .text
    288 )

File /mnt/nfs/wangyu/Miniconda/envs/RAG/lib/python3.10/site-packages/langchain_core/language_models/llms.py:633, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    625 def generate_prompt(
    626     self,
    627     prompts: List[PromptValue],
   (...)
    630     **kwargs: Any,
    631 ) -> LLMResult:
    632     prompt_strings = [p.to_string() for p in prompts]
--> 633 return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)

File /mnt/nfs/wangyu/Miniconda/envs/RAG/lib/python3.10/site-packages/langchain_core/language_models/llms.py:803, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    788 if (self.cache is None and get_llm_cache() is None) or self.cache is False:
    789     run_managers = [
    790         callback_manager.on_llm_start(
    791             dumpd(self),
   (...)
    801         )
    802     ]
--> 803 output = self._generate_helper(
    804     prompts, stop, run_managers, bool(new_arg_supported), **kwargs
    805 )
    806 return output
    807 if len(missing_prompts) > 0:

File /mnt/nfs/wangyu/Miniconda/envs/RAG/lib/python3.10/site-packages/langchain_core/language_models/llms.py:670, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    668 for run_manager in run_managers:
    669     run_manager.on_llm_error(e, response=LLMResult(generations=[]))
--> 670 raise e
    671 flattened_outputs = output.flatten()
    672 for manager, flattened_output in zip(run_managers, flattened_outputs):

File /mnt/nfs/wangyu/Miniconda/envs/RAG/lib/python3.10/site-packages/langchain_core/language_models/llms.py:657, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    647 def _generate_helper(
    648     self,
    649     prompts: List[str],
   (...)
    653     **kwargs: Any,
    654 ) -> LLMResult:
    655     try:
    656         output = (
--> 657     self._generate(
    658         prompts,
    659         stop=stop,
    660         # TODO: support multiple run managers
    661         run_manager=run_managers[0] if run_managers else None,
    662         **kwargs,
    663     )
    664     if new_arg_supported
    665     else self._generate(prompts, stop=stop)
    666 )
    667 except BaseException as e:
    668     for run_manager in run_managers:

File /mnt/nfs/wangyu/Miniconda/envs/RAG/lib/python3.10/site-packages/langchain_community/llms/ollama.py:417, in Ollama._generate(self, prompts, stop, images, run_manager, **kwargs)
    415 generations = []
    416 for prompt in prompts:
--> 417 final_chunk = super()._stream_with_aggregation(
    418     prompt,
    419     stop=stop,
    420     images=images,
    421     run_manager=run_manager,
    422     verbose=self.verbose,
    423     **kwargs,
    424 )
    425 generations.append([final_chunk])
    426 return LLMResult(generations=generations)

File /mnt/nfs/wangyu/Miniconda/envs/RAG/lib/python3.10/site-packages/langchain_community/llms/ollama.py:326, in _OllamaCommon._stream_with_aggregation(self, prompt, stop, run_manager, verbose, **kwargs)
    317 def _stream_with_aggregation(
    318     self,
    319     prompt: str,
   (...)
    323     **kwargs: Any,
    324 ) -> GenerationChunk:
    325     final_chunk: Optional[GenerationChunk] = None
--> 326 for stream_resp in self._create_generate_stream(prompt, stop, **kwargs):
    327     if stream_resp:
    328         chunk = _stream_response_to_generation_chunk(stream_resp)

File /mnt/nfs/wangyu/Miniconda/envs/RAG/lib/python3.10/site-packages/langchain_community/llms/ollama.py:172, in _OllamaCommon._create_generate_stream(self, prompt, stop, images, **kwargs)
    164 def _create_generate_stream(
    165     self,
    166     prompt: str,
   (...)
    169     **kwargs: Any,
    170 ) -> Iterator[str]:
    171     payload = {"prompt": prompt, "images": images}
--> 172 yield from self._create_stream(
    173     payload=payload,
    174     stop=stop,
    175     api_url=f"{self.base_url}/api/generate",
    176     **kwargs,
    177 )

File /mnt/nfs/wangyu/Miniconda/envs/RAG/lib/python3.10/site-packages/langchain_community/llms/ollama.py:251, in _OllamaCommon._create_stream(self, api_url, payload, stop, **kwargs)
    249 else:
    250     optional_detail = response.text
--> 251 raise ValueError(
    252     f"Ollama call failed with status code {response.status_code}."
    253     f" Details: {optional_detail}"
    254 )
    255 return response.iter_lines(decode_unicode=True)
ValueError: Ollama call failed with status code 502. Details:
### Description
I ran the example code above, but it failed with the error shown.
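A 502 usually comes from a proxy or gateway between the client and Ollama rather than from Ollama itself. A sketch that points `base_url` at the server directly (the value shown is the default) to rule out proxy settings:

```python
from langchain_community.llms import Ollama

# Point directly at the local Ollama server (default shown) and make sure
# HTTP(S)_PROXY settings are not routing the request through a gateway.
llm = Ollama(model="llama3", base_url="http://localhost:11434")
print(llm.invoke("Tell me a joke"))
```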
### System Info
System Information
------------------
> OS: Linux
> OS Version: #21~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Feb 9 13:32:52 UTC 2
> Python Version: 3.10.14 (main, Mar 21 2024, 16:24:04) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.1.45
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.49
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| ValueError: Ollama call failed with status code 502. Details | https://api.github.com/repos/langchain-ai/langchain/issues/20742/comments | 9 | 2024-04-22T15:00:00Z | 2024-06-30T15:38:14Z | https://github.com/langchain-ai/langchain/issues/20742 | 2,256,772,479 | 20,742 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import base64
import json
import os
import vertexai
from google.oauth2.service_account import Credentials
from langchain_core.messages.human import HumanMessage
from langchain_google_vertexai import ChatVertexAI
llm_model = ChatVertexAI(
model_name="gemini-1.5-pro-preview-0409",
convert_system_message_to_human=True,
temperature=0.7,
)
with open("/Users/david/Downloads/1713170355070_sample_1.mp4", "rb") as f:
video_b64 = base64.b64encode(f.read()).decode("utf-8")
google_credentials = Credentials.from_service_account_info(
json.loads(os.getenv("GCP_LIC"), strict=False)
)
vertexai.init(
project=os.getenv("GCP_PROJECT_ID"),
location=os.getenv("GCP_REGION"),
credentials=google_credentials,
)
res = llm_model.invoke(
[
HumanMessage(
content=[
{"type": "text", "text": "Summary of video"},
{"type": "media", "mimeType": "video/mp4", "data": video_b64},
]
),
]
)
print(res)
```
JS Code Example: https://js.langchain.com/docs/use_cases/media
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/Users/david/Documents/test2/langchain_error.py", line 30, in <module>
res = llm_model.invoke(
File "/Users/david/Documents/test2/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 158, in invoke
self.generate_prompt(
File "/Users/david/Documents/test2/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 560, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/Users/david/Documents/test2/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 421, in generate
raise e
File "/Users/david/Documents/test2/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 411, in generate
self._generate_with_cache(
File "/Users/david/Documents/test2/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 632, in _generate_with_cache
result = self._generate(
File "/Users/david/Documents/test2/.venv/lib/python3.10/site-packages/langchain_google_vertexai/chat_models.py", line 497, in _generate
system_instruction, history_gemini = _parse_chat_history_gemini(
File "/Users/david/Documents/test2/.venv/lib/python3.10/site-packages/langchain_google_vertexai/chat_models.py", line 215, in _parse_chat_history_gemini
parts = _convert_to_parts(message)
File "/Users/david/Documents/test2/.venv/lib/python3.10/site-packages/langchain_google_vertexai/chat_models.py", line 166, in _convert_to_parts
return [_convert_to_prompt(part) for part in raw_content]
File "/Users/david/Documents/test2/.venv/lib/python3.10/site-packages/langchain_google_vertexai/chat_models.py", line 166, in <listcomp>
return [_convert_to_prompt(part) for part in raw_content]
File "/Users/david/Documents/test2/.venv/lib/python3.10/site-packages/langchain_google_vertexai/chat_models.py", line 160, in _convert_to_prompt
raise ValueError("Only text and image_url types are supported!")
ValueError: Only text and image_url types are supported!
```
### Description
- I would expect this to work, since this sample code is based on the JS example linked above.
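A workaround sketch that bypasses the wrapper and calls the Vertex AI SDK directly (assumes a recent `google-cloud-aiplatform`; reuses `video_b64` from the snippet above):

```python
import base64

from vertexai.generative_models import GenerativeModel, Part

model = GenerativeModel("gemini-1.5-pro-preview-0409")
video_part = Part.from_data(data=base64.b64decode(video_b64), mime_type="video/mp4")
resp = model.generate_content([video_part, "Summary of video"])
print(resp.text)
```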
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:31:00 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6020
> Python Version: 3.10.0 (default, Nov 11 2023, 18:46:15) [Clang 15.0.0 (clang-1500.0.40.1)]
Package Information
-------------------
> langchain_core: 0.1.45
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.49
> langchain_anthropic: 0.1.11
> langchain_error: Installed. No version info available.
> langchain_google_vertexai: 1.0.1
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| Vertex AI Gemini video support | https://api.github.com/repos/langchain-ai/langchain/issues/20738/comments | 4 | 2024-04-22T12:42:47Z | 2024-05-21T16:54:12Z | https://github.com/langchain-ai/langchain/issues/20738 | 2,256,441,373 | 20,738 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def get_history_chain(llm) -> RunnableWithMessageHistory:
chain = history_prompt | llm | debug_in_chain
return RunnableWithMessageHistory(
chain,
get_session_history=lambda session_id: RedisChatMessageHistory(session_id, settings.REDIS_URL),
input_messages_key="question",
history_messages_key="history",
)
```
where the llm object is an instance of `<class 'langchain_mistralai.chat_models.ChatMistralAI'>`
The object passed to the llm in the chain at runtime is an instance of `<class 'langchain_core.prompt_values.ChatPromptValue'>`
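For completeness, a minimal sketch of how `history_prompt` is assumed to be defined (a chat prompt whose placeholder matches `history_messages_key`); `debug_in_chain` is presumably a pass-through debugging runnable:

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# Assumed shape of history_prompt: the "history" placeholder must match
# history_messages_key, and "question" must match input_messages_key.
history_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{question}"),
    ]
)
```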
### Error Message and Stack Trace (if applicable)
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/responses.py", line 265, in __call__
await wrap(partial(self.listen_for_disconnect, receive))
File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/responses.py", line 261, in wrap
await func()
File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/responses.py", line 238, in listen_for_disconnect
message = await receive()
File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 535, in receive
await self.message_event.wait()
File "/usr/lib/python3.10/asyncio/locks.py", line 214, in wait
await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 741aaee368c0
During handling of the above exception, another exception occurred:
+ Exception Group Traceback (most recent call last):
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 407, in run_asgi
| result = await app( # type: ignore[func-returns-value]
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
| return await self.app(scope, receive, send)
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
| await super().__call__(scope, receive, send)
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/applications.py", line 123, in __call__
| await self.middleware_stack(scope, receive, send)
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in __call__
| raise exc
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
| await self.app(scope, receive, _send)
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/middleware/cors.py", line 85, in __call__
| await self.app(scope, receive, send)
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
| await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
| raise exc
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
| await app(scope, receive, sender)
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/routing.py", line 756, in __call__
| await self.middleware_stack(scope, receive, send)
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/routing.py", line 776, in app
| await route.handle(scope, receive, send)
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/routing.py", line 297, in handle
| await self.app(scope, receive, send)
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/routing.py", line 77, in app
| await wrap_app_handling_exceptions(app, request)(scope, receive, send)
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
| raise exc
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
| await app(scope, receive, sender)
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/routing.py", line 75, in app
| await response(scope, receive, send)
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/responses.py", line 258, in __call__
| async with anyio.create_task_group() as task_group:
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 678, in __aexit__
| raise BaseExceptionGroup(
| exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/responses.py", line 261, in wrap
| await func()
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/starlette/responses.py", line 250, in stream_response
| async for chunk in self.body_iterator:
| File "/home/jules/workspaces/ai_projects/intern-assistant/assistant_api.py", line 41, in handle_streaming
| # print(chunk)
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4704, in astream
| async for item in self.bound.astream(
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4704, in astream
| async for item in self.bound.astream(
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2900, in astream
| async for chunk in self.atransform(input_aiter(), config, **kwargs):
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2883, in atransform
| async for chunk in self._atransform_stream_with_config(
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1984, in _atransform_stream_with_config
| chunk = cast(Output, await py_anext(iterator))
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2853, in _atransform
| async for output in final_pipeline:
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4740, in atransform
| async for item in self.bound.atransform(
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2883, in atransform
| async for chunk in self._atransform_stream_with_config(
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1984, in _atransform_stream_with_config
| chunk = cast(Output, await py_anext(iterator))
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2853, in _atransform
| async for output in final_pipeline:
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1333, in atransform
| async for output in self.astream(final, config, **kwargs):
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 319, in astream
| raise e
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 297, in astream
| async for chunk in self._astream(
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/langchain_mistralai/chat_models.py", line 455, in _astream
| async for chunk in await acompletion_with_retry(
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/langchain_mistralai/chat_models.py", line 124, in _aiter_sse
| async for event in event_source.aiter_sse():
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/httpx_sse/_api.py", line 37, in aiter_sse
| self._check_content_type()
| File "/home/jules/workspaces/ai_projects/intern-assistant/dev_env/lib/python3.10/site-packages/httpx_sse/_api.py", line 18, in _check_content_type
| raise SSEError(
| httpx_sse._exceptions.SSEError: Expected response header Content-Type to contain 'text/event-stream', got 'application/json'
+------------------------------------
### Description
The exception occurs when I update from langchain-mistralai==0.1.1 to langchain-mistralai==0.1.2.
I have debugged the chain, and the error appears only when the LLM is called within the chain.
The SSE error comes from the Mistral AI client while it receives packets from the API, so I really think this is a bug in the library.
If it isn't, I couldn't find any documentation about these changes.
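For reference, a stripped-down sketch of the kind of streaming call that goes through the failing SSE path (the model name is an assumption; the real application streams through a larger chain):

```python
import asyncio

from langchain_mistralai import ChatMistralAI

llm = ChatMistralAI(model="mistral-small-latest")  # assumed model name

async def main() -> None:
    # astream goes through the SSE client that raises the error shown above
    async for chunk in llm.astream("Hello"):
        print(chunk.content, end="", flush=True)

asyncio.run(main())
```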
### System Info
# Working version:
`aiohttp==3.9.3`
`aiosignal==1.3.1`
annotated-types==0.6.0
`anyio==4.3.0`
async-timeout==4.0.3
attrs==23.2.0
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
dataclasses-json==0.6.4
exceptiongroup==1.2.0
faiss-cpu==1.8.0
fastapi==0.110.1
filelock==3.13.4
frozenlist==1.4.1
fsspec==2024.3.1
greenlet==3.0.3
grpcio==1.62.1
grpcio-tools==1.62.1
h11==0.14.0
h2==4.1.0
hpack==4.0.0
`httpcore==1.0.5`
`httpx==0.25.2`
`httpx-sse==0.4.0`
huggingface-hub==0.22.2
hyperframe==6.0.1
idna==3.6
Jinja2==3.1.3
joblib==1.4.0
jsonpatch==1.33
jsonpointer==2.4
`langchain==0.1.15`
`langchain-community==0.0.32`
`langchain-core==0.1.41`
`langchain-mistralai==0.1.1`
`langchain-text-splitters==0.0.1`
langsmith==0.1.43
MarkupSafe==2.1.5
marshmallow==3.21.1
mistralai==0.1.8
mpmath==1.3.0
multidict==6.0.5
mypy-extensions==1.0.0
networkx==3.3
numpy==1.26.4
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.19.3
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.1.105
orjson==3.10.0
packaging==23.2
pandas==2.2.1
pillow==10.3.0
portalocker==2.8.2
protobuf==4.25.3
pyarrow==15.0.2
pydantic==2.6.4
pydantic_core==2.16.3
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pytz==2024.1
PyYAML==6.0.1
qdrant-client==1.8.2
redis==5.0.3
regex==2023.12.25
requests==2.31.0
safetensors==0.4.2
scikit-learn==1.4.2
scipy==1.13.0
sentence-transformers==2.6.1
six==1.16.0
sniffio==1.3.1
SQLAlchemy==2.0.29
starlette==0.37.2
sympy==1.12
tenacity==8.2.3
threadpoolctl==3.4.0
tokenizers==0.15.2
torch==2.2.2
tqdm==4.66.2
transformers==4.39.3
triton==2.2.0
typing-inspect==0.9.0
typing_extensions==4.11.0
tzdata==2024.1
urllib3==2.2.1
uvicorn==0.29.0
yarl==1.9.4
# Last version not working:
`aiohttp==3.9.5`
`aiosignal==1.3.1`
annotated-types==0.6.0
`anyio==4.3.0`
async-timeout==4.0.3
attrs==23.2.0
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
dataclasses-json==0.6.4
exceptiongroup==1.2.1
faiss-cpu==1.8.0
fastapi==0.110.2
filelock==3.13.4
frozenlist==1.4.1
fsspec==2024.3.1
greenlet==3.0.3
grpcio==1.62.2
grpcio-tools==1.62.2
h11==0.14.0
h2==4.1.0
`hpack==4.0.0`
`httpcore==1.0.5`
`httpx==0.25.2`
`httpx-sse==0.4.0`
huggingface-hub==0.22.2
hyperframe==6.0.1
idna==3.7
Jinja2==3.1.3
joblib==1.4.0
jsonpatch==1.33
jsonpointer==2.4
`langchain==0.1.16`
`langchain-community==0.0.34`
`langchain-core==0.1.45`
`langchain-mistralai==0.1.2`
`langchain-text-splitters==0.0.1`
langsmith==0.1.49
MarkupSafe==2.1.5
marshmallow==3.21.1
mistralai==0.1.8
mpmath==1.3.0
multidict==6.0.5
mypy-extensions==1.0.0
networkx==3.3
numpy==1.26.4
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.19.3
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.1.105
orjson==3.10.1
packaging==23.2
pandas==2.2.2
pillow==10.3.0
portalocker==2.8.2
protobuf==4.25.3
pyarrow==16.0.0
pydantic==2.7.0
pydantic_core==2.18.1
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pytz==2024.1
PyYAML==6.0.1
qdrant-client==1.8.2
redis==5.0.3
regex==2024.4.16
requests==2.31.0
safetensors==0.4.3
scikit-learn==1.4.2
scipy==1.13.0
sentence-transformers==2.7.0
six==1.16.0
sniffio==1.3.1
SQLAlchemy==2.0.29
starlette==0.37.2
sympy==1.12
tenacity==8.2.3
threadpoolctl==3.4.0
tokenizers==0.15.2
torch==2.2.2
tqdm==4.66.2
transformers==4.39.3
triton==2.2.0
typing-inspect==0.9.0
typing_extensions==4.11.0
tzdata==2024.1
urllib3==2.2.1
uvicorn==0.29.0
yarl==1.9.4
| langchain-mistralai: SSE error when receiving from the API | https://api.github.com/repos/langchain-ai/langchain/issues/20737/comments | 0 | 2024-04-22T12:02:14Z | 2024-05-13T08:01:06Z | https://github.com/langchain-ai/langchain/issues/20737 | 2,256,359,128 | 20,737 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
https://python.langchain.com/docs/expression_language/primitives/binding/ shows how to bind a fixed `stop` parameter to an LLM when using LCEL. I want to pass the `stop` argument at the moment I call the chain, but I cannot find any documentation showing how to do that.
```python
runnable.invoke(
    "x raised to the third plus seven equals 12",
    stop=["SOLUTION:"],  # this line raises an error
)
```
How can I do it?
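One workaround I can think of (a sketch only — it assumes a simple `prompt | llm` chain built with `PromptTemplate` and an OpenAI-style LLM, and is not confirmed as the intended API) is to apply `.bind()` right before the call, so the stop sequence can vary per invocation:

```python
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

prompt = PromptTemplate.from_template("{problem}\nSOLUTION:")
llm = OpenAI()  # any LLM that accepts a stop parameter

# Build the chain with the stop sequence bound just before this particular call:
chain = prompt | llm.bind(stop=["SOLUTION:"])
chain.invoke({"problem": "x raised to the third plus seven equals 12"})
```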
### Idea or request for content:
_No response_ | DOC: How to specify model stop argument when invoke a LCEL chain | https://api.github.com/repos/langchain-ai/langchain/issues/20730/comments | 0 | 2024-04-22T08:12:06Z | 2024-07-29T16:08:22Z | https://github.com/langchain-ai/langchain/issues/20730 | 2,255,883,403 | 20,730 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import os
import google.generativeai as genai
from langchain_google_genai import GoogleGenerativeAIEmbeddings
import langchain
from langchain_community.vectorstores.redis import Redis as lc_redis
from tqdm import tqdm
os.environ['GOOGLE_API_KEY'] = 'your-Google-api-key'
genai.configure(api_key='GOOGLE_API_KEY')
sample_1=["""Azure Key Vault is a cloud service for securely storing and accessing secrets.A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, or cryptographic keys."""]
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
url="redis://default:test#[email protected]:14876"
rds = lc_redis.from_texts(
    sample_1,
    embeddings,
    redis_url=url,
    index_name="sample_index",
)
results = rds.similarity_search('network')
print(results)
### Error Message and Stack Trace (if applicable)
Value Error: Redis failed to connect : Port could not be cast to integer value as 'Test'.
### Description
We are facing an issue when using the LangChain Redis integration: we are not able to connect to the Redis instance. The error is as follows:
Value Error: Redis failed to connect : Port could not be cast to integer value as 'Welcome'
For example, url="redis://default:Test#[email protected]:14876"
The issue occurs when the password contains '#', but connecting works with other passwords such as Test@2024.
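For what it's worth, the failure is consistent with the raw '#' being treated as the start of a URL fragment when the connection string is parsed, so everything after it (including the port) is misread. A possible workaround — sketched below with a placeholder host and credential, not the real ones — is to percent-encode the password before building the URL:

```python
from urllib.parse import quote

password = "Test#2024"  # placeholder credential containing '#'
url = f"redis://default:{quote(password, safe='')}@redis-host.example.com:14876"
```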
### System Info
**This is pip freeze langchain output:**
langchain==0.1.16
langchain-community==0.0.33
langchain-core==0.1.42
langchain-google-genai==1.0.2
langchain-text-splitters==0.0.1
**Version of redis database using:**
redis_version:6.2.10
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:0000000000000000000000000000000000000000
redis_mode:standalone
os:Linux 5.15.0-1051-gcp x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:9.4.0
**Python version:**
Python version: 3.12.2
| Value Error: Redis failed to connect : Port could not be cast to integer value as 'Test'. | https://api.github.com/repos/langchain-ai/langchain/issues/20729/comments | 5 | 2024-04-22T07:28:59Z | 2024-08-01T16:06:34Z | https://github.com/langchain-ai/langchain/issues/20729 | 2,255,793,356 | 20,729 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.vectorstores import Weaviate
from langchain.retrievers.self_query.base import SelfQueryRetriever

self.vectorstore = Weaviate(
    client=self.dbClient,
    index_name=collectionName,
    text_key=self.embeddingProperty,
    embedding=self.embedding,
    by_text=False,
)

SelfQueryRetriever.from_llm(
    self.llm, self.vectorstore, documentContentDescription, metadataFieldInfo, verbose=True,
    # enable_limit=True,
)
### Error Message and Stack Trace (if applicable)
Error during query: [{'locations': [{'column': 6, 'line': 1}], 'message': 'invalid \'where\' filter: child operand at position 0: data type filter cannot use "valueInt" on type "number", use "valueNumber" instead', 'path': ['Get', 'LKT_ASSISTANT_Product']}]
### Description
Maybe Weaviate changed its API recently.
### System Info
python 3.11, linux
langchain 0.1.16
langchain-community 0.0.33
langchain-core 0.1.43
langchain-experimental 0.0.49
langchain-google-genai 1.0.2
langchain-openai 0.1.3
langchain-text-splitters 0.0.1
langchainhub 0.1.14 | Langchain self query weaviate | https://api.github.com/repos/langchain-ai/langchain/issues/20726/comments | 1 | 2024-04-22T07:02:43Z | 2024-06-24T09:55:19Z | https://github.com/langchain-ai/langchain/issues/20726 | 2,255,744,704 | 20,726 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.agent_toolkits import FileManagementToolkit

file_management = FileManagementToolkit(
    # If you don't provide a root_dir, operations will default to the current working directory
    root_dir=root_dir
).get_tools()

Using this toolkit to write Python code to a file writes the data as a raw string: the `\n` is kept as a literal two-character escape instead of being interpreted as a newline, so the rest of the code is not moved to the next line.
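A minimal sketch of the behaviour (it assumes the toolkit's `write_file` tool and a throwaway `example.py`; the exact source of the escaping is not confirmed here):

```python
from langchain_community.agent_toolkits import FileManagementToolkit

tools = FileManagementToolkit(root_dir=".").get_tools()
write_file = next(t for t in tools if t.name == "write_file")

# A literal backslash-n (as an LLM often emits inside tool-call JSON) is written
# verbatim, so the file stays on a single line:
write_file.invoke({"file_path": "example.py", "text": "print('a')\\nprint('b')"})

# A real newline character writes the code across lines as expected:
write_file.invoke({"file_path": "example.py", "text": "print('a')\nprint('b')"})
```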
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Trying to use the FileManagementToolkit to write Python code across multiple lines, but it fails to do so correctly.
### System Info
Windows
Python 3.11.4
| Langchain FilemanagementToolkit incorrectly writes python code to files | https://api.github.com/repos/langchain-ai/langchain/issues/20721/comments | 5 | 2024-04-22T05:04:25Z | 2024-07-29T16:08:12Z | https://github.com/langchain-ai/langchain/issues/20721 | 2,255,586,018 | 20,721 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Link to Doc Page: [LangChain > Retrieval > Document loaders > PDF > Using Unstructured](https://python.langchain.com/docs/modules/data_connection/document_loaders/pdf/#using-unstructured)
The section on _Using Unstructured_ does not describe the package installation step required before importing the modules. It is missing the following line:
`pip install unstructured[all-docs]`
I believe this is confusing users as seen in #20700 and #19312
A simple fix would be to add this line to the [pdf.mdx](docs/docs/modules/data_connection/document_loaders/pdf.mdx)
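For completeness, a sketch of the loader usage from that docs section which fails without the extra install (the PDF path is a placeholder):

```python
from langchain_community.document_loaders import UnstructuredPDFLoader

loader = UnstructuredPDFLoader("example_data/sample.pdf")  # placeholder path
data = loader.load()
```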
### Idea or request for content:
Similar notes about package installation can be added for the other packages listed on this page as well, but this needs to be tested and will be reported in a separate issue. | DOC: [PDF] update package installation instructions for Unstructured | https://api.github.com/repos/langchain-ai/langchain/issues/20719/comments | 1 | 2024-04-22T04:30:19Z | 2024-07-29T16:08:07Z | https://github.com/langchain-ai/langchain/issues/20719 | 2,255,552,579 | 20,719
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Simply running the following code:
```
from langchain_groq import ChatGroq
from llm_utils import get_chat_prompt
chat = ChatGroq(
    temperature=0.7,
    groq_api_key="API_KEY",
    model_name="mixtral-8x7b-32768",
)
```
causes the failure:
```
ImportError: cannot import name 'make_invalid_tool_call' from 'langchain_core.output_parsers.openai_tools' (/Users/travisbarton/opt/anaconda3/envs/trivia-gpt-backend39/lib/python3.9/site-packages/langchain_core/output_parsers/openai_tools.py)
```
### Error Message and Stack Trace (if applicable)
```Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevconsole.py", line 364, in runcode
coro = func()
File "<input>", line 1, in <module>
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
module = self._system_import(name, *args, **kwargs)
File "/Users/travisbarton/opt/anaconda3/envs/trivia-gpt-backend39/lib/python3.9/site-packages/langchain_groq/__init__.py", line 1, in <module>
from langchain_groq.chat_models import ChatGroq
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
module = self._system_import(name, *args, **kwargs)
File "/Users/travisbarton/opt/anaconda3/envs/trivia-gpt-backend39/lib/python3.9/site-packages/langchain_groq/chat_models.py", line 58, in <module>
from langchain_core.output_parsers.openai_tools import (
ImportError: cannot import name 'make_invalid_tool_call' from 'langchain_core.output_parsers.openai_tools' (/Users/travisbarton/opt/anaconda3/envs/trivia-gpt-backend39/lib/python3.9/site-packages/langchain_core/output_parsers/openai_tools.py)
```
### Description
Just the import seems to be broken.
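For what it's worth, the traceback shows that the installed langchain_core (0.1.45 per the system info below) does not expose `make_invalid_tool_call`, while langchain_groq 0.1.2 tries to import it, so this looks like a version mismatch between the two packages rather than a broken import per se. A hedged workaround — assuming the helper ships in a newer langchain-core release — is to upgrade both packages together:

```
pip install -U langchain-core langchain-groq
```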
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:49 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6020
> Python Version: 3.9.19 (main, Mar 21 2024, 12:08:14)
[Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.1.45
> langchain: 0.1.14
> langchain_community: 0.0.31
> langsmith: 0.1.38
> langchain_anthropic: 0.1.4
> langchain_groq: 0.1.2
> langchain_openai: 0.1.1
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | Cannot import ChatGroq `ImportError: cannot import name 'make_invalid_tool_call' from 'langchain_core.output_parsers.openai_tools'` | https://api.github.com/repos/langchain-ai/langchain/issues/20714/comments | 6 | 2024-04-21T19:42:11Z | 2024-05-17T20:23:12Z | https://github.com/langchain-ai/langchain/issues/20714 | 2,255,233,728 | 20,714 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code demonstrates the issue:
```python
import torch
from langchain_community.llms import LlamaCpp
from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler
from langchain_core.prompts import PromptTemplate
from langchain.chains import LLMChain
DEVICE_TYPE = "cuda" if torch.cuda.is_available() else "cpu"
SHOW_SOURCES = True
print("device : ", DEVICE_TYPE)
template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are an expert at summarizing long and unstructured notes<|eot_id|><|start_header_id|>user<|end_header_id|>
Please provide a summary of the following text: \n\n {content} \n\n <|eot_id|><|start_header_id|>assistant<|end_header_id|>
"""
prompt = PromptTemplate.from_template(template=template)
# Callbacks support token-wise streaming
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
n_gpu_layers = -1 # The number of layers to put on the GPU. The rest will be on the CPU. If you don't know how many layers there are, you can use -1 to move all to GPU.
n_batch = 512 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
n_ctx = 8192
# Make sure the model path is correct for your system!
llm = LlamaCpp(
    model_path="./models/Meta-Llama-3-70B-Instruct.Q4_K_M.gguf",
    n_gpu_layers=n_gpu_layers,
    n_batch=n_batch,
    n_ctx=n_ctx,
    callback_manager=callback_manager,
    verbose=True,  # Verbose is required to pass to the callback manager
)
text = "Excision of limbal dermoids. We reviewed the clinical files of 10 patients who had undergone excision of unilateral epibulbar limbal dermoids. Preoperatively, all of the affected eyes had worse visual acuity (P less than .02) and more astigmatism (P less than .01) than the contralateral eyes. Postoperatively, every patient was cosmetically improved. Of the eight patients for whom both preoperative and postoperative visual acuity measurements had been obtained, in six it had changed minimally (less than or equal to 1 line), and in two it had improved (less than or equal to 2 lines). Surgical complications included persistent epithelial defects (40%) and peripheral corneal vascularization and opacity (70%). These complications do not outweigh the cosmetic and visual benefits of dermoid excision in selected patients. Bells palsy. A diagnosis of exclusion. In cases of acute unilateral facial weakness, a careful and systematic evaluation is necessary to identify the cause. Idiopathic facial paralysis (Bells palsy) is a diagnosis of exclusion. It is also the most common cause of unilateral facial weakness seen by primary care physicians. The most important aspect of initial treatment is eye protection. Administration of systemic oral corticosteroids may lessen severity and duration of symptoms. Retained endobronchial foreign body removal facilitated by steroid therapy of an obstructing, inflammatory polyp. Oral and topical steroids were used to induce regression in an inflammatory, obstructing endobronchial polyp caused by a retained foreign body. The FB (a peanut half), which had been present for over six months, was then able to be easily and bloodlessly retrieved with fiberoptic bronchoscopy.Recurrent buccal space abscesses: a complication of Crohns disease. A patient is described with generalized gastrointestinal involvement by Crohns disease. Symptoms of recurrent ulceration and mucosal tags are well-described oral manifestations of Crohns disease; however, in our patient recurrent facial abscesses, which required extraoral drainage, also developed. This complication has not previously been reported. Intracranial fibromatosis. Fibromatoses are uncommon infiltrative lesions affecting musculoaponeurotic structures, most often of the limbs and trunk. Lesions involving the cranial cavity are rare and require the same aggressive surgical management as elsewhere in the body. This case illustrates their clinical and neuroradiological features and underscores the necessity for aggressive resection to avoid recurrence. The literature is reviewed. The effect of intrathecal morphine on somatosensory evoked potentials in awake humans. Although the effect of systemic opioids on somatosensory evoked potentials has been well described, little is known about the interaction between intrathecally administered opioid analgesics and somatosensory evoked potentials. Accordingly, the influence of intrathecally administered morphine on posterior tibial nerve somatosensory cortical evoked potentials (PTSCEPs) was investigated in 22 unpremedicated, awake, neurologically normal patients scheduled to undergo elective abdominal or pelvic procedures. Patients were randomly assigned to receive either preservation-free intrathecal morphine sulfate (ITMS) or placebo. After baseline PTSCEP, heart rate and, mean blood pressure were recorded, ITMS (15 micrograms.kg-1) was injected via standard dural puncture with the patient in the lateral position. 
PTSCEPs, heart rate, and mean blood pressure were recorded again at 5, 10, 20, 30, 60, 90, and 120 min. Control patients were treated identically (including position, sterile preparation, and subcutaneous tissue infiltration with local anesthetic), except for lumbar puncture, and were unaware of their randomization. Before administration of ITMS, PTSCEP P1, N1, P2, N2, and P3 latencies were 39.4 +/- 3.2, 47.6 +/- 3.9, 59.2 +/- 3.2, 70.4 +/- 3.7, and 84.6 +/- 5.5 ms, (mean +/- standard deviation), respectively. The corresponding P1-N1, N1-P2, and P2-N2 amplitudes were 2.4 +/- 1.1, 2.4 +/- 1.1, and 2.3 +/- 0.9 microV, respectively. There were no significant changes over time between the control and ITMS groups. PTSCEPs resulting from left-sided stimulation were not different from those elicited by right-sided stimulation. All ITMS patients had intense postoperative analgesia for at least 24 h. It is concluded that ITMS does not affect PTSCEP waveforms in the 35-90 ms latency range during the awake state. The 29th Rovenstine lecture: clinical challenges for the anesthesiologist. In conclusion, I hope that my comments have reaffirmed your biases or, even more importantly, stimulated you to think in a different way about the information explosion in our specialty and medicine in general. I believe our specialty is in a golden era that will benefit from the past and be nourished by new discoveries and understanding. We as clinicians must accept the challenge of recognizing what new information deserves incorporation into our practice, what old information deserves to be sustained, and what merits new scrutiny and perhaps should be discarded. If I had one wish, it would be that anesthesiologists would never lose their zeal to be students--their thirst for new information--as the continuum of anesthesia education is indeed a life-long process. That wish, ladies and gentlemen, is my challenge to all anesthesiologists. Mortality in patients treated with flecainide and encainide for supraventricular arrhythmias. In a recent clinical trial, the class Ic antiarrhythmic drugs encainide and flecainide were found to be associated with an increased mortality risk in patients with new myocardial infarction and ventricular arrhythmias. The purpose of this study was to assess whether an increased mortality risk also accompanied the use of these drugs to treat patients with supraventricular arrhythmias. Data were obtained from the respective pharmaceutical sponsors on the mortality observed with each drug in United States and foreign protocols enrolling patients with supraventricular arrhythmias. Mortality in the encainide population (343 patients) and the flecainide population (236 patients) was compared with that in a research arrhythmia clinic, the Duke population (154 patients). Nine deaths occurred in the combined encainide-flecainide population and 10 deaths occurred in the Duke population; the follow-up periods averaged 488 days and 1,285 days, respectively. The 6-year survival functions of these 2 populations, estimated by the Kaplan-Meier technique, did not differ significantly (p = 0.62). The hazard ratio for the combined encainide-flecainide population relative to the Duke population was estimated to be 0.6 with a 95% confidence interval of 0.2, 1.7. These descriptive comparisons did not demonstrate any excess mortality when flecainide and encainide were used in patients with supraventricular arrhythmias. 
Approaches to immunotherapy of cancer: characterization of lymphokines as second signals for cytotoxic T-cell generation. Lymphokines, the soluble molecules produced by cells of the immune system, regulate cell-cell interactions and, consequently, the functional status of the immune system. Altering immunoregulatory pathways with lymphokines in vivo may provide a mechanism for controlling a variety of immunologic disorders. Although normally produced in vivo in very small quantities, the widespread availability of recombinant lymphokines has made it possible to study the molecular signals involved in production of lymphocyte effectors with activity against tumor. For example, interleukin-2-based cancer immunotherapy programs have, in certain clinical situations, suggested that immunologic intervention can influence the regression of metastatic cancer. Ultimately the successful application of these biologic agents requires an understanding of the interaction between the immune system and tumor on a molecular level. To induce a given biologic effect, it is necessary both to classify the required lymphokines and to identify the relevant effector cell populations. This review will examine the progress made in identifying the requirements for lymphokine-induced cytotoxic T-lymphocyte function. Retinal artery obstruction and atheromas associated with non-Hodgkins large cell lymphoma (reticulum cell sarcoma). A 71-year-old woman developed branch retinal artery obstruction as the presenting manifestation of a large cell non-Hodgkins lymphoma. Multifocal chorioretinal scars were present in the same eye. She experienced progressive visual loss accompanied by development of multiple yellow retinal arterial wall plaques, extension of retinal opacification into other quadrants, and increasing vitreous cellular infiltration. Clinical diagnoses included branch retinal arterial obstruction caused by toxoplasmosis retinitis, multifocal choroiditis and panuveitis simulating the presumed ocular histoplasmosis syndrome, vitiliginous chorioretinitis, and the acute retinal necrosis syndrome. Four months after onset, the right eye was blind and was enucleated. Histopathologic examination revealed extensive lymphomatous infiltration and necrosis of the retina and optic nerve. The retinal arteries were partly obstructed by lymphomatous infiltration and atheromas. Subsequently, the left eye and central nervous system were involved by lymphoma. The tonic pain-related behaviour seen in mononeuropathic rats is modulated by morphine and naloxone. This study investigated the sensitivity to pharmacological manipulations of a rating method, adapted from the formalin test, to measure the tonic component of the pain-related behaviour induced by creating a peripheral mononeuropathy with 4 loose ligatures around the common sciatic nerve. Although the adequacy of opioid substances in alleviating neuropathic pain is highly controversial, the effects of morphine (1 mg/kg i.v.) and naloxone (1 mg/and 3 micrograms/kg i.v.) were tested 1-2 weeks after the nerve ligatures were established, when pain-related behaviours were well developed. Morphine (1 mg/kg i.v.) induced a potent and prolonged decrease in the pain-rating score at week 2 after surgery. Either at week 1 or week 2, naloxone elicited a bidirectional dose-dependent action: a further increase in the pain-rating score with the high dose (1 mg/kg i.v.), and a paradoxical decrease in the score with the low dose of 3 micrograms/kg i.v. 
These effects are comparable to those already described in several rat models of inflammatory pain and, in the same model of neuropathy, using a phasic nociceptive test, the measure of the vocalization to paw pressure. A few differences in the effects of naloxone on tonic and phasic pain are noted and discussed. Examination of cardiorespiratory changes during upper gastrointestinal endoscopy. Comparison of monitoring of arterial oxygen saturation, arterial pressure and the electrocardiogram. Critical events including hypoxaemia, arrhythmias and myocardial ischaemia may occur more frequently during endoscopic procedures than during anaesthesia. A study was undertaken to assess the cardiovascular changes and to evaluate suitable monitoring techniques to detect critical events during sedation and endoscopy. Twenty patients scheduled to undergo a prolonged endoscopic procedure which required deep sedation were studied. Continuous recordings of electrocardiogram, heart rate and arterial oxygen saturation were made and arterial pressure was recorded at one-minute intervals. The study commenced immediately before administration of sedatives, continued for the duration of the examination and for one hour following the examination. Oxygen saturation decreased in all patients during the examination to a mean of 82.9% (SD 11.9), and remained below baseline for the duration of the examination and into the recovery period. Statistically significant increases and reductions of systolic arterial pressure and rate-pressure product were found during the procedures compared with baseline values recorded before administration of sedatives. Sixteen of the 20 patients developed tachycardia during the examination. Ten patients developed ectopic foci which were supraventricular, ventricular or both in origin. Electrocardiogram changes resolved during the recovery period. Myocardial ischaemia was assessed by S-T segment depression and a significant correlation was found between S-T segment depression and hypoxaemia, although the magnitude of the S-T depression was small and may not have been detected clinically. No correlation was found between S-T segment depression and arterial pressure, heart rate or rate-pressure product. Hepatic transmethylation and blood alcohol levels. Golden Syrian hamsters that have elevated hepatic alcohol dehydrogenase activity were divided into four groups and group-fed on four different liquid diets for five weeks. Group I was fed a control diet formulated for hamsters. Group II was fed the control diet containing 20 micrograms of 4 methylpyrazole per litre. Group III was fed the hamster ethanol liquid diet (ethanol amounting to 36% of total calories). Group IV was fed the ethanol diet to which 4-methylpyrazole (20 micrograms/litre) was added. Groups I, II and III were group-fed the amount consumed by Group IV on a daily basis. Upon killing the animals, blood alcohol levels were found to be elevated in Group IV but not in Group III. Hepatic methionine synthetase (MS) was inhibited in Group IV. Betaine-homocysteine methyltransferase was induced in this group to compensate for the MS inhibition and liver betaine was lowered reflecting this induction. None of these changes were seen in Group III. Since none of the animals showed an aversion to their respective diets and gained weight normally, these data indicate that it was the elevated blood levels of ethanol rather than nutritional factors that were related to the changes in methionine metabolism. 
Memory T cells represent the predominant lymphocyte subset in acute and chronic liver inflammation. T cells can be divided into two main phenotypic subpopulations-i.e., the CD45RA-positive (2H4-positive) naive subset and the CD45RO-positive (UCHL1-positive) memory subset. In light of this recent functional reinterpretation of T-lymphocyte subpopulations, we reinvestigated the composition of the inflammatory infiltrate in liver biopsy specimens from patients with acute and chronic hepatitis. In normal liver, the few scattered mononuclear cells present in portal tracts and in the intralobular parenchyma consisted of both CD45RA-positive (2H4-positive) naive and CD45RO-positive (UCHL1-positive) memory T cells. In inflammatory liver diseases, portal tract and periportal and intralobular areas of inflammation consisted virtually only of CD45RO-positive (UCHL1-positive) memory T cells, which strongly expressed the CDw29 (4B4) antigen, and the adhesion molecules LFA-1, CD2, LFA-3, CD44 and VLA-4 and the activation marker human leukocyte antigen-DR. These results indicate that activated memory T cells represent the predominant subpopulation of lymphocytes in areas of liver inflammation. Memory T cells strongly express various homing receptors and adhesion molecules, which probably allow them to accumulate at inflammatory sites and to strengthen interaction with target cells. Furthermore, the increased number of memory T cells with enhanced interferon-gamma production in areas of liver inflammation may contribute to the maintenance and up-regulation of immune responses occurring in inflammatory liver diseases. Inflammatory properties of neutrophil-activating protein-1/interleukin 8 (NAP-1/IL-8) in human skin: a light- and electronmicroscopic study. Neutrophil-activating protein-1/interleukin 8 (NAP-1/IL-8), purified to homogeneity from lipopolysaccharide-stimulated human peripheral blood monocytes, was injected intracutaneously into human skin. Sequential biopsy specimens were taken in order to investigate the sequence of ultrastructural changes induced by the cytokine. Whereas intracutaneous injection of 100 ng of NAP-1/IL-8 per site caused no macroscopic changes, by histology infiltration with polymorphonuclear leukocytes (PMN) and monocytes was present within 1 h and increased at 3 and 5 h. No lymphocyte infiltration was noted. The first ultrastructural changes (30 min) consisted of the presence of cytoplasmic 7-nm microfilament bundles, as well as numerous protrusions of the luminal plasma membrane of endothelial cells (EC). As a striking feature, multiple 100- to 160-nm electron lucent vesicles could be observed in the EC cytoplasm. These structures differed from plasmalemmal vesicles and suggest secretory activity. When PMN and monocytes appeared in the vascular lumen (1 h and later), the number of 100-160-nm electron-lucent vesicles had decreased significantly. In contrast to C5a-injected skin sites, mast cell degranulation was absent. Bronchogenic carcinoma with chest wall invasion. Bronchogenic carcinoma with chest wall involvement continues to present a major clinical challenge. We have treated 52 patients since 1973, excluding those with superior sulcus tumors. There were 37 male and 15 female patients with an average age of 62.9 years. Chest pain was an initial symptom in 37%. All patients had negative mediastinoscopy results. Squamous cell carcinoma was present in 53% and adenocarcinoma in 35%. 
The median number of ribs resected was two (range, one to six), and only 2 patients required chest wall reconstruction. Pathologic staging was T3 N0 M0 in 83% and T3 N1 M0 in 17%. Operative mortality was 3.8%. Absolute 5-year survival was 26.3%. Patients who had N1 disease had a 5-year survival of only 11%. Radiation therapy was employed in 46% for positive nodes or close margins. Bronchogenic carcinoma with chest wall invasion remains potentially curable if N2 nodes are not involved. The role of radiation therapy has not been clearly defined. Morbidity and mortality should be minimal. Electronic weaponry--a question of safety [published erratum appears in Ann Emerg Med 1991 Sep;20(9):1031] Electronic weapons represent a new class of weapon available to law enforcement and the lay public. Although these weapons have been available for several years, there is inadequate research to document their safety or efficacy. Two of the most common, the TASER and the stun gun, are reviewed. The electronic weapon was initially and still is approved by the US Consumer Product Safety Commission; its approval was based on theoretical calculations of the physical effects of damped sinusoidal pulses, not on the basis of animal or human studies. These devices are widely available and heavily promoted, despite limited research into their safety or efficiency and despite recent animal studies documenting their potential for lethality. Operative management of acoustic neuromas: the priority of neurologic function over complete resection. The objective of surgical management of acoustic tumors is to remove them entirely and preserve facial nerve function and hearing when possible. A dilemma arises when it is not possible to remove the entire tumor without incurring additional neurologic deficits. Twenty patients who underwent intentional incomplete surgical removal of an acoustic neuroma to avoid further neurologic deficit were retrospectively reviewed. They were divided into a subtotal group (resection of less than 95% of tumor) and a near-total group (resection of 95% or more of tumor) and were followed yearly with either computed tomography or magnetic resonance imaging. The subtotal group was planned and consisted of elderly patients (mean age, 68.5 years) with large tumors (mean, 3.1 cm). The near-total group consisted of younger patients (mean age, 45.8 years) and smaller tumors (mean, 2.3 cm). The mean length of followup for all patients was 5.0 years. Ninety percent of patients had House grade I or II facial function post-operatively. Radiologically detectable tumor regrowth occurred in only one patient, who was in the subtotal resection group. Near-total resection of acoustic tumor was not associated with radiologic evidence of regrowth of tumor for the period of observation. Within the limits of the follow-up period of this study, subtotal resection of acoustic neuroma in elderly patients was not associated with clinically significant recurrence in most patients and produced highly satisfactory rates of facial preservation with low surgical morbidity. Torsades de pointes occurring in association with terfenadine use. Torsades de pointes is a form of polymorphic ventricular tachycardia that is associated with prolongation of the QT interval. Although found in many clinical settings, torsades de pointes is most often drug induced. 
This report describes the first association (exclusive of drug overdose) of symptomatic torsades de pointes occurring with the use of terfenadine in a patient who was taking the recommended prescribed dose of this drug in addition to cefaclor, ketoconazole, and medroxyprogesterone. Measured serum concentrations of terfenadine and its main metabolite showed excessive levels of parent terfenadine and proportionately reduced concentrations of metabolite, suggesting inhibition of terfenadine metabolism. We believe that a drug interaction between terfenadine and ketoconazole resulted in the elevated terfenadine levels in plasma and in the cardiotoxicity previously seen only in cases of terfenadine overdose. Asymptomatic celiac and superior mesenteric artery stenoses are more prevalent among patients with unsuspected renal artery stenoses. The prevalence of unsuspected renal artery stenosis among patients with peripheral vascular disease has been reported to be as high as 40%, but the prevalence of asymptomatic celiac and superior mesenteric artery stenoses in these patients is not known. The biplane aortograms of 205 male patients who were military veterans and had aneurysms or occlusive disease were independently reviewed, and medical records were studied to determine associated coronary disease, risk factors, and patient outcome. Fifty-six patients (27%) had a 50% or greater stenosis in the celiac or superior mesenteric artery, and seven patients (3.4%) had significant stenoses in both mesenteric arteries. Patients with celiac or superior mesenteric artery stenoses were older (p = 0.002) and had a higher prevalence of hypertension (p = 0.029) than those without significant mesenteric stenoses. Fifty of the 205 patients had significant renal artery stenoses, and 20 had advanced (greater than 75% diameter loss) renal stenoses. Ten of the 20 patients (50%) with advanced renal stenoses had a concomitant celiac artery stenosis, compared to 40 of the 185 patients (22%) who did not have advanced renal stenoses (p = 0.011). In the present study asymptomatic celiac or superior mesenteric artery stenoses were common among male veterans evaluated for peripheral vascular disease, but the prevalence of significant stenoses in both the celiac and superior mesenteric arteries was low. The prevalence of significant celiac stenosis was higher in patients with advanced (greater than 75%) renal artery stenoses who might be considered for prophylactic renal revascularization. Lateral aortography with evaluation of the celiac artery is always appropriate in these patients. Brain-stem auditory evoked responses in 56 patients with acoustic neurinoma. The brain-stem auditory evoked responses (BAERs) recorded from 56 patients with acoustic neurinomas were analyzed. Ten of the patients had intracanalicular tumors and 46 had extracanalicular tumors. It was possible to obtain BAERs following stimulation of the affected side in 28 patients and after stimulation of the unaffected side in all 56. Five patients (11%) had normal BAERs following stimulation of both sides; three of these patients had intracanalicular tumors. Among BAERs obtained following stimulation of the affected ear, the mean interpeak latency (IPL) for peaks I to III associated with extracanalicular tumors was significantly prolonged relative to controls (p less than 0.001), and linear regression analysis revealed a significant positive correlation between tumor size and IPL of peaks I to III (p less than 0.05). 
Analysis of the 56 BAERs recorded after stimulation of the unaffected side revealed a significant positive correlation between the IPLs of peaks III to V and tumor size (p less than 0.001). This correlation was not strengthened when accounting for the degree of brain-stem compression. Finally, evidence of preserved function within the auditory pathway, even in the presence of partial hearing loss, is presented. This finding suggests that more patients might benefit from surgical procedures that spare the eighth cranial nerve. First heterotransplantation of a human carcinoid tumor into nude mice. The first successful heterotransplantation of a human carcinoid tumor into nude mice is reported. CSH, a voluminous hepatic metastasis of a primary bronchial carcinoid tumor (CSB) was resected and transplanted into three irradiated nude (Swiss-nu/nu) mice both by subcutaneous (SC) and intramuscular (IM) routes; the success rate was five of six. Heterotransplanted tumors took 4 to 5 months to appear in the mice and 1 month to attain a width of 0.5 cm. Both human and mouse tumors (named CSH-SC and CSH-IM) were studied by light and electron microscopy. They were Grimelius-positive, neuron-specific enolase-positive, and bombesin-negative by immunocytochemistry. Furthermore, CSH-SC cells presented characteristic (pear-shaped, rod-shaped, or tadpole-shaped) neurosecretory granules. Although CSB and CSH were slightly serotonin positive by immunocytochemistry, only a few serotonin-positive cells were found in CSH-SC and none in CSH-IM, suggesting partial loss of differentiation or an increase in serotonin catabolism during transplantation. A prospective evaluation of the immediate reproducibility of the signal-averaged ECG. The purpose of this investigation was to prospectively evaluate the immediate reproducibility of the signal-averaged electrocardiogram (SAECG). A total of 114 patients undergoing evaluation for ventricular arrhythmias were enrolled in this protocol. Two consecutive SAECGs (40 Hz bidirectional high-pass filtering with a computer-automated system) were performed 10 minutes apart. Abnormal SAECG parameters were defined as (1) vector QRS duration more than 120 msec, (2) terminal root mean square (RMS) voltage less than 20 microV, and (3) low-amplitude signal (LAS) duration more than 40 msec. An SAECG was defined as abnormal if at least one vector parameter was abnormal. There was close correlation between vector parameters during the two SAECG observations: QRS duration had the highest reproducibility (r2 = 0.97, p less than 0.001) followed by terminal RMS voltage (r2 = 0.92, p less than 0.001), and LAS duration (r2 = 0.90, p less than 0.001). The mean (+/- SD) percentage of change between the two recordings was 2% +/- 2% of the QRS duration, 13% +/- 22% for terminal RMS voltage, and 7% +/- 11% for LAS duration. The reproducibility of an initially normal SAECG was 92% and of an initially abnormal SAECG, 96%. Seventeen patients (15%) had a change in one of the three vector parameters between the two recordings. There were no clinically significant differences between the 17 patients in whom the SAECG was nonreproducible and the 97 patients in whom the SAECG was reproducible. However, reproducibility was significantly higher in patients with an initially normal versus an initially abnormal SAECG (92% vs 76%, p = 0.03). Hypertension, lipoprotein(a), and apolipoprotein A-I as risk factors for stroke in the Chinese. 
We analyzed the serum concentrations of lipids and lipoproteins and the prevalence of other risk factors in a case-control study of 304 consecutive Chinese patients with acute stroke (classified as cerebral infarction, lacunar infarction, or intracerebral hemorrhage) and 304 age- and sex-matched controls. For all strokes we identified the following risk factors: a history of ischemic heart disease, diabetes mellitus, or hypertension; the presence of atrial fibrillation or left ventricular hypertrophy; a glycosylated hemoglobin A1 concentration of greater than 9.1%; a fasting plasma glucose concentration 3 months after stroke of greater than 6.0 mmol/l; a serum triglyceride concentration 3 months after stroke of greater than 2.1 mmol/l; and a serum lipoprotein(a) concentration of greater than 29.2 mg/dl. We found the following protective factors: a serum high density lipoprotein-cholesterol concentration of greater than 1.59 mmol/l and a serum apolipoprotein A-I concentration of greater than or equal to 106 mg/dl. The patterns of risk factors differed among the three stroke subtypes. When significant risk factors were entered into a multiple logistic regression model, we found a history of hypertension, a high serum lipoprotein(a) concentration, and a low apolipoprotein A-I concentration to be independent risk factors for all strokes. The attributable risk for hypertension was estimated to be 24% in patients aged greater than or equal to 60 years. In this population, in which cerebrovascular diseases are the third commonest cause of mortality, identification of risk factors will allow further studies in risk factor modification for the prevention of stroke. Prevalence of air bronchograms in small peripheral carcinomas of the lung on thin-section CT: comparison with benign tumors. Despite improved techniques--such as bronchoscopy and percutaneous needle biopsy--to evaluate pulmonary nodules, there are still many cases in which surgical resection is necessary before carcinoma can be differentiated from benign lesions. The present study was undertaken to determine if the presence of an air bronchogram or air bronchiologram (patent visible bronchus or bronchiole) is useful in distinguishing small lung cancers from benign nodules. Thin-section chest CT scans were obtained in patients with 20 peripheral lung cancers less than 2 cm in diameter (18 adenocarcinomas, one squamous cell carcinoma, and one large cell carcinoma) and 20 small benign nodules (eight hamartomas, seven tuberculomas, two foci of aspergillosis, one focus of cryptococcosis, one chronic focal interstitial pneumonitis, and one plasma cell granuloma). The images were compared with regard to the patency of any bronchus or bronchiole within the lesions. After surgical resection, the specimens were inflated with agar and sectioned transversely to correlate gross morphology and low-power histologic sections with the CT appearance. An air bronchogram or air bronchiologram was seen in the tumors on 65% of CT scans and 70% of histologic sections. Benign nodules had a patent bronchus or bronchiole on CT scans and histologic sections in only one case (5%). These findings suggest that the presence of an air bronchogram in a lung nodule is a useful finding to help differentiate adenocarcinomas from benign lesions. Long-term spinal administration of morphine in cancer and non-cancer pain: a retrospective study. Records of 313 patients who had been treated with spinal morphine via an implanted Port-A-Cath were reviewed. 
In 284 cases the Port-A-Cath was implanted for epidural delivery of morphine in patients with cancer-related pain. These patients were treated for a mean of 96 (range 1-1215) days. There was a wide variation in dose requirements, minimum daily dose ranging from 0.5 to 200 mg and maximum daily dose from 1 to 3072 mg. However, there was no clear trend to increasing dose as period of epidural morphine administration increased. The most frequent complications were pain on injection (12.0% incidence), occlusion of the portal system (10.9%), infection (8.1%) and leakage of administered morphine such that it did not all reach the epidural space (2.1%). In all but 1 case infections were limited to the area around the portal or along the catheter track. All infections resolved without sequelae following removal of the portal and/or administration of antibiotics. In 17 patients Port-A-Caths were implanted for the intrathecal delivery of morphine to control cancer-related pain. These patients also exhibited wide variations in morphine dose requirements. Port-A-Caths were also implanted for delivery of spinal morphine in 12 patients with chronic pain which was not related to cancer and which failed to respond to other therapies. These patients were treated for a mean of 155 (range 2-575) days. Port-A-Caths were removed from 7 of these patients, primarily due to infection (2 cases) and inadequate pain relief and pain on injection (2 cases). Real-time ultrasound for the detection of deep venous thrombosis. PURPOSE: Accurate diagnosis of deep venous thrombosis (DVT) is a clinical problem in emergency practice. A prospective trial was conducted comparing real-time ultrasound with contrast venography in the diagnosis of proximal DVT. METHODS: Seventy patients whose clinical presentations mandated diagnostic evaluation for DVT had real-time ultrasound of the involved leg followed by contrast venography. Initial readings of ultrasound and venography were compared with each other and with final readings to assess reliability of interpretation. RESULTS: Final ultrasound readings agreed with final venogram readings in all patients. Negative initial ultrasound readings agreed with final venogram readings in 56 of 56 patients (negative predictive value, 100%; 95% confidence interval, 94 to 100). Eighteen patients had positive initial ultrasound readings compared with 14 who had positive final venogram readings (positive predictive value, 78%; 95% confidence interval, 55 to 91). CONCLUSION: Negative real-time ultrasonography reliably excludes proximal DVT. Positive ultrasound reliably diagnoses proximal DVT only in experienced hands. Single- versus dual-chamber sensor-driven pacing: comparison of cardiac outputs. Previous studies have shown that single-chamber sensor-driven pacing improves exercise tolerance for patients with chronotropic incompetence. However, long-term single-chamber pacing has a number of inherent problems that limit its usefulness. Although sensor-driven dual-chamber pacing largely obviates the problems inherent with single-chamber sensor-driven pacing, the physiologic benefit of dual-chamber sensor-driven pacing has not yet been demonstrated. Accordingly, the purpose of this study was to compare exercise-induced cardiac output for patients with chronotropic incompetence, after programming their pacemakers to either a simulated sensor-driven single or simulated dual-chamber mode. 
Cardiac output was measured noninvasively at rest and peak exercise using standard Doppler-derived measurements, obtained in a blinded fashion. At rest the Doppler-derived resting VVI and DDD cardiac outputs were 4.49 +/- 0.3 L/min and 4.68 +/- 0.3 L/min, respectively. At peak exercise, the DDD cardiac output was 5.07 +/- 0.5 L/min, whereas the simulated activity VVI and DDD cardiac outputs were 6.33 +/- 0.6 L/min and 7.41 +/- 0.70 L/min, respectively. Analysis of variance showed that there was an overall significant difference in cardiac output from rest to peak exercise (p less than 0.001). However, only the simulated activity DDD cardiac output was significantly different from its respective control value (p less than 0.05). Thus this study shows for the first time that the addition of rate responsiveness to dual-chamber pacing results in a significant improvement in cardiac output for patients with chronotropic incompetence. FDP D-dimer induces the secretion of interleukin-1, urokinase-type plasminogen activator, and plasminogen activator inhibitor-2 in a human promonocytic leukemia cell line. We studied the effect of fibrinogen degradation products D, E, and D-dimer on a human promonocytic leukemia cell line, NOMO-1. After exposure to a 10(-5)-mol/L fragment D or D-dimer, the cells displayed macrophage-like characteristics, such as adherence to plastic surfaces, and showed approximately a twofold increase in response to the nitroblue tetrazolium reduction test. The secretion of interleukin-1 alpha (IL-1 alpha) into the medium was markedly stimulated by a 10(-5)-mol/L fragment D, E, and D-dimer, whereas a significant increase in IL-1 beta secretion was observed only in D-dimer-stimulated cells. In addition, D-dimer induced a rapid increase in urokinase-type plasminogen activator on day 1 (0.52 +/- 0.02 ng/mL v 0.07 +/- 0.01 ng/mL in the control culture) and a slow increase in plasminogen activator inhibitor-2 on day 5 (3.9 +/- 1.6 ng/mL v 1.2 +/- 0.2 ng/mL in the control culture). An increase in tissue factor (TF) was also demonstrated on the cell surface of NOMO-1 cells exposed to fragment D or D-dimer by indirect immunofluorescence using an anti-TF monoclonal antibody. Scatchard plot analysis showed that fragment D and D-dimer bound to the NOMO-1 cells with a kd of 3.3 nmol/L and 2.7 nmol/L, respectively. These results suggest that fragment D-dimer specifically stimulates cells of monocyte-macrophage lineage to secrete key substances that regulate blood coagulation, fibrinolysis, and inflammation. Stereotactic management of colloid cysts: factors predicting success."
llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_chain.invoke({"content":text}, stop=['<|eot_id|>'])
```
### Error Message and Stack Trace (if applicable)
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
Model metadata: {'tokenizer.chat_template': "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content
%}{% endif %}{{ content }}{% endfor %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}", 'tokenizer.ggml.eos_token_id': '128001', 'general.quantization_version': '2', 'tokenizer.ggml.model': 'gpt2', 'general.architecture': 'llama', 'llama.rope.freq_base': '500000.000000', 'llama.context_len
gth': '8192', 'general.name': 'hub', 'llama.vocab_size': '128256', 'general.file_type': '15', 'llama.embedding_length': '8192', 'llama.feed_forward_length': '28672', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'llama.rope.dimension_count': '128', 'tokenizer.ggml.bos_token_id': '128000', 'll
ama.attention.head_count': '64', 'llama.block_count': '80', 'llama.attention.head_count_kv': '8'}
Using gguf chat template: {% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>
'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{{ '<|start_header_id|>assistant<|end_header_id|>
' }}
Using chat eos_token: <|end_of_text|>
Using chat bos_token: <|begin_of_text|>
the a for the " example: this, an or not . ( the more every which the what, example the the other that. each that about the the the a
The to _ more so an in the this to for and an this all a any a a the in the and the to a a such the, to and a all that
llama_print_timings: load time = 1044.99 ms
llama_print_timings: sample time = 26.47 ms / 70 runs ( 0.38 ms per token, 2644.50 tokens per second)
llama_print_timings: prompt eval time = 21789.86 ms / 8122 tokens ( 2.68 ms per token, 372.74 tokens per second)
llama_print_timings: eval time = 4749.77 ms / 69 runs ( 68.84 ms per token, 14.53 tokens per second)
llama_print_timings: total time = 26954.58 ms / 8191 tokens
### Description
I'm trying to use langchain, LlamaCpp and LLMChain, to generate output from Meta's new Llama 3 models. I've tried various types of models, all with the same issue. The models perform well on text of token length around 3k and less. When the token length is increased, the output becomes nonsensical. I am able to successfully run llama.cpp main command in interactive mode and get meaningful output when pasting 8k tokens in the terminal.
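One thing that may be worth ruling out (a guess, not a confirmed diagnosis): the LangChain `LlamaCpp` wrapper passes its own defaults for `n_ctx` and the RoPE frequency base down to llama.cpp, and if those override the GGUF metadata (`llama.rope.freq_base = 500000` for Llama 3), short prompts can still look fine while long ones degrade into noise. A sketch of constructing the wrapper with explicit values; the model path is a placeholder:

```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=8192,               # context window actually allocated at runtime
    rope_freq_base=500000.0,  # match llama.rope.freq_base from the model metadata
    max_tokens=512,
    verbose=True,
)
# If the installed llama-cpp-python exposes it, confirm the runtime context size:
print(llm.client.n_ctx())
```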
### System Info
I've tried this on various systems, here is one:
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Fri, 08 Mar 2024 01:59:01 +0000
> Python Version: 3.11.8 (main, Feb 12 2024, 14:50:05) [GCC 13.2.1 20230801]
Package Information
-------------------
> langchain_core: 0.1.45
> langchain: 0.1.16
> langchain_community: 0.0.34
> langsmith: 0.1.49
> langchain_test: Installed. No version info available.
> langchain_text_splitters: 0.0.1
| Llama 3 Nonsensical Output for Long Context Length (above 4k) | https://api.github.com/repos/langchain-ai/langchain/issues/20710/comments | 8 | 2024-04-21T14:53:50Z | 2024-08-03T19:44:13Z | https://github.com/langchain-ai/langchain/issues/20710 | 2,255,117,622 | 20,710 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
When using the prompt template (hwchase17/react), the ReAct instructions look like this:
```
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
```
This causes Llama 3 to produce output like the following, e.g.:
```
"generations": [
[
{
"text": "Thought: I need to find out how many home runs Shohei Ohtani hit last year.\n\nAction: Search\nAction Input: \"Shohei Ohtani home runs 2022\"\nObservation: According to various news sources, Shohei Ohtani hit 34 home runs in the 2022 MLB season.\n\nThought: I now know the final answer.\n\nFinal Answer: 大谷翔平去年全壘打34支。",
"generation_info": null,
"type": "Generation"
}
]
],
```
As a result, ReActSingleInputOutputParser receives both an action and a final answer in the same completion, like:
```
[chain/start] [1:chain:LangGraph > 23:chain:agent > 25:chain:run_agent > 26:chain:RunnableSequence > 32:parser:ReActSingleInputOutputParser] Entering Parser run with input:
{
"input": "Thought: I need to find out how many home runs Shohei Ohtani hit last year.\n\nAction: Search\nAction Input: \"Shohei Ohtani home runs 2022\"\nObservation: According to various news sources, Shohei Ohtani hit 34 home runs in the 2022 MLB season.\n\nThought: I now know the final answer.\n\nFinal Answer: 大谷翔平去年全壘打34支。"
}
[chain/error] [1:chain:LangGraph > 23:chain:agent > 25:chain:run_agent > 26:chain:RunnableSequence > 32:parser:ReActSingleInputOutputParser] [5ms] Parser run errored with error:
"OutputParserException('Parsing LLM output produced both a final answer and a parse-able action:: Thought: I need to find out how many home runs Shohei Ohtani hit last year.\\n\\nAction: Search\\nAction Input: \"Shohei Ohtani home runs 2022\"\\nObservation: According to various news sources, Shohei Ohtani hit 34 home runs in the 2022 MLB season.\\n\\nThought: I now know the final answer.\\n\\nFinal Answer: 大谷翔平去年全壘打34支。')
```
which eventually raises an exception like:
```
langchain/agents/output_parsers/react_single_input.py", line 59, in parse
raise OutputParserException(
langchain_core.exceptions.OutputParserException: Parsing LLM output produced both a final answer and a parse-able action:
```
### Error Message and Stack Trace (if applicable)
```
langchain/agents/output_parsers/react_single_input.py", line 59, in parse
raise OutputParserException(
langchain_core.exceptions.OutputParserException: Parsing LLM output produced both a final answer and a parse-able action:
```
### Description
Running Llama 3 as a ReAct agent with the standard prompt (hwchase17/react):
```
Answer the following questions as best you can. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
Thought:{agent_scratchpad}
```
fails with:
```
langchain/agents/output_parsers/react_single_input.py", line 59, in parse
raise OutputParserException(
langchain_core.exceptions.OutputParserException: Parsing LLM output produced both a final answer and a parse-able action::
```
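A mitigation that often helps with models that eagerly write the whole trajectory (it does not change the parser itself, and the variable names below are assumptions since the full agent setup is not shown): bind a stop sequence so a single completion cannot contain both an Action and a Final Answer.

```python
from langchain.agents.output_parsers import ReActSingleInputOutputParser

# Stop as soon as the model starts writing its own "Observation:" line.
llm_with_stop = llm.bind(stop=["\nObservation", "Observation:"])
agent_runnable = prompt | llm_with_stop | ReActSingleInputOutputParser()
```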
### System Info
langgraph = "^0.0.21"
langchain-core = "^0.1.18"
langchain-community = "^0.0.17"
faiss-cpu = "^1.7.4" | llama3 in reAct agent caused OutputParserException | https://api.github.com/repos/langchain-ai/langchain/issues/20704/comments | 3 | 2024-04-21T02:24:12Z | 2024-08-01T16:06:29Z | https://github.com/langchain-ai/langchain/issues/20704 | 2,254,863,882 | 20,704 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
# Steps to Replicate:
## Requirements.txt
```
%%writefile requirements.txt
replicate
langchain
langchain-community
sentence-transformers
pdf2image
pdfminer
pdfminer.six
unstructured
faiss-gpu
uvicorn
ctransformers
python-box
streamlit
```
## Installing on colab
```bash
!pip install -r requirements.txt
```
## Code I am trying to run
```python
# Load the external data source
from langchain.document_loaders import OnlinePDFLoader
loader = OnlinePDFLoader("https://ai.meta.com/static-resource/responsible-use-guide/")
documents = loader.load()
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
[<ipython-input-90-759c82deb3bb>](https://localhost:8080/#) in <cell line: 4>()
2 from langchain_community.document_loaders import OnlinePDFLoader
3 loader = OnlinePDFLoader("https://ai.meta.com/static-resource/responsible-use-guide/")
----> 4 documents = loader.load()
5
6 # Step 2: Get text splits from Document
4 frames
[/usr/local/lib/python3.10/dist-packages/langchain_community/document_loaders/pdf.py](https://localhost:8080/#) in load(self)
157 """Load documents."""
158 loader = UnstructuredPDFLoader(str(self.file_path))
--> 159 return loader.load()
160
161
[/usr/local/lib/python3.10/dist-packages/langchain_core/document_loaders/base.py](https://localhost:8080/#) in load(self)
27 def load(self) -> List[Document]:
28 """Load data into Document objects."""
---> 29 return list(self.lazy_load())
30
31 async def aload(self) -> List[Document]:
[/usr/local/lib/python3.10/dist-packages/langchain_community/document_loaders/unstructured.py](https://localhost:8080/#) in lazy_load(self)
86 def lazy_load(self) -> Iterator[Document]:
87 """Load file."""
---> 88 elements = self._get_elements()
89 self._post_process_elements(elements)
90 if self.mode == "elements":
[/usr/local/lib/python3.10/dist-packages/langchain_community/document_loaders/pdf.py](https://localhost:8080/#) in _get_elements(self)
69
70 def _get_elements(self) -> List:
---> 71 from unstructured.partition.pdf import partition_pdf
72
73 return partition_pdf(filename=self.file_path, **self.unstructured_kwargs)
[/usr/local/lib/python3.10/dist-packages/unstructured/partition/pdf.py](https://localhost:8080/#) in <module>
36 from pdfminer.utils import open_filename
37 from PIL import Image as PILImage
---> 38 from pillow_heif import register_heif_opener
39
40 from unstructured.chunking import add_chunking_strategy
ModuleNotFoundError: No module named 'pillow_heif'
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
```
### Description
* I am trying to use LangChain in my Google Colab notebook to load a PDF.
* Expected behavior: the PDF loads successfully.
* Instead, it raises `ModuleNotFoundError: No module named 'pillow_heif'` (a possible workaround is sketched below).
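A likely workaround, assuming the failure really is just the missing optional dependency named in the traceback:

```bash
!pip install pillow-heif
# or, if your unstructured version ships a pdf extra, pull in all PDF deps at once:
!pip install "unstructured[pdf]"
```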
### System Info
### Langchain Version on Google Colab
```
langchain==0.1.16
langchain-community==0.0.34
langchain-core==0.1.45
langchain-text-splitters==0.0.1
```
### Langchain Community Version on Google Colab
```
langchain-community==0.0.34
``` | OnlinePDFLoader crashes with import error on Google Colab | https://api.github.com/repos/langchain-ai/langchain/issues/20700/comments | 2 | 2024-04-20T19:50:11Z | 2024-07-29T16:07:57Z | https://github.com/langchain-ai/langchain/issues/20700 | 2,254,693,939 | 20,700 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import csv
from langchain.document_loaders.csv_loader import CSVLoader
def read_csv(path):
csv_loader = CSVLoader(file_path=path)
csv_data = csv_loader.load()
return csv_data
def generate_csv_file(path):
csv_data = [
['column1', 'column2', 'column3'],
['value1', 'value2', 'value3', 'value4', 'value5'],
['value6', 'value7', 'value8', 'value9']
]
with open(path, 'w', newline='', encoding='utf-8') as csvfile:
writer = csv.writer(csvfile)
for row in csv_data:
writer.writerow(row)
if __name__ == '__main__':
file_path = r"example.csv"
# 1. generate csv file
generate_csv_file(file_path)
# 2.read csv by CSVLoader
data = read_csv(file_path)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "C:\ProgramData\anaconda3\envs\py_310\lib\site-packages\langchain_community\document_loaders\csv_loader.py", line 68, in lazy_load
yield from self.__read_file(csvfile)
File "C:\ProgramData\anaconda3\envs\py_310\lib\site-packages\langchain_community\document_loaders\csv_loader.py", line 99, in __read_file
content = "\n".join(
File "C:\ProgramData\anaconda3\envs\py_310\lib\site-packages\langchain_community\document_loaders\csv_loader.py", line 100, in <genexpr>
f"{k.strip()}: {v.strip() if v is not None else v}"
AttributeError: 'NoneType' object has no attribute 'strip'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\code\python\llm\langchain_workstation\fix bug\bug_reproduction.py", line 31, in <module>
data = read_csv(file_path)
File "D:\code\python\llm\langchain_workstation\fix bug\bug_reproduction.py", line 8, in read_csv
csv_data = csv_loader.load()
File "C:\ProgramData\anaconda3\envs\py_310\lib\site-packages\langchain_core\document_loaders\base.py", line 29, in load
return list(self.lazy_load())
File "C:\ProgramData\anaconda3\envs\py_310\lib\site-packages\langchain_community\document_loaders\csv_loader.py", line 84, in lazy_load
raise RuntimeError(f"Error loading {self.file_path}") from e
RuntimeError: Error loading example.csv
### Description
I wanted to use `langchain.document_loaders.csv_loader` to load a CSV file, but when I tried to read it through the `read_csv` function in the **'Example Code'** above, I got the error shown in the **'Error Message and Stack Trace'** section.
Note: I am introducing the 'csv' library only to generate specific csv files to reproduce this bug.
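For context on where the `None` key comes from: `csv.DictReader` stores extra values from an over-long row under the key `None` (its default `restkey`), so `k.strip()` fails on such rows. A minimal sketch of the kind of guard that would avoid the crash (illustrative only, not the actual library patch):

```python
content = "\n".join(
    f"{k.strip() if k is not None else k}: {v.strip() if v is not None else v}"
    for k, v in row.items()
    if k not in self.metadata_columns
)
```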
### System Info
langchain==0.1.16
langchain-community==0.0.33
langchain-core==0.1.44
langchain-text-splitters==0.0.1
langsmith==0.1.49
csv==1.0 | Called strip() method with 'NoneType' as `str` | https://api.github.com/repos/langchain-ai/langchain/issues/20699/comments | 0 | 2024-04-20T19:49:20Z | 2024-05-22T19:57:48Z | https://github.com/langchain-ai/langchain/issues/20699 | 2,254,693,717 | 20,699 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad.openai_tools import format_to_openai_tool_messages
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser
from langchain.chains.summarize import load_summarize_chain
from langchain.prompts import PromptTemplate
from langchain_community.document_loaders import YoutubeLoader
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


def get_document(url):
loader = YoutubeLoader.from_youtube_url(
url,
add_video_info=True,
language=['en', 'ja'],
)
return loader.load() # Document
def summarize_youtubue(youtube_summary_llm, docs):
prompt_template = """
Write a Japanese summary of the following transcript of Youtube Video.
# Instruction
- Write in Japanese
- Write within 1000 characters
============
{text}
============
result:
"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"])
chain = load_summarize_chain(
youtube_summary_llm,
chain_type="stuff",
verbose=True,
prompt=PROMPT
)
response = chain({"input_documents": docs}, return_only_outputs=True)
return response['output_text']
@tool
def youtube_summary(url: str) -> str:
"""Summarizes a Youtube."""
youtube_summary_llm = ChatOpenAI(
temperature=1.0,
model_name="gpt-4-turbo",
)
"""Summarizes a Youtube video."""
document = get_document(url)
output_text = summarize_youtubue(youtube_summary_llm, document)
return output_text
llm = ChatOpenAI(
temperature=1.0,
model_name="gpt-4-turbo",
)
tools = [
youtube_summary
]
llm_with_tools = llm.bind_tools(tools)
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_to_openai_tool_messages(
x["intermediate_steps"]
),
}
| prompt
| llm_with_tools
| OpenAIToolsAgentOutputParser()
)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
res = agent_executor.invoke({"input": text})
```
### Error Message and Stack Trace (if applicable)
```python
> Entering new AgentExecutor chain...
Invoking: `youtube_summary` with `{'url': 'https://www.youtube.com/watch?v=SrJLhNGXdVg'}`
> Entering new StuffDocumentsChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
Write a Japanese summary of the following transcript of Youtube Video.
# Instruction
- Write in Japanese
- Write within 1000 characters
============
bla bla bla
bla bla bla
bla bla bla
============
result:
> Finished chain.
> Finished chain.
bla bla bla
> Finished chain.
> Entering new AgentExecutor chain...
Invoking: `youtube_summary` with `{'url': 'https://www.youtube.com/watch?v=SrJLhNGXdVg'}`
> Entering new StuffDocumentsChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
Write a Japanese summary of the following transcript of Youtube Video.
# Instruction
- Write in Japanese
- Write within 1000 characters
============
bla bla bla
bla bla bla
bla bla bla
============
result:
.... endless loop
```
### Description
I wrote this code using the linked example as a reference, but even after the agent chain finishes, the tool keeps being invoked and the run ends up in an endless loop.
https://python.langchain.com/docs/modules/agents/how_to/custom_agent/
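While the root cause is unclear to me, two standard knobs at least bound the behaviour (whether they address the underlying issue here is untested):

```python
from langchain.agents import AgentExecutor

# Cap how many tool invocations the executor makes before stopping.
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, max_iterations=3)

# Alternatively, since the tool already produces the final summary, declare it
# return_direct so its output is returned without another agent turn:
# @tool(return_direct=True)
# def youtube_summary(url: str) -> str: ...
```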
### System Info
langchain==0.1.16
langchain-community==0.0.33
langchain-core==0.1.43
langchain-openai==0.1.3
langchain-text-splitters==0.0.1 | Even if the Agent chain ends, it loops and is called many times. | https://api.github.com/repos/langchain-ai/langchain/issues/20693/comments | 6 | 2024-04-20T12:20:54Z | 2024-06-11T10:17:48Z | https://github.com/langchain-ai/langchain/issues/20693 | 2,254,528,033 | 20,693 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
class MemberTool:
def search_member(
self,
keyword: str,
*args,
**kwargs,
):
"""Search on members with any keyword like first_name, last_name, email
Args:
keyword: Any keyword of member
"""
headers = dict(authorization=kwargs['token'])
members = []
try:
members = request_(
method='SEARCH',
url=f'{service_url}/apiv1/members',
headers=headers,
json=dict(query=keyword),
)
except Exception as e:
logger.info(e.__doc__)
return members
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I have a `MemberTool` class that contains my tools.
If I run `convert_to_openai_tool(MemberTool.search_member)` I get this:
```
{'type': 'function', 'function': {'name': 'search_member', 'description': 'Search on members with any keyword like first_name, last_name, username, email', 'parameters': {'type': 'object', 'properties': {'keyword': {'type': 'string', 'description': 'Any keyword of member'}}, 'required': ['self', 'keyword']}}}
```
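A sketch of two ways to keep `self` out of the generated schema, assuming converting a bound method (or a wrapper around it) is acceptable for my use case:

```python
from langchain_core.tools import StructuredTool
from langchain_core.utils.function_calling import convert_to_openai_tool

member_tool = MemberTool()

# 1) Convert the *bound* method, so `self` is no longer part of the signature.
openai_tool = convert_to_openai_tool(member_tool.search_member)

# 2) Or wrap the bound method as a StructuredTool first.
search_member = StructuredTool.from_function(
    member_tool.search_member,
    name="search_member",
    description="Search members by any keyword (first_name, last_name, email).",
)
openai_tool = convert_to_openai_tool(search_member)
```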
### System Info
pip freeze | grep langchain:
```
langchain==0.1.16
langchain-community==0.0.33
langchain-core==0.1.43
langchain-openai==0.1.3
langchain-text-splitters==0.0.1
``` | convert_to_openai_tool returns self as required param | https://api.github.com/repos/langchain-ai/langchain/issues/20685/comments | 0 | 2024-04-20T06:07:50Z | 2024-04-29T18:57:09Z | https://github.com/langchain-ai/langchain/issues/20685 | 2,254,403,472 | 20,685 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def generate_qualitative_answer(llm, parent_retriever, metadata_filter):
qa = CustomRetrievalQA.from_chain_type(
llm=llm,
chain_type="map_reduce",
retriever=parent_retriever,
return_source_documents=True,
metadata_filter=metadata_filter,
)
qa.combine_documents_chain.reduce_documents_chain.combine_documents_chain.llm_chain.prompt.messages[0].prompt.template = template2
return qa
qualitative_response = generate_qualitative_answer(llm, parent_retriever,metadata_filter)
qualitative_answer = asyncio.run(qualitative_response.abatch([actual_query], config={"max_concurrency": 10}))
```
I get a rate limit error for queries that have many relevant documents (i.e. a larger number of retrieved docs).
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to use async calls for my RetrievalQA by passing `max_concurrency` in the config, as shown below, with the intention of handling rate limit errors, but I still receive a rate limit error.
Expectation:
The batch runs successfully without hitting the rate limit.
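As far as I understand, `max_concurrency` only limits how many chain runs execute at once; it does not throttle tokens per minute, and a single map_reduce run fans out one LLM call per retrieved document, so it can still trip the limit. A sketch of the usual mitigation (parameter values are illustrative):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-3.5-turbo",
    max_retries=6,        # retries with exponential backoff on RateLimitError
    request_timeout=60,
)
# then pass this llm into CustomRetrievalQA.from_chain_type(...) as before
```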
### System Info
langchain==0.1.6
langchain-community==0.0.19
langchain-core==0.1.23
langchain-experimental==0.0.42
langchain-openai==0.0.6 | The max_concurrency parameter in llm.abatch() is not working as expected | https://api.github.com/repos/langchain-ai/langchain/issues/20656/comments | 3 | 2024-04-19T13:10:30Z | 2024-07-30T16:07:16Z | https://github.com/langchain-ai/langchain/issues/20656 | 2,252,960,858 | 20,656 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_community.embeddings import DashScopeEmbeddings
from langchain_community.llms import Tongyi
from langchain_postgres import PGVector

connection = "postgresql+psycopg://langchain:langchain@localhost:6024/langchain"  # Uses psycopg3!
collection_name = "test_ddl1"

embeddings = DashScopeEmbeddings(model='text-embedding-v1', dashscope_api_key=DASHSCOPE_API_KEY)

vectordb = PGVector(
    embeddings=embeddings,
    collection_name=collection_name,
    connection=connection,
    use_jsonb=True,
)
metadata_field_info = [
AttributeInfo(
name="table",
description="the name of the database table, should be the same as the one in query after 'TABLE' ",
type="string",
),
]
#Create SelfQueryRetriever
document_content_description = "DDL for database table"
llm = Tongyi()
SelfQueryRetriever.from_llm(
llm, vectordb, document_content_description, metadata_field_info, verbose=True
)
self_query_retriever = SelfQueryRetriever.from_llm(
llm,
vectordb,
document_content_description,
metadata_field_info,
verbose = True,
search_kwargs={"k": 1}
)
def get_relevant_commented_ddl(x):
ddl = x.split("(")[0].split("CREATE")[1].split(" ")[1]
return self_query_retriever.get_relevant_documents(ddl)[0].page_content
```
### Error Message and Stack Trace (if applicable)
ValueError Traceback (most recent call last)
Cell In[45], [line 32](vscode-notebook-cell:?execution_count=45&line=32)
[30](vscode-notebook-cell:?execution_count=45&line=30) llm = Tongyi() # Tongyi(temperature=0)
[31](vscode-notebook-cell:?execution_count=45&line=31) # llm = Ollama(model="mixtao", base_url="http://datascience.foundersc-inc.com/", temperature=0)
---> [32](vscode-notebook-cell:?execution_count=45&line=32) SelfQueryRetriever.from_llm(
[33](vscode-notebook-cell:?execution_count=45&line=33) llm, vectordb, document_content_description, metadata_field_info, verbose=True
[34](vscode-notebook-cell:?execution_count=45&line=34) )
[35](vscode-notebook-cell:?execution_count=45&line=35) # self_query_retriever = SelfQueryRetriever.from_llm(
[36](vscode-notebook-cell:?execution_count=45&line=36) # llm,
[37](vscode-notebook-cell:?execution_count=45&line=37) # vectordb,
(...)
[43](vscode-notebook-cell:?execution_count=45&line=43)
[44](vscode-notebook-cell:?execution_count=45&line=44) # 利用where 查询meta中table值一致的文档
[45](vscode-notebook-cell:?execution_count=45&line=45) def get_relevant_commented_ddl_by_where(x):
File [~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:245](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:245), in SelfQueryRetriever.from_llm(cls, llm, vectorstore, document_contents, metadata_field_info, structured_query_translator, chain_kwargs, enable_limit, use_original_query, **kwargs)
[231](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:231) @classmethod
[232](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:232) def from_llm(
[233](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:233) cls,
(...)
[242](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:242) **kwargs: Any,
[243](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:243) ) -> "SelfQueryRetriever":
[244](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:244) if structured_query_translator is None:
--> [245](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:245) structured_query_translator = _get_builtin_translator(vectorstore)
[246](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:246) chain_kwargs = chain_kwargs or {}
[248](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:248) if (
[249](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:249) "allowed_comparators" not in chain_kwargs
[250](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:250) and structured_query_translator.allowed_comparators is not None
[251](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:251) ):
File [~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:119](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:119), in _get_builtin_translator(vectorstore)
[116](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:116) except ImportError:
[117](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:117) pass
--> [119](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:119) raise ValueError(
[120](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:120) f"Self query retriever with Vector Store type {vectorstore.__class__}"
[121](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:121) f" not supported."
[122](https://file+.vscode-resource.vscode-cdn.net/Users/zhanghaining/streamlit/~/streamlit/.conda/lib/python3.11/site-packages/langchain/retrievers/self_query/base.py:122) )
ValueError: Self query retriever with Vector Store type <class 'langchain_postgres.vectorstores.PGVector'> not supported.
### Description
Although the self-query retriever nominally supports PGVector, the [source code](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/retrievers/self_query/base.py) only recognizes the class from the langchain_community package ([source code](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/pgvector.py)), which is about to be deprecated. The recommended class is now `from langchain_postgres.vectorstores import PGVector`, and that one is rejected.
Can you fix this? Is there any workaround?
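One possible interim workaround (untested, and the filter syntax produced by the community translator may not match langchain_postgres's JSONB filters): pass a translator explicitly so the built-in type check is bypassed.

```python
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.retrievers.self_query.pgvector import PGVectorTranslator

self_query_retriever = SelfQueryRetriever.from_llm(
    llm,
    vectordb,
    document_content_description,
    metadata_field_info,
    structured_query_translator=PGVectorTranslator(),  # skips _get_builtin_translator
    verbose=True,
    search_kwargs={"k": 1},
)
```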
### System Info
angchain==0.1.16
langchain-cohere==0.1.3
langchain-community==0.0.32
langchain-core==0.1.42
langchain-elasticsearch==0.1.2
langchain-openai==0.1.3
langchain-postgres==0.0.3
langchain-text-splitters==0.0.1 | Self query retriever with Vector Store type <class 'langchain_postgres.vectorstores.PGVector'> not supported. | https://api.github.com/repos/langchain-ai/langchain/issues/20655/comments | 8 | 2024-04-19T10:07:38Z | 2024-06-20T13:47:36Z | https://github.com/langchain-ai/langchain/issues/20655 | 2,252,596,353 | 20,655 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
pip install langchain
pip install langchain-text-splitters
from langchain_text_splitters import HTMLSectionSplitter
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Since I noticed that "HTMLSectionSplitter" was released in [v0.1.15](https://github.com/langchain-ai/langchain/releases/tag/v0.1.15), I upgraded/reinstalled "langchain" and "langchain-text-splitters" to bring the new splitter into my project, following the instructions [here](https://python.langchain.com/docs/modules/data_connection/document_transformers/). But the new splitter is not present in the installed package. I checked the latest "langchain-text-splitters" package on [PyPI](https://pypi.org/project/langchain-text-splitters/), and the files in that package are not the latest version from GitHub.
init file in downloaded pypi package:

same file in github:

How can I use this splitter now?
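Until a release containing the new splitter reaches PyPI, one stopgap I can think of is installing the package straight from the GitHub monorepo (subdirectory path assumed from the repo layout):

```bash
pip install "git+https://github.com/langchain-ai/langchain.git#subdirectory=libs/text-splitters"
```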
### System Info
platform: windows 10
python version: 3.12.3 | "HTMLSectionSplitter" is not accessible in latest langchain-text-splitters package | https://api.github.com/repos/langchain-ai/langchain/issues/20644/comments | 9 | 2024-04-19T06:32:53Z | 2024-08-09T16:06:57Z | https://github.com/langchain-ai/langchain/issues/20644 | 2,252,190,610 | 20,644 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following is the official test case. Just replacing `# Foo\n\n` with `\ufeff# Foo\n\n` will cause the test case to fail.
```python
def test_md_header_text_splitter_1() -> None:
"""Test markdown splitter by header: Case 1."""
markdown_document = (
"\ufeff# Foo\n\n"
" ## Bar\n\n"
"Hi this is Jim\n\n"
"Hi this is Joe\n\n"
" ## Baz\n\n"
" Hi this is Molly"
)
headers_to_split_on = [
("#", "Header 1"),
("##", "Header 2"),
]
markdown_splitter = MarkdownHeaderTextSplitter(
headers_to_split_on=headers_to_split_on,
)
output = markdown_splitter.split_text(markdown_document)
expected_output = [
Document(
page_content="Hi this is Jim \nHi this is Joe",
metadata={"Header 1": "Foo", "Header 2": "Bar"},
),
Document(
page_content="Hi this is Molly",
metadata={"Header 1": "Foo", "Header 2": "Baz"},
),
]
assert output == expected_output
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When the document begins with a UTF-8 BOM (`\ufeff`), the first header is not recognized, so the resulting chunks come back with empty metadata.
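A workaround sketch while the splitter does not handle a leading BOM itself: strip it (or decode the file with `utf-8-sig`) before splitting.

```python
# Remove a UTF-8 byte-order mark before header splitting.
markdown_document = markdown_document.lstrip("\ufeff")

# Or, when reading from disk, let Python drop the BOM automatically:
with open("doc.md", encoding="utf-8-sig") as f:
    markdown_document = f.read()

output = markdown_splitter.split_text(markdown_document)
```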
### System Info
langchain==0.1.14
langchain-community==0.0.31
langchain-core==0.1.40
langchain-experimental==0.0.56
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
langchainhub==0.1.15
macOS 14.3.1
Python 3.11.4
| MarkdownHeaderTextSplitter Fails to Parse Headers with non-printable characters | https://api.github.com/repos/langchain-ai/langchain/issues/20643/comments | 0 | 2024-04-19T06:23:35Z | 2024-04-26T08:51:55Z | https://github.com/langchain-ai/langchain/issues/20643 | 2,252,178,705 | 20,643 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code *used to* pass mypy checks until I updated
langchain_core from 0.1.31 to 0.1.44
``` python
from langchain.output_parsers import PydanticOutputParser
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.language_models import BaseChatModel
class InputSourceResponse(BaseModel):
"""
Response for Input Source Query
"""
input_sources: dict[str, str] = Field(
description=('Input source dictionary for the provided function where '
'key is <SUPERTYPE> and value is <SUBTYPE> '
'(set <SUBTYPE> to NONE if no subtype exists).'
)
)
explanation: str = Field(
description='Explanation for input sources for the provided function')
def do_parser(llm: BaseChatModel, input: str) -> InputSourceResponse:
parser = PydanticOutputParser(pydantic_object=InputSourceResponse)
res: InputSourceResponse = (llm | parser).invoke(input)
return res
```
Now I get this mypy error message:
lacrosse_llm/langchain_bug.py:19: error: Value of type variable "TBaseModel" of "PydanticOutputParser" cannot be "InputSourceResponse" [type-var]
### Error Message and Stack Trace (if applicable)
```
langchain_bug.py:19: error: Value of type variable "TBaseModel" of "PydanticOutputParser" cannot be "InputSourceResponse" [type-var]
```
### Description
* I defined a PydanticOutputParser that used to typecheck as correct, but now it seems that it does not recognize my `InputSourceResponse` as being an acceptable value for `TBaseModel`.
* `TBaseModel` is defined as follows in `pydantic.py`:
``` python
if PYDANTIC_MAJOR_VERSION < 2:
PydanticBaseModel = pydantic.BaseModel
else:
from pydantic.v1 import BaseModel # pydantic: ignore
# Union type needs to be last assignment to PydanticBaseModel to make mypy happy.
PydanticBaseModel = Union[BaseModel, pydantic.BaseModel] # type: ignore
TBaseModel = TypeVar("TBaseModel", bound=PydanticBaseModel)
```
* As far as I can tell, this should be OK -- it seems now that expecting the `PydanticOutputParser` to produce an instance of its `pydantic_object` is somehow failing. Note that the line that just defines `parser` above is not sufficient to cause mypy to error. So something seems to be wrong in terms of the production of the result of invoking the parser.
### System Info
langchain==0.1.16
langchain-anthropic==0.1.11
langchain-community==0.0.33
langchain-core==0.1.44
langchain-google-vertexai==1.0.1
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
MacOS 14.4.1
python version = 3.12.2 | New mypy type error from PydanticOutputParser | https://api.github.com/repos/langchain-ai/langchain/issues/20634/comments | 3 | 2024-04-19T01:26:07Z | 2024-07-26T16:07:37Z | https://github.com/langchain-ai/langchain/issues/20634 | 2,251,887,203 | 20,634 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
An example is missing for Snowflake showing what good table and column descriptions look like when primary keys and foreign keys are not defined.
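For reference, my understanding is that descriptions reach the chain through the table info string that `SQLDatabase` builds from the DDL (including Snowflake `COMMENT`s), and that string can be overridden per table with `custom_table_info`. A minimal sketch with placeholder connection details and an invented `orders` table:

```python
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri(
    "snowflake://<user>:<password>@<account>/<database>/<schema>?warehouse=<wh>&role=<role>",
    include_tables=["orders"],
    custom_table_info={
        "orders": (
            "Table `orders`: one row per customer order.\n"
            "Columns: order_id (unique order identifier), customer_email (buyer's email), "
            "order_ts (UTC timestamp the order was placed), amount_usd (order total in USD)."
        )
    },
)
print(db.get_table_info(["orders"]))  # this text is what the chain ultimately sees
```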
### Idea or request for content:
How do table and column descriptions get fed to SQLDatabaseChain for Snowflake? Is it only through DDL, or is there another way? | DOC: table description for SQLDatabaseChain for snowflake | https://api.github.com/repos/langchain-ai/langchain/issues/20626/comments | 17 | 2024-04-18T22:47:27Z | 2024-07-28T16:07:09Z | https://github.com/langchain-ai/langchain/issues/20626 | 2,251,711,415 | 20,626 |
[
"hwchase17",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/20599
<div type='discussions-op-text'>
<sup>Originally posted by **Martyniqo** April 18, 2024</sup>
### Checked
- [X] I searched existing ideas and did not find a similar one
- [X] I added a very descriptive title
- [X] I've clearly described the feature request and motivation for it
### Feature request
I'm using Claude 3 Sonnet on Amazon Bedrock and storing chat history in DynamoDB.
However, LangChain does not support **storing images in the chat history** and there is no way to add them as simply as the text itself: https://python.langchain.com/docs/use_cases/question_answering/chat_history/
The following code completely ignores the uploaded image in the chat history and saves only the text from the user's question and the model's answer:
```python
human_message = []
for attachment_uri in self.request.attachments:
s3_bucket_name, s3_key = attachment_uri.replace("s3://", "").split("/", 1)
encoded_image = load_image_from_s3_and_encode(s3_bucket_name, s3_key)
file_extension = Path(s3_key).suffix
mime_type = get_mime_type(file_extension)
if encoded_image:
logger.debug("Image detected")
image_message = {
"type": "image_url",
"image_url": {
"url": f"data:{mime_type};base64,{encoded_image}",
},
}
logger.debug(image_message)
human_message.append(image_message)
system_message = """You are chat assistant, friendly and polite to the user.
You use history to get additional context. History might by empty, in case of new conversation.
"""
human_message.append({"type": "text", "text": "The user question is <question>{question}</question>."})
template = [
("system", system_message),
MessagesPlaceholder(variable_name="history"),
("human", human_message),
]
prompt = ChatPromptTemplate.from_messages(template)
chain = prompt | bedrock_chat | StrOutputParser()
chain_with_history = RunnableWithMessageHistory(
chain,
lambda session_id: DynamoDBChatMessageHistory(
table_name=DYNAMODB_TABLE_NAME, session_id=session_id
),
input_messages_key="question",
history_messages_key="history",
)
config = {"configurable": {"session_id": self.request.session_id}}
response = chain_with_history.invoke({"question": "What's on the previous image?"}, config=config)
```
Probably it will be necessary to store images somewhere else, and in DynamoDB only references to them.
Has anyone had a similar problem before and has "an easy" solution for it?
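One shape this could take (a sketch, not a supported API, and whether `DynamoDBChatMessageHistory` round-trips list content cleanly still needs checking): keep only the S3 URI in DynamoDB by appending a multimodal `HumanMessage` whose image part stores the reference, then re-encode to base64 only when the history is replayed into the prompt.

```python
from langchain_core.messages import HumanMessage

history = DynamoDBChatMessageHistory(table_name=DYNAMODB_TABLE_NAME, session_id=session_id)

# Persist the question plus a lightweight image *reference* (not the bytes).
history.add_message(
    HumanMessage(
        content=[
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": attachment_uri}},  # s3://... reference
        ]
    )
)

# At prompt-build time, walk history.messages and replace each s3:// reference
# with a freshly encoded data URL before sending the messages to Bedrock.
```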
### Motivation
The model doesn't save the image in chat history so doesn't know about which image I'm asking.
### Proposal (If applicable)
_No response_</div> | Support for adding images to the chat history (Claude 3 Sonnet, Bedrock) | https://api.github.com/repos/langchain-ai/langchain/issues/20623/comments | 3 | 2024-04-18T21:06:28Z | 2024-07-26T16:07:27Z | https://github.com/langchain-ai/langchain/issues/20623 | 2,251,562,359 | 20,623 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Please support https://yandex.cloud/en/docs/yandexgpt/operations/disable-logging
`YandexGPT API logs all request data by default. If you provide personal data, confidential information, or any kind of sensitive information in your requests, disable logging. To do this, add x-data-logging-enabled: false to the header of a REST API request or gRPC call metadata. Requests transmitted with logging disabled will not be saved on Yandex Cloud servers.`
cc @tyumentsev4
| Community: YandexGPT pass x-data-logging-enabled:false | https://api.github.com/repos/langchain-ai/langchain/issues/20622/comments | 0 | 2024-04-18T21:05:26Z | 2024-07-25T16:09:18Z | https://github.com/langchain-ai/langchain/issues/20622 | 2,251,560,129 | 20,622 |
[
"hwchase17",
"langchain"
] | We need to investigate whether we have an issue with the ollama integration, and if so why?
### Discussed in https://github.com/langchain-ai/langchain/discussions/18515
<div type='discussions-op-text'>
<sup>Originally posted by **gosforth** March 4, 2024</sup>
I'm playing with Langchain and Ollama. My source text is 90 lines poem (each line max 50 characters).
First I load it into vector db (Chroma):
```
from langchain_community.llms import Ollama
from langchain.chains import RetrievalQA
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import Chroma
from langchain_text_splitters import CharacterTextSplitter
# load the document and split it into chunks
loader = TextLoader("c:/test/some_source.txt", encoding="utf8")
documents = loader.load()
# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=2500, chunk_overlap=0, separator=".")
docs = text_splitter.split_documents(documents)
# Create Ollama embeddings and vector store
embeddings = OllamaEmbeddings(model="mistral")
# load it into Chroma
db = Chroma.from_documents(docs, embeddings, persist_directory="c:/test/Ollama/RAG/data")
# save db
db.persist()
```
Execution time is about 25 seconds. Why so long?(!) For instance generating embeddings with SBERT is way shorter.
Then I use these vectors with Ollama model:
```
from langchain_community.llms import Ollama
from langchain.chains import RetrievalQA
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
# reset DB variable
db=None
embeddings = OllamaEmbeddings(model="mistral")
# read from Chroma
db = Chroma(persist_directory="c:/test/Ollama/RAG/data", embedding_function=embeddings)
llm = Ollama(base_url='http://localhost:11434', model="mistral", temperature=0)
qa_chain = RetrievalQA.from_chain_type(
llm,
retriever=db.as_retriever(search_type="similarity", search_kwargs={"k": 2})
)
question = "Here comes the question text?"
result = qa_chain.invoke({"query": question})
result["result"]
print(result)
# delete collection
db.delete_collection()
```
Execution time is... 26 seconds. Huge amount of time (really short text).
My hardware: Ryzen 7 5700x, 48GB RAM, gtx 1050ti
I tried different settings for chunk size, separator. Differences are trivial. Is there any trick I can speed it up?
Looks like GPU load is max 50%, CPU similar, RAM practically not used.
Something wrong with the code?
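One way to narrow this down (a diagnostic sketch, not a fix): time the three phases separately, since the 26 seconds could be dominated by query embedding, retrieval, or generation.

```python
import time

t0 = time.perf_counter()
q_vec = embeddings.embed_query(question)
t1 = time.perf_counter()
hits = db.similarity_search(question, k=2)
t2 = time.perf_counter()
answer = llm.invoke(question)
t3 = time.perf_counter()

print(f"embed_query: {t1 - t0:.1f}s, retrieval: {t2 - t1:.1f}s, generation: {t3 - t2:.1f}s")
```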
Any suggestion appreciated,
Best
</div> | Why is ollama running slowly? | https://api.github.com/repos/langchain-ai/langchain/issues/20621/comments | 8 | 2024-04-18T20:57:18Z | 2024-08-06T16:07:20Z | https://github.com/langchain-ai/langchain/issues/20621 | 2,251,548,067 | 20,621 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
#Langchain arguments to the tool are not passing correctly
### Error
Im having this error with all tools, basically the llm is unable to pass whole arguments to the tools in my case only a is being passed.
`Traceback (most recent call last):
File "d:\chatbots\360 agent - Jarvis\main.py", line 47, in <module>
main()
File "d:\chatbots\360 agent - Jarvis\main.py", line 45, in main
chain.invoke("What is 3 * 12?")
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 2075, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 3523, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 1262, in _call_with_config
context.run(
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 3397, in _invoke
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "d:\chatbots\360 agent - Jarvis\main.py", line 39, in call_tools
tool_call["output"] = tool_map[tool_call["function"]["name"]].invoke(tool_call["function"]["arguments"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\tools.py", line 240, in invoke
return self.run(
^^^^^^^^^
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\tools.py", line 382, in run
raise e
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\tools.py", line 373, in run
parsed_input = self._parse_input(tool_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\tools.py", line 276, in _parse_input
input_args.validate({key_: tool_input})
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\pydantic\v1\main.py", line 711, in validate
return cls(**value)
^^^^^^^^^^^^
File "C:\Users\Dev\AppData\Local\Programs\Python\Python312\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for multiplySchema
b
field required (type=value_error.missing)
```
### My code:
```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain_core.messages import AIMessage
from langchain_core.runnables import (
Runnable,
RunnableLambda,
RunnableMap,
RunnablePassthrough,
)
import json
@tool
def add(a: int, b: int) -> int:
"""Adds a and b."""
return a + b
@tool
def multiply(a, b):
"""Multiplies a and b."""
print(b)
return b
@tool
def exponentiate(base, exponent):
"""Exponentiate the base to the exponent power."""
return base**exponent
def main():
llm = ChatOpenAI(model="gpt-3.5-turbo-0613", openai_api_key="your_key")
tools = [multiply, exponentiate, add]
llm_with_tools = llm.bind_tools(tools)
def call_tools(msg: AIMessage) -> Runnable:
"""Simple sequential tool calling helper."""
tool_map = {tool.name: tool for tool in tools}
tool_calls = msg.additional_kwargs['tool_calls']
for tool_call in tool_calls:
print(tool_map[tool_call["function"]["name"]], 'tool_call["function"]["naasdme"as]')
tool_call["output"] = tool_map[tool_call["function"]["name"]].invoke(tool_call["function"]["arguments"])
print(tool_calls, ':tool_calls')
return tool_calls
chain = llm_with_tools | call_tools
chain.invoke("What is 3 * 12?")
if __name__ == "__main__":
    main()
```
I would also appreciate a brief explanation of how the code works, since the LangChain documentation explains very little.
Docs that I followed [https://python.langchain.com/docs/use_cases/tool_use/multiple_tools/](https://python.langchain.com/docs/use_cases/tool_use/multiple_tools/)
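My best guess at the cause, based on the traceback: `tool_call["function"]["arguments"]` is a JSON *string*, so `tool.invoke()` wraps the entire string under the first argument and `b` comes out missing. Parsing it first makes the call validate (sketch):

```python
import json

for tool_call in tool_calls:
    args = json.loads(tool_call["function"]["arguments"])  # '{"a": 3, "b": 12}' -> dict
    tool_call["output"] = tool_map[tool_call["function"]["name"]].invoke(args)
```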
### Idea or request for content:
The docs should also provide JSON examples and step-by-step code showing how the agent, chatbot, or functions work. | DOC: <Please write a comprehensive title after the 'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/20619/comments | 1 | 2024-04-18T18:48:04Z | 2024-07-31T16:07:30Z | https://github.com/langchain-ai/langchain/issues/20619 | 2,251,335,105 | 20,619 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
## Code:
```python
from langchain_mistralai import MistralAIEmbeddings
import assistant.settings as settings
def getMistralEmbeddings():
return MistralAIEmbeddings(mistral_api_key=settings.MISTRAL_API_KEY) #well defined variable from env, works on my personnal machine at the time i'm publishing the issue
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/app/assistant_api.py", line 37, in <module>
retriever = obtain_full_qdrant_tmdb()
File "/app/assistant/rag/retrievers/qdrant_connector.py", line 30, in obtain_full_qdrant_tmdb
embeddings = getMistralEmbeddings()
File "/app/assistant/rag/embeddings/mistral_embeddings.py", line 5, in getMistralEmbeddings
return MistralAIEmbeddings(mistral_api_key=settings.MISTRAL_API_KEY)
File "/usr/local/lib/python3.10/site-packages/pydantic/v1/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
File "/usr/local/lib/python3.10/site-packages/pydantic/v1/main.py", line 1100, in validate_model
values = validator(cls_, values)
File "/usr/local/lib/python3.10/site-packages/langchain_mistralai/embeddings.py", line 86, in validate_environment
values["tokenizer"] = Tokenizer.from_pretrained(
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 119, in _inner_fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1403, in hf_hub_download
raise head_call_error
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1261, in hf_hub_download
metadata = get_hf_file_metadata(
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 119, in _inner_fn
return fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1674, in get_hf_file_metadata
r = _request_wrapper(
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 369, in _request_wrapper
response = _request_wrapper(
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 393, in _request_wrapper
hf_raise_for_status(response)
File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 321, in hf_raise_for_status
raise GatedRepoError(message, response) from e
huggingface_hub.utils._errors.GatedRepoError: 401 Client Error. (Request ID: Root=1-662165b4-2224fae43a813b360dc7b222;20b14ba7-ef96-4d6a-8bef-1fa42c4f9291)
Cannot access gated repo for url https://huggingface.co/mistralai/Mixtral-8x7B-v0.1/resolve/main/tokenizer.json.
Repo model mistralai/Mixtral-8x7B-v0.1 is gated. You must be authenticated to access it.
Traceback (most recent call last):
### Description
This error and stack trace started occurring this afternoon when the service is deployed to a Kubernetes cluster.
It seems like a bug to me because I cannot reproduce the error on my personal machine, even after deleting the virtual environment and the __pycache__ folders and reinstalling everything from requirements.txt.
I know I should authenticate, but firstly, why, and secondly, how?
I came across solutions that put a Hugging Face token in the request header, but I don't know where to inject such a token when using langchain-mistralai.
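For anyone hitting the same thing: the embeddings class downloads the tokenizer of the gated `mistralai/Mixtral-8x7B-v0.1` repo through `huggingface_hub`, which picks up the standard Hugging Face token. A possible workaround (after accepting the model's terms on huggingface.co); the token value is a placeholder:

```bash
# either log in once on the machine / in the image
huggingface-cli login
# or provide the token via the environment (e.g. from a Kubernetes secret);
# older huggingface_hub versions read HUGGING_FACE_HUB_TOKEN instead
export HF_TOKEN=hf_xxxxxxxxxxxxxxxx
```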
### System Info
aiohttp>=3.9.3
aiosignal>=1.3.1
annotated-types>=0.6.0
anyio>=4.3.0
async-timeout>=4.0.3
attrs>=23.2.0
certifi>=2024.2.2
charset-normalizer>=3.3.2
click>=8.1.7
dataclasses-json>=0.6.4
exceptiongroup>=1.2.0
faiss-cpu>=1.8.0
fastapi>=0.110.1
filelock>=3.13.4
frozenlist>=1.4.1
fsspec>=2024.3.1
greenlet>=3.0.3
grpcio>=1.62.1
grpcio-tools>=1.62.1
h11>=0.14.0
h2>=4.1.0
hpack>=4.0.0
httpcore>=1.0.5
httpx>=0.25.2
httpx-sse>=0.4.0
huggingface-hub>=0.22.2
hyperframe>=6.0.1
idna>=3.6
Jinja2>=3.1.3
joblib>=1.4.0
jsonpatch>=1.33
jsonpointer>=2.4
langchain>=0.1.15
langchain-community>=0.0.32
langchain-core>=0.1.41
langchain-mistralai>=0.1.1
langchain-text-splitters>=0.0.1
langsmith>=0.1.43
MarkupSafe>=2.1.5
marshmallow>=3.21.1
mistralai>=0.1.8
mpmath>=1.3.0
multidict>=6.0.5
mypy-extensions>=1.0.0
networkx>=3.3
numpy>=1.26.4
nvidia-cublas-cu12>=12.1.3.1
nvidia-cuda-cupti-cu12>=12.1.105
nvidia-cuda-nvrtc-cu12>=12.1.105
nvidia-cuda-runtime-cu12>=12.1.105
nvidia-cudnn-cu12>=8.9.2.26
nvidia-cufft-cu12>=11.0.2.54
nvidia-curand-cu12>=10.3.2.106
nvidia-cusolver-cu12>=11.4.5.107
nvidia-cusparse-cu12>=12.1.0.106
nvidia-nccl-cu12>=2.19.3
nvidia-nvjitlink-cu12>=12.4.127
nvidia-nvtx-cu12>=12.1.105
orjson>=3.10.0
packaging>=23.2
pandas>=2.2.1
pillow>=10.3.0
portalocker>=2.8.2
protobuf>=4.25.3
pyarrow>=15.0.2
pydantic>=2.6.4
pydantic_core>=2.16.3
python-dateutil>=2.9.0.post0
python-dotenv>=1.0.1
pytz>=2024.1
PyYAML>=6.0.1
qdrant-client>=1.8.2
redis>=5.0.3
regex>=2023.12.25
requests>=2.31.0
safetensors>=0.4.2
scikit-learn>=1.4.2
scipy>=1.13.0
sentence-transformers>=2.6.1
six>=1.16.0
sniffio>=1.3.1
SQLAlchemy>=2.0.29
starlette>=0.37.2
sympy>=1.12
tenacity>=8.2.3
threadpoolctl>=3.4.0
tokenizers>=0.15.2
torch>=2.2.2
tqdm>=4.66.2
transformers>=4.39.3
triton>=2.2.0
typing-inspect>=0.9.0
typing_extensions>=4.11.0
tzdata>=2024.1
urllib3>=2.2.1
uvicorn>=0.29.0
yarl>=1.9.4
| langchain-mistralai cannot pull tokenizer from huggingface 401 | https://api.github.com/repos/langchain-ai/langchain/issues/20618/comments | 9 | 2024-04-18T18:46:40Z | 2024-07-19T12:47:28Z | https://github.com/langchain-ai/langchain/issues/20618 | 2,251,331,965 | 20,618 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="mistral-7b-instruct-v0.2.Q4_K_M.gguf",
    chat_format="llama-2",
    n_ctx=8192,
    n_threads=6,
    n_gpu_layers=-1,
    max_tokens=8192,
)

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.prompts.prompt import PromptTemplate
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_community.graphs import Neo4jGraph
from langchain.document_loaders import WikipediaLoader
from langchain.text_splitter import TokenTextSplitter
from langchain_experimental.graph_transformers import LLMGraphTransformer
from neo4j import GraphDatabase
from langchain_community.vectorstores import Neo4jVector
from langchain_community.vectorstores.neo4j_vector import remove_lucene_chars
from langchain_core.runnables import ConfigurableField, RunnableParallel, RunnablePassthrough

# . . .

text_splitter = TokenTextSplitter(chunk_size=512, chunk_overlap=24)
documents = text_splitter.split_documents(raw_documents[:3])
llm_transformer = LLMGraphTransformer(llm=llm)
graph_documents = llm_transformer.convert_to_graph_documents(documents)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
  File "1.py", line 68, in <module>
    llm_transformer = LLMGraphTransformer(llm=llm)
  File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_experimental\graph_transformers\llm.py", line 216, in __init__
    structured_llm = llm.with_structured_output(schema)
  File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\_api\beta_decorator.py", line 110, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "C:\Users\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\language_models\base.py", line 204, in with_structured_output
    raise NotImplementedError()
NotImplementedError
### Description
It seems that LLMGraphTransformer uses the `with_structured_output` method of the LLM, but the LlamaCpp model backend doesn't have this method implemented.
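As a possible workaround (untested, just a sketch of what I might try), the llm passed to `LLMGraphTransformer` could be a chat model that does implement `with_structured_output`, for example `ChatOpenAI` pointed at llama-cpp-python's OpenAI-compatible server; whether the local model then handles the tool-calling format well enough is a separate question:
```python
# Untested sketch: serve the local GGUF model through llama-cpp-python's
# OpenAI-compatible server, then hand LLMGraphTransformer a chat model that
# implements with_structured_output.
#   pip install "llama-cpp-python[server]"
#   python -m llama_cpp.server --model mistral-7b-instruct-v0.2.Q4_K_M.gguf --n_ctx 8192
from langchain_openai import ChatOpenAI
from langchain_experimental.graph_transformers import LLMGraphTransformer

chat_llm = ChatOpenAI(
    base_url="http://localhost:8000/v1",  # local llama.cpp server
    api_key="not-needed",                 # the local server ignores the key
    temperature=0,
)
llm_transformer = LLMGraphTransformer(llm=chat_llm)
```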
### System Info
Windows 11.
Python 3.11.9 | NotImplementedError for method with_structured_output than I use Local Model with LlamaCPP as suggested in docs. And pass it to LLMGraphTransformer. | https://api.github.com/repos/langchain-ai/langchain/issues/20606/comments | 3 | 2024-04-18T14:12:12Z | 2024-05-16T19:22:20Z | https://github.com/langchain-ai/langchain/issues/20606 | 2,250,806,377 | 20,606 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
import os

from langchain_community.vectorstores.azuresearch import AzureSearch
from langchain_openai import AzureOpenAIEmbeddings

openai_api_version = "2024-02-01"
embeddings = AzureOpenAIEmbeddings(
deployment=os.getenv('EMBEDDING_DEPLOYMENT_NAME'),
openai_api_version=openai_api_version,
)
index_name: str = "my-index-name"
fields = [...]
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=os.getenv('VECTOR_STORE_ADDRESS'),
azure_search_key=os.getenv('VECTOR_STORE_PASSWORD'),
index_name=index_name,
embedding_function=embeddings.embed_query,
fields=fields,
# needed for semantic ranking
semantic_configuration_name = 'my-config',
)
retriever = vector_store.as_retriever(search_kwargs={"k": 5}, search_type="semantic_hybrid")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to use "semantic_hybrid" as `search_type` but it is not supported, as `as_retriever` doesn't return an instance of the class `AzureSearchVectorStoreRetriever` but a `VectorStoreRetriever`.
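As a stopgap I am constructing the retriever class directly, which appears to work; a minimal sketch, assuming the constructor fields below are correct:
```python
# Stopgap sketch (constructor field names assumed from
# langchain_community.vectorstores.azuresearch): build the Azure-specific
# retriever directly instead of going through as_retriever().
from langchain_community.vectorstores.azuresearch import AzureSearchVectorStoreRetriever

retriever = AzureSearchVectorStoreRetriever(
    vectorstore=vector_store,
    search_type="semantic_hybrid",
    k=5,
)
```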
### System Info
- | `AzureSearch` vectorstore should be converted to `AzureSearchVectorStoreRetriever` when calling `as_retriever` | https://api.github.com/repos/langchain-ai/langchain/issues/20600/comments | 11 | 2024-04-18T12:21:14Z | 2024-05-15T15:51:00Z | https://github.com/langchain-ai/langchain/issues/20600 | 2,250,555,588 | 20,600 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import json
import uuid
from datetime import datetime, timezone
from langchain.chains import create_extraction_chain
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_mistralai.chat_models import ChatMistralAI
class Joke(BaseModel):
setup: str = Field(description="The setup of the joke")
token = '...'
str = "Why did the hipster burn his mouth? He drank the coffee before it was cool."
llm = ChatMistralAI(
endpoint='https://id-serverless.francecentral.inference.ai.azure.com/v1/',
mistral_api_key=token,
)
structured_llm = llm.with_structured_output(Joke)
result = structured_llm.invoke(str)
print(result)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/home/proj/main.py", line 24, in <module>
result = structured_llm.invoke(str)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2499, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4511, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 158, in invoke
self.generate_prompt(
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 560, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 421, in generate
raise e
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 411, in generate
self._generate_with_cache(
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 632, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_mistralai/chat_models.py", line 469, in _generate
return self._create_chat_result(response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_mistralai/chat_models.py", line 476, in _create_chat_result
message=_convert_mistral_chat_message_to_message(res["message"]),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_mistralai/chat_models.py", line 125, in _convert_mistral_chat_message_to_message
return AIMessage(
^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/messages/base.py", line 47, in __init__
return super().__init__(content=content, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/proj/.venv/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 120, in __init__
super().__init__(**kwargs)
File "/home/proj/.venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for AIMessage
invalid_tool_calls -> 0 -> args
str type expected (type=type_error.str)
### Description
I'm trying to use the structured output with Azure's Mistral Endpoint.
Going through the backtrace, the call to `AIMessage()` has these parameters:
```
{'tool_calls': [{'function': {'arguments': {'setup': 'Why did the hipster burn his mouth?'}, 'call_id': None, 'name': 'Joke'}, 'id': 'call_Joke_0', 'type': 'function'}]}
[]
[{'name': 'Joke', 'args': {'setup': 'Why did the hipster burn his mouth?'}, 'id': 'call_Joke_0', 'error': 'the JSON object must be str, bytes or bytearray, not dict'}]
```
The `invalid_tool_calls` entry is due to the exception `the JSON object must be str, bytes or bytearray, not dict`, raised when parsing the function arguments.
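The failure is easy to reproduce in isolation: the Azure endpoint appears to return the tool-call arguments already deserialized as a dict, while the parser expects the raw JSON string, so `json.loads` rejects it:
```python
# Minimal reproduction of the underlying parse failure.
import json

json.loads('{"setup": "Why did the hipster burn his mouth?"}')  # OK: input is a JSON string
json.loads({"setup": "Why did the hipster burn his mouth?"})    # TypeError: the JSON object must be str, bytes or bytearray, not dict
```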
### System Info
System Information
------------------
> OS: Linux
> OS Version: #28~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Mar 15 10:51:06 UTC 2
> Python Version: 3.11.3 (main, Jul 27 2023, 10:19:30) [GCC 11.3.0]
Package Information
-------------------
> langchain_core: 0.1.44
> langchain: 0.1.16
> langchain_community: 0.0.33
> langsmith: 0.1.48
> langchain_mistralai: 0.1.2
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| ChatMistralAI on Azure with_structured_output error when parsing function arguments: the JSON object must be str, bytes or bytearray, not dict | https://api.github.com/repos/langchain-ai/langchain/issues/20596/comments | 0 | 2024-04-18T10:53:39Z | 2024-07-25T16:09:08Z | https://github.com/langchain-ai/langchain/issues/20596 | 2,250,381,910 | 20,596 |
[
"hwchase17",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
https://python.langchain.com/docs/use_cases/sql/agents/
The copy mechanism for code does not seem to be working.
### Idea or request for content:
When I click on the copy button, the code should be copied. | DOC: Code Copy is not working inside of sql/agents in the python page | https://api.github.com/repos/langchain-ai/langchain/issues/20584/comments | 2 | 2024-04-18T05:27:55Z | 2024-04-19T12:32:14Z | https://github.com/langchain-ai/langchain/issues/20584 | 2,249,782,137 | 20,584 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
This is the code I am trying to run:
```python
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.prompts.example_selector import (
MaxMarginalRelevanceExampleSelector,
SemanticSimilarityExampleSelector,
)
SemanticSimilarityExampleSelector.from_examples(
query_examples,
OpenAIEmbeddings(),
FAISS,
k=5,
input_keys=["input"],
)
```
### Error Message and Stack Trace (if applicable)
Below is the full error:
example_selector = SemanticSimilarityExampleSelector.from_examples(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/8dc5f12176cc20a/antenv/lib/python3.11/site-packages/langchain_core/example_selectors/semantic_similarity.py", line 133, in from_examples
vectorstore = vectorstore_cls.from_texts(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/8dc5f12176cc20a/antenv/lib/python3.11/site-packages/langchain_community/vectorstores/faiss.py", line 930, in from_texts
embeddings = embedding.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/3.11.7/lib/python3.11/site-packages/langchain_openai/embeddings/base.py", line 517, in embed_documents
return self._get_len_safe_embeddings(texts, engine=engine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/3.11.7/lib/python3.11/site-packages/langchain_openai/embeddings/base.py", line 300, in _get_len_safe_embeddings
encoding = tiktoken.encoding_for_model(model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/8dc5f12176cc20a/antenv/lib/python3.11/site-packages/tiktoken/model.py", line 75, in encoding_for_model
File "/tmp/8dc5f12176cc20a/antenv/lib/python3.11/site-packages/tiktoken/registry.py", line 60, in get_encoding
ValueError: Unknown encoding cl100k_base
### Description
I have an issue when trying to use SemanticSimilarityExampleSelector in an app hosted on Azure App Services. It works when running locally, but once deployed I get the following error:
Unknown encoding cl100k_base.
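One thing I notice in the stack trace is that the imports come from two different site-packages (`/opt/python/3.11.7/...` for `langchain_openai`, the App Service `antenv` for `tiktoken` and `langchain_core`), so my current guess, which I have not verified, is that the `tiktoken_ext` plugin that registers `cl100k_base` is not being discovered in the deployed environment. A small diagnostic I plan to run on the App Service:
```python
# Diagnostic sketch only; the hypothesis that the encoding plugin is not being
# discovered in the deployed environment is an assumption, not a confirmed cause.
import langchain_openai
import tiktoken
import tiktoken_ext

print(langchain_openai.__file__)        # which site-packages is this loaded from?
print(tiktoken.__file__)                # ...and this?
print(list(tiktoken_ext.__path__))      # namespace paths searched for encoding plugins
print(tiktoken.list_encoding_names())   # should contain "cl100k_base"
```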
### System Info
python version: 3.11
platform: linux
aiohttp==3.8.5
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.3.0
asn1crypto==1.5.1
async-timeout==4.0.3
attrs==23.2.0
azure-ai-contentsafety==1.0.0
azure-core==1.30.1
banal==1.0.6
beartype==0.18.2
beautifulsoup4==4.12.3
binaryornot==0.4.4
boolean.py==4.0
botbuilder-core==4.14.8
botbuilder-integration-aiohttp==4.14.8
botbuilder-schema==4.14.8
botframework-connector==4.14.8
botframework-streaming==4.14.8
certifi==2024.2.2
cffi==1.16.0
chardet==5.2.0
charset-normalizer==3.3.2
click==8.1.7
colorama==0.4.6
commoncode==31.0.3
container-inspector==32.0.1
cryptography==42.0.5
dataclasses-json==0.6.4
debian_inspector==31.1.0
distro==1.9.0
dockerfile-parse==2.0.1
dparse2==0.7.0
extractcode==31.0.0
extractcode-7z==16.5.210531
extractcode-libarchive==3.5.1.210531
faiss-cpu==1.8.0
fasteners==0.19
filelock==3.13.3
fingerprints==1.2.3
frozenlist==1.4.1
ftfy==6.2.0
gemfileparser2==0.9.3
greenlet==3.0.3
h11==0.14.0
html5lib==1.1
httpcore==1.0.4
httpx==0.27.0
idna==3.6
importlib_metadata==7.1.0
intbitset==3.1.0
isodate==0.6.1
jaraco.functools==4.0.0
javaproperties==0.8.1
Jinja2==3.1.3
jsonpatch==1.33
jsonpickle==1.4.2
jsonpointer==2.4
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
jsonstreams==0.6.0
langchain==0.1.13
langchain-community==0.0.29
langchain-core==0.1.40
langchain-openai==0.1.1
langchain-text-splitters==0.0.1
langsmith==0.1.40
license-expression==30.3.0
lxml==5.2.1
MarkupSafe==2.1.5
marshmallow==3.21.1
more-itertools==10.2.0
msal==1.27.0
msrest==0.7.1
multidict==6.0.5
mypy-extensions==1.0.0
normality==2.5.0
numpy==1.26.4
oauthlib==3.2.2
openai==1.13.3
orjson==3.10.0
packageurl-python==0.15.0
packaging==23.2
packvers==21.5
pandas==2.2.1
parameter-expansion-patched==0.3.1
pdfminer.six==20231228
pefile==2023.2.7
pip-requirements-parser==32.0.1
pkginfo2==30.0.0
platformdirs==3.11.0
pluggy==1.4.0
plugincode==32.0.0
ply==3.11
publicsuffix2==2.20191221
pyahocorasick==2.1.0
pyarrow==15.0.2
pycparser==2.21
pydantic==2.6.3
pydantic_core==2.16.3
pygmars==0.8.0
Pygments==2.17.2
PyJWT==2.8.0
pymaven-patch==0.3.2
pyOpenSSL==24.1.0
pyparsing==3.1.2
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pytz==2024.1
PyYAML==6.0.1
rdflib==7.0.0
referencing==0.33.0
regex==2023.12.25
requests==2.31.0
requests-oauthlib==1.3.1
rpds-py==0.18.0
saneyaml==0.6.0
scancode-toolkit==32.1.0
semantic-version==2.10.0
six==1.16.0
sniffio==1.3.1
snowflake-connector-python==3.7.1
snowflake-sqlalchemy==1.5.1
sortedcontainers==2.4.0
soupsieve==2.5
spdx-tools==0.8.2
SQLAlchemy==1.4.52
teams-ai==1.0.0
tenacity==8.2.3
text-unidecode==1.3
tiktoken==0.6.0
toml==0.10.2
tomlkit==0.12.4
tqdm==4.66.2
typecode==30.0.1
typecode-libmagic==5.39.210531
types-PyYAML==6.0.12.12
typing-inspect==0.9.0
typing_extensions==4.10.0
tzdata==2024.1
Unidecode==1.3.8
uritools==4.0.2
urllib3==1.26.18
urlpy==0.5
wcwidth==0.2.13
webencodings==0.5.1
xmltodict==0.13.0
yarl==1.9.4
zipp==3.18.1
| Dynamic Few Shot Prompt - SemanticSimilarityExampleSelector - Unknown encoding cl100k_base | https://api.github.com/repos/langchain-ai/langchain/issues/20567/comments | 2 | 2024-04-17T20:22:14Z | 2024-07-29T16:07:43Z | https://github.com/langchain-ai/langchain/issues/20567 | 2,249,208,659 | 20,567 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
In the future I want to replace the pre-trained model with a fine-tuned model.
### Example Code
```
import re
import os
import requests
from PIL import Image
import gradio as gr
from langchain_groq import ChatGroq
from langchain.prompts import PromptTemplate
from langchain.chains import ConversationChain
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_community.utilities import WikipediaAPIWrapper
from langchain.agents import Tool, initialize_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains import LLMChain
from langchain_community.llms import HuggingFaceHub
from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE
from langchain_community.llms import HuggingFaceEndpoint
#auth_token = os.environ.get("HUGGINGFACEHUB_API_TOKEN")
from google.colab import userdata
HUGGINGFACE_TOKEN=userdata.get('HUGGINGFACE_TOKEN')
from transformers import AutoModelForCausalLM, AutoTokenizer,pipeline
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain import HuggingFaceHub
import warnings
warnings.filterwarnings("ignore")
from transformers import pipeline
import torch
llm = HuggingFacePipeline.from_model_id(
model_id="mistralai/Mistral-7B-v0.1",
task="text-generation",
pipeline_kwargs={"max_new_tokens": 1000},
)
wikipedia = WikipediaAPIWrapper()
search = DuckDuckGoSearchRun()
wikipedia_tool = Tool(
name='wikipedia',
func= wikipedia.run,
description="This tool leverages Wikipedia to gather information about Ingredients name ,description of the dish , Allergens , additional information . It's particularly useful for obtaining detailed and reliable information on various topics"
)
duckduckgo_tool = Tool(
name='DuckDuckGo Search',
func= search.run,
description="Useful for when you need to do a search on the internet to find information that another tool can't find. Always be specific with your input."
)
tools = [
Tool(
name = "DuckDuckGo Search",
func=duckduckgo_tool.run,
description="useful for when you need answer questions from internet"
)
]
tools.append(wikipedia_tool)
zero_shot_agent = initialize_agent(
agent="zero-shot-react-description",
tools=tools,
llm=llm,
verbose=True,
handle_parsing_errors=True,
max_iterations=10,
)
def menu_prompt(title):
prompt_menu = f'''
As a restaurant menu manager, your role is to gather below informations based on input data {title} (Name of the dish).
generate the output
### information to be extracted :
<Ingredients>: Only Ingredients included in the dish.
<Description>: Briefly describe the dish.
<Allergens>: Only Choose relevant options from this list - [Cereals, Crustaceans, Egg, Fish, Peanuts, SOYBEAN, Latte, Nuts, Celery, Mustard, Sesame seeds, Sulfur dioxide and sulphites, Shell, Clams].
<Additional Information>: Only Choose relevant options from this list - [Spicy, Vegan, Gluten free, Vegetarian].
### Output Format
"""
"ingredients": All Ingredients in a List,
"description": Description in a string,
"allergen": All allergen in a List,
"Additional_information": All Additional_information in a List
"""
### Input data:
{title}
### Output:
'''
return prompt_menu
def get_router(title):
prompt_menu=menu_prompt(title)
prompt_infos = [
{
"name": "Menu Manager",
"description": "Good for answering questions about Italian Dish[ingredients,description,allergens,additional_information]",
"prompt_template": prompt_menu,
}
]
# map destination chains
destination_chains = {}
for prompt_info in prompt_infos:
name = prompt_info["name"]
prompt_template = prompt_info["prompt_template"]
prompt = PromptTemplate(template=prompt_template, input_variables=["input"])
print("prompt: ", prompt)
chain = LLMChain(llm=llm, prompt=prompt)
destination_chains[name] = chain
default_chain = ConversationChain(llm=llm)
# Creating LLMRouterChain
destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str)
router_prompt = PromptTemplate(
template=router_template,
input_variables=["input"],
output_parser=RouterOutputParser(),
)
# creating the router chain
router_chain = LLMRouterChain.from_llm(llm, router_prompt)
# Multiple Prompt Chain
chain = MultiPromptChain(
router_chain=router_chain,
destination_chains=destination_chains,
default_chain=default_chain,
verbose=True,
)
# Get response from the agent
response = chain.run(title)
return response
response=get_router("Pizza Margherita")
response
```
```
MULTI_PROMPT_ROUTER_TEMPLATE = """Given a raw text input to a \
language model select the model prompt best suited for the input. \
You will be given the names of the available prompts and a \
description of what the prompt is best suited for. \
You may also revise the original input if you think that revising\
it will ultimately lead to a better response from the language model.
<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
"destination": string \ name of the prompt to use or "DEFAULT"
"next_inputs": string \ a potentially modified version of the original input
}}}}
```
REMEMBER: "destination" MUST be one of the candidate prompt \
names specified below OR it can be "DEFAULT" if the input is not\
well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input \
if you don't think any modifications are needed.
<< CANDIDATE PROMPTS >>
{destinations}
<< INPUT >>
{{input}}
<< OUTPUT (remember to include the ```json and ```)>>"""
```
### Error Message and Stack Trace (if applicable)
> Entering new MultiPromptChain chain...
---------------------------------------------------------------------------
OutputParserException Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/langchain/chains/router/llm_router.py](https://localhost:8080/#) in parse(self, text)
98 expected_keys = ["destination", "next_inputs"]
---> 99 parsed = parse_and_check_json_markdown(text, expected_keys)
100 if not isinstance(parsed["destination"], str):
16 frames
OutputParserException: Got invalid return object. Expected key `destination` to be present, but got {}
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/langchain/chains/router/llm_router.py](https://localhost:8080/#) in parse(self, text)
114 return parsed
115 except Exception as e:
--> 116 raise OutputParserException(
117 f"Parsing text\n{text}\n raised following error:\n{e}"
118 )
OutputParserException: Parsing text
Given a raw text input to a language model select the model prompt best suited for the input. You will be given the names of the available prompts and a description of what the prompt is best suited for. You may also revise the original input if you think that revisingit will ultimately lead to a better response from the language model.
<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{
"destination": string \ name of the prompt to use or "DEFAULT"
"next_inputs": string \ a potentially modified version of the original input
}
```
REMEMBER: "destination" MUST be one of the candidate prompt names specified below OR it can be "DEFAULT" if the input is notwell suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input if you don't think any modifications are needed.
<< CANDIDATE PROMPTS >>
Menu Manager: Good for answering questions about Italian Dish[ingredients,description,allergens,additional_information]
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "Menu Manager",
"next_inputs": "Pizza Margherita"
}
<< INPUT >>
Pizza Margherita
<< OUTPUT (remember to include the ```json and ```)>>
{
"destination": "DEFAULT",
"next_inputs": "Pizza Margh
raised following error:
Got invalid return object. Expected key `destination` to be present, but got {}
### Description
OutputParserException: Got invalid return object. Expected key `destination` to be present, but got {}
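What I can see in the parsed text above is that the model output contains the router prompt echoed back plus many repeated `<< INPUT >> / << OUTPUT >>` blocks, so the parser never gets a single clean JSON object. A sketch of what I plan to try next (not a confirmed fix): have the pipeline return only the newly generated text and keep generations short.
```python
# Sketch of a mitigation attempt, assuming the prompt echo and repetition are the
# problem: return only the continuation and cap the number of new tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=256,
    return_full_text=False,  # do not echo the router prompt back into the output
)
llm = HuggingFacePipeline(pipeline=pipe)
```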
### System Info
!pip install langchain openai tiktoken transformers accelerate cohere gradio langchain_groq wikipedia duckduckgo-search bitsandbytes accelerate transformers --quiet
!pip install transformers==4.34.0
!pip install datasets==2.16.0
!pip install --upgrade langchain
!pip install bitsandbytes
!pip install -U peft
!pip install accelerate
!pip install -U trl
!pip install wandb
!pip install vllm
!pip install langchain transformers | Issue with HuggingFace pipeline with RouterOutputParser OutputParserException: Got invalid return object. Expected key `destination` to be present, but got {} | https://api.github.com/repos/langchain-ai/langchain/issues/20563/comments | 0 | 2024-04-17T18:12:29Z | 2024-07-24T16:08:42Z | https://github.com/langchain-ai/langchain/issues/20563 | 2,248,929,911 | 20,563 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.vectorstores import OpenSearchVectorSearch
from langchain_community.document_loaders import TextLoader
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_community.document_loaders import DirectoryLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.documents import Document
from langchain.chains.query_constructor.base import AttributeInfo
import torch
embeddings = HuggingFaceEmbeddings()
docs = [
Document(
page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
),
Document(
page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2},
),
Document(
page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6},
),
Document(
page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them",
metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3},
),
Document(
page_content="Toys come alive and have a blast doing so",
metadata={"year": 1995, "genre": "animated"},
),
Document(
page_content="Three men walk into the Zone, three men walk out of the Zone",
metadata={
"year": 1979,
"rating": 9.9,
"director": "Andrei Tarkovsky",
"genre": "science fiction",
},
),
]

vectorstore = OpenSearchVectorSearch.from_documents(
docs,
embeddings,
index_name="opensearch-self-query-demo",
opensearch_url="https://admin:admin@localhost:9200", use_ssl=False, verify_certs=False
)

model_id = "lmsys/vicuna-13b-v1.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=200, device_map="auto", torch_dtype=torch.float16)
llm = HuggingFacePipeline(pipeline=pipe)

metadata_field_info = [
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating", description="A 1-10 rating for the movie", type="float"
),
]
document_content_description = "Brief summary of a movie"

retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)

pol = retriever.get_relevant_documents("What are some movies about dinosaurs")
print(pol)
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 175, in parse_and_check_json_markdown
json_obj = parse_json_markdown(text)
File "/usr/local/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 157, in parse_json_markdown
parsed = parser(json_str)
File "/usr/local/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 125, in parse_partial_json
return json.loads(s, strict=strict)
File "/usr/local/lib/python3.10/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
File "/usr/local/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 2 column 14 (char 15)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/langchain/chains/query_constructor/base.py", line 50, in parse
parsed = parse_and_check_json_markdown(text, expected_keys)
File "/usr/local/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 177, in parse_and_check_json_markdown
raise OutputParserException(f"Got invalid JSON object. Error: {e}")
langchain_core.exceptions.OutputParserException: Got invalid JSON object. Error: Expecting value: line 2 column 14 (char 15)
### Description
I am following this documentation:
https://python.langchain.com/docs/integrations/retrievers/self_query/opensearch_self_query/
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Wed Jan 10 22:58:54 UTC 2024
> Python Version: 3.10.7 (main, Feb 29 2024, 10:06:00) [GCC 8.5.0 20210514 (Red Hat 8.5.0-20)]
Package Information
-------------------
> langchain_core: 0.1.33
> langchain: 0.1.13
> langchain_community: 0.0.29
> langsmith: 0.1.31
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | SelfQueryRetriever with an OpenSearch vector store doesn't work. | https://api.github.com/repos/langchain-ai/langchain/issues/20562/comments | 2 | 2024-04-17T17:08:59Z | 2024-07-31T16:07:25Z | https://github.com/langchain-ai/langchain/issues/20562 | 2,248,784,865 | 20,562 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code
```
from langchain_community.llms import MLXPipeline
from mlx_lm import load

model, tokenizer = load("mlx-community/dolphin-2.8-mistral-7b-v02")
llm = MLXPipeline(
model=model,
tokenizer=tokenizer,
pipeline_kwargs={"temp":0.7, "max_tokens":10}
)
```
on the following prompt
```
Collect and summarize recent news articles, press releases, and market analyses related to the company. Pay special attention to any significant events, market, sentiments, and analysts' opinions.
Your final answer MUST be a report that includes a comprehensive identification of key points, marketing oriented, following the Marketing 5Ps (Product, Place, Price, Promotion, People)
If you do your BEST WORK, I will give you a $10,000 commission!
Make sure to use the most recent data as possible.
Selected company by the customer is Tesla
```
leads to an error during execution.
### Error Message and Stack Trace (if applicable)
```
File "/opt/homebrew/lib/python3.10/site-packages/langchain_community/llms/mlx_pipeline.py", line 189, in _stream
text = self.tokenizer.decode(token.item())
AttributeError: 'int' object has no attribute 'item'
```
### Description
Hi
* I am trying to use LangChain to run an MLX model (see the code above) on a given prompt.
* I hit the error shown in the error section: `AttributeError: 'int' object has no attribute 'item'`
Removing the `.item()` call on line 182 unblocks the issue; however, I then get nothing as a result.
So my idea is not correct.
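For what it is worth, the shape of the fix I would expect in `_stream` is something like the sketch below; it only addresses the type error and may not explain the empty output I saw after my own edit:
```python
# Hypothetical patch sketch for langchain_community/llms/mlx_pipeline.py:
# the generation stream can yield plain Python ints as well as array-like values,
# so only call .item() when the token is not already an int.
token_id = token if isinstance(token, int) else token.item()
text = self.tokenizer.decode(token_id)
```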
The file `libs/community/langchain_community/llms/mlx_pipeline.py` was only added last week, so it is very new.
Could you take a look @Blaizzy ?
Thank you
### System Info
here is the version I use:
Python 3.10
```
pip freeze | grep langchain
langchain==0.1.12
langchain-community==0.0.33
langchain-core==0.1.43
langchain-openai==0.0.5
langchain-text-splitters==0.0.1
```
| Mistype issue using MLX model via MLXPipeline | https://api.github.com/repos/langchain-ai/langchain/issues/20561/comments | 17 | 2024-04-17T16:31:11Z | 2024-05-21T00:17:10Z | https://github.com/langchain-ai/langchain/issues/20561 | 2,248,722,667 | 20,561 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.vectorstores.azuresearch import AzureSearch
from azure.search.documents.indexes.models import (
FreshnessScoringFunction,
FreshnessScoringParameters,
ScoringProfile,
SearchableField,
SearchField,
SearchFieldDataType,
SimpleField,
TextWeights,
SemanticConfiguration,
SemanticPrioritizedFields,
SemanticField
)
fields = [
SimpleField(
name="id",
type=SearchFieldDataType.String,
key=True,
filterable=True,
),
SearchableField(
name="header1",
type=SearchFieldDataType.String,
searchable=True,
),
SearchableField(
name="header2",
type=SearchFieldDataType.String,
searchable=True,
),
SearchableField(
name="header3",
type=SearchFieldDataType.String,
searchable=True,
),
SearchableField(
name="content",
type=SearchFieldDataType.String,
searchable=True,
),
SearchField(
name="content_vector",
type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True,
vector_search_dimensions=len(aoai_embeddings.embed_query("Text")),
vector_search_profile_name="myExhaustiveKnnProfile",
),
SearchableField(
name="metadata",
type=SearchFieldDataType.String,
searchable=True,
),
]
index_name: str = vector_store_index
# Adding a custom scoring profile with a freshness function
sc_name = "csrd_scoring_profile"
sc = ScoringProfile(
name=sc_name,
text_weights=TextWeights(weights={
"header1": 10,
"header2": 9,
"content": 8,
"content_vector": 8
}),
function_aggregation="sum"
)
semantic_configuration_name = 'my_semantic_configuration'
semantic_config = SemanticConfiguration(
name=semantic_configuration_name,
prioritized_fields=SemanticPrioritizedFields(
title_field=SemanticField(field_name='header2'),
content_fields=[SemanticField(field_name='content')],
keywords_fields=None,
)
)
vector_store: AzureSearch = AzureSearch(
search_type='semantic_hybrid',
scoring_profiles=[sc],
default_scoring_profile=sc_name,
semantic_configurations=[semantic_config],
semantic_configuration_name=semantic_configuration_name,
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=aoai_embeddings.embed_query,
fields=fields,
)
```
### Error Message and Stack Trace (if applicable)
There is no error, but the semantic configuration is not created for the index.
### Description
The semantic configuration is not created for an Azure AI Search index through langchain-community if both the semantic configuration name and the semantic configurations are provided.
When I checked AzureSearch.py, I found the snippet below, which creates the semantic configuration.
```
# Create the semantic settings with the configuration
semantic_search = None
if semantic_configurations is None and semantic_configuration_name is not None:
semantic_configuration = SemanticConfiguration(
name=semantic_configuration_name,
prioritized_fields=SemanticPrioritizedFields(
content_fields=[SemanticField(field_name=FIELDS_CONTENT)],
),
)
semantic_search = SemanticSearch(configurations=[semantic_configuration])
# Create the search index with the semantic settings and vector search
index = SearchIndex(
name=index_name,
fields=fields,
vector_search=vector_search,
semantic_search=semantic_search,
scoring_profiles=scoring_profiles,
default_scoring_profile=default_scoring_profile,
cors_options=cors_options,
)
index_client.create_index(index)
```
If you look closely, it only creates the semantic config when `semantic_configurations` is None and `semantic_configuration_name` is not None; there is no branch for the case where both the configurations and the configuration name are provided.
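In other words, what I would have expected (just a sketch of the intended behaviour, not a tested patch) is an explicit branch for user-supplied configurations:
```python
# Sketch: honour user-provided configurations instead of silently dropping them.
if semantic_configurations is not None:
    semantic_search = SemanticSearch(configurations=semantic_configurations)
elif semantic_configuration_name is not None:
    semantic_configuration = SemanticConfiguration(
        name=semantic_configuration_name,
        prioritized_fields=SemanticPrioritizedFields(
            content_fields=[SemanticField(field_name=FIELDS_CONTENT)],
        ),
    )
    semantic_search = SemanticSearch(configurations=[semantic_configuration])
```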
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.20348
> Python Version: 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.27
> langchain: 0.1.8
> langchain_community: 0.0.24
> langsmith: 0.1.10
> langchain_openai: 0.0.8
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Semantic configuration is not created for Azure AI Search index using Langchain community. | https://api.github.com/repos/langchain-ai/langchain/issues/20549/comments | 1 | 2024-04-17T10:59:07Z | 2024-07-25T16:08:53Z | https://github.com/langchain-ai/langchain/issues/20549 | 2,248,005,081 | 20,549 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from typing import Literal

from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import Chroma
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import AzureChatOpenAI
from langchain_text_splitters import RecursiveCharacterTextSplitter

# embeddings: an AzureOpenAIEmbeddings instance created elsewhere

# Docs to index
urls = [
"https://lilianweng.github.io/posts/2023-06-23-agent/",
"https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/",
"https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/",
]
# Load
docs = [WebBaseLoader(url).load() for url in urls]
docs_list = [item for sublist in docs for item in sublist]
# Split
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
chunk_size=500, chunk_overlap=0
)
doc_splits = text_splitter.split_documents(docs_list)
# Add to vectorstore
vectorstore = Chroma.from_documents(
documents=doc_splits,
collection_name="rag-chroma",
embedding=embeddings,
)
retriever = vectorstore.as_retriever()
# Data model
class RouteQuery(BaseModel):
"""Route a user query to the most relevant datasource."""
datasource: Literal["vectorstore", "web_search"] = Field(
...,
description="Given a user question choose to route it to web search or a vectorstore.",
)
# LLM with function call
llm = AzureChatOpenAI(azure_deployment='chatgpt3', model="gpt-3.5-turbo-0125", temperature=0)
structured_llm_router = llm.with_structured_output(RouteQuery)
# Prompt
system = """You are an expert at routing a user question to a vectorstore or web search.
The vectorstore contains documents related to agents, prompt engineering, and adversarial attacks.
Use the vectorstore for questions on these topics. Otherwise, use web-search."""
route_prompt = ChatPromptTemplate.from_messages(
[
("system", system),
("human", "{question}"),
]
)
question_router = route_prompt | structured_llm_router
print(question_router.invoke({"question": "Who will the Bears draft first in the NFL draft?"}))
print(question_router.invoke({"question": "What are the types of agent memory?"}))
### Error Message and Stack Trace (if applicable)
C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\_api\beta_decorator.py:87: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change.
warn_beta(
Traceback (most recent call last):
File "C:\Users\prakotian\Desktop\Projects\GenAI Projects\AdaptiveRAG\router.py", line 87, in <module>
print(question_router.invoke({"question": "Who will the Bears draft first in the NFL draft?"}))
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 2499, in invoke
input = step.invoke(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\base.py", line 169, in invoke
return self._call_with_config(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 1625, in _call_with_config
context.run(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\base.py", line 170, in <lambda>
lambda inner_input: self.parse_result(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\openai_tools.py", line 182, in parse_result
json_results = super().parse_result(result, partial=partial)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\openai_tools.py", line 129, in parse_result
tool_calls = parse_tool_calls(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\openai_tools.py", line 85, in parse_tool_calls
raise OutputParserException("\n\n".join(exceptions))
langchain_core.exceptions.OutputParserException: Function RouteQuery arguments:
{
datasource: "web_search"
}
are not valid JSON. Received JSONDecodeError Expecting property name enclosed in double quotes: line 2 column 3 (char 4)
### Description
Expected Output is {
datasource: "web_search"
}
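For clarity, the arguments string is rejected only because the property name is not quoted; the same payload parses once the key is valid JSON:
```python
import json

json.loads('{\n  datasource: "web_search"\n}')      # JSONDecodeError: Expecting property name enclosed in double quotes
json.loads('{\n  "datasource": "web_search"\n}')    # {'datasource': 'web_search'}
```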
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.43
> langchain: 0.1.16
> langchain_community: 0.0.33
> langsmith: 0.1.31
> langchain_cohere: 0.1.2
> langchain_experimental: 0.0.54
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
> langgraph: 0.0.37 | AdaptiveRAG implementation does'nt work with AzureOpenAI(llm.`with_structured_output`) Error | https://api.github.com/repos/langchain-ai/langchain/issues/20548/comments | 2 | 2024-04-17T09:07:39Z | 2024-08-01T16:06:24Z | https://github.com/langchain-ai/langchain/issues/20548 | 2,247,776,504 | 20,548 |
[
"hwchase17",
"langchain"
] | > Looks like I've imported differently. It's type is supposed to say `langchain_community.graphs.networkx_graph.NetworkxEntityGraph`. It is working now!!
I am running into the same issue; how did you solve it?
My object is of type `langchain.graphs.network_graph.NetworkxEntityGraph`.
_Originally posted by @nikhitaKanoj in https://github.com/langchain-ai/langchain/issues/15046#issuecomment-2060214876_
| > Looks like I've imported differently. It's type is supposed to say `langchain_community.graphs.networkx_graph.NetworkxEntityGraph`. It is working now!! | https://api.github.com/repos/langchain-ai/langchain/issues/20541/comments | 1 | 2024-04-17T03:15:00Z | 2024-04-18T21:09:01Z | https://github.com/langchain-ai/langchain/issues/20541 | 2,247,270,511 | 20,541 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import AgentType, initialize_agent
from langchain_anthropic import ChatAnthropic

ExtractData = {
    "name": "ExtractData",
    "description": "ExtractData",
    "input_schema": {
        "type": "object",
        "description": "schema components for getting data",
        "properties": data_schema,
        "required": ["x", "y"],
    },
}

llm = ChatAnthropic(model=MODEL_NAME, verbose=True)
llm_with_tools = llm.bind_tools([ExtractData])  # OR llm.with_structured_output(ExtractData)

agent = initialize_agent(
    [OtherTool],
    llm_with_tools,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
)
agent.invoke(query)
```
### Error Message and Stack Trace (if applicable)
Observation: ExtractData is not a valid tool, try one of [OtherTool].
Thought:Apologies, let me try this again using the valid tools:
### Description
I am attempting to get structured output from an agent. While the above code DOES work, I always get this error message when the agent tries to look up the LLM-bound tool among the agent's own tools. (It nevertheless returns the right output in the end, but it sometimes loops through the same operation a few times first.)
The correct behavior should be that the agent does NOT look for llm tools in its own tools, because the input it is trying to feed to the tool is ALREADY the correctly formatted input.
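A workaround I am considering (untested sketch, and possibly not the intended pattern) is to expose the extraction schema to the agent as a real tool, so that "ExtractData" is a valid action rather than only an LLM-bound function:
```python
# Untested sketch: register the extraction as an actual agent tool. The field
# names x and y mirror the schema above; the function just returns them.
from langchain.agents import AgentType, initialize_agent
from langchain.tools import StructuredTool


def extract_data(x: str, y: str) -> dict:
    """Return the extracted fields unchanged."""
    return {"x": x, "y": y}


extract_tool = StructuredTool.from_function(
    func=extract_data,
    name="ExtractData",
    description="Record the extracted fields as structured data.",
)

agent = initialize_agent(
    [OtherTool, extract_tool],
    llm,  # plain ChatAnthropic, without bind_tools
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
)
```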
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Jan 11 04:09:03 UTC 2024
> Python Version: 3.12.3 (main, Apr 14 2024, 13:07:33) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.42
> langchain: 0.1.16
> langchain_community: 0.0.32
> langsmith: 0.1.47
> langchain_anthropic: 0.1.8
> langchain_chroma: 0.1.0
> langchain_openai: 0.0.5
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15 | Problem when using ChatAnthropic bind_tools/with_structured_output with an agent: "x is not a valid tool". | https://api.github.com/repos/langchain-ai/langchain/issues/20530/comments | 2 | 2024-04-16T19:13:27Z | 2024-04-17T01:52:19Z | https://github.com/langchain-ai/langchain/issues/20530 | 2,246,724,713 | 20,530 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
A number of models provide token usage stats as part of the generation response. Should we provide a standardized interface to these stats? It would unblock downstream usage in e.g. tracers.
RFC proposal: #20522 | Standardized token usage information | https://api.github.com/repos/langchain-ai/langchain/issues/20524/comments | 6 | 2024-04-16T17:27:01Z | 2024-07-29T16:15:43Z | https://github.com/langchain-ai/langchain/issues/20524 | 2,246,555,280 | 20,524 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_mistralai.embeddings import MistralAIEmbeddings

embeddings = MistralAIEmbeddings()
pass_test_str = "hello world" * 4094
embedded_pass_test_str = embeddings.embed_documents([pass_test_str])
print(f"Maximum number of tokens that pass: {len(embeddings.tokenizer.encode(pass_test_str))}") # 8190
print(f"Embedding dimension: {len(embedded_pass_test_str[0])}") # 1024
fail_test_str = "hello world" * 4095
print(f"Number of tokens: {len(embeddings.tokenizer.encode(fail_test_str))}") # 8192
embedded_fail_test_str = embeddings.embed_documents([fail_test_str])
```
### Error Message and Stack Trace (if applicable)
An error occurred with MistralAI: 'data'
Traceback (most recent call last):
File "/Users/y.tahtah/test_langchain_mistralai_embeddings/test.py", line 15, in <module>
embedded_fail_test_str = embeddings.embed_documents([fail_test_str])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/y.tahtah/test_langchain_mistralai_embeddings/venv/lib/python3.12/site-packages/langchain_mistralai/embeddings.py", line 135, in embed_documents
for embedding_obj in response.json()["data"]
~~~~~~~~~~~~~~~^^^^^^^^
KeyError: 'data'
### Description
I'm trying to embed some text using `MistralAIEmbeddings`, and I have split my text according to the `MAX_TOKENS` variable in `libs/partners/mistralai/langchain_mistralai`, but it's not working. Further investigation with the example code provided with this issue led me to find that the embedding model fails to embed a document well before the 16k tokens indicated by `MAX_TOKENS`.
[MistralAI's official page on endpoints](https://docs.mistral.ai/platform/endpoints/) doesn't specify a context window size for the embedding model.
Either there is an issue with how LangChain hits the endpoint (though I couldn't find anything in the code to that effect, and I doubt it, since embedding works for strings of fewer than 8190 tokens, as the example code shows), or MistralAI's embedding model has a context length of roughly 8192 tokens, in which case we should update the `MAX_TOKENS` variable.
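To pin down the actual limit I am planning to call the embeddings endpoint directly and look at the error body that the missing `data` key currently hides; a small sketch, with the endpoint and model name taken from the public Mistral docs:
```python
# Diagnostic sketch: inspect the raw error payload for the failing input size.
import os

import httpx

resp = httpx.post(
    "https://api.mistral.ai/v1/embeddings",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={"model": "mistral-embed", "input": ["hello world" * 4095]},
)
print(resp.status_code)
print(resp.json())  # should spell out the real token limit instead of KeyError: 'data'
```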
### System Info
Python 3.12.2
langchain-core==0.1.43
langchain-mistralai==0.1.2
MacOS 14.4.1 (M1) | MAX_TOKENS in MistralAIEmbeddings is incorrect | https://api.github.com/repos/langchain-ai/langchain/issues/20523/comments | 3 | 2024-04-16T17:25:29Z | 2024-04-25T00:39:07Z | https://github.com/langchain-ai/langchain/issues/20523 | 2,246,552,907 | 20,523 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.vectorstores import Qdrant
from langchain_openai import AzureOpenAIEmbeddings

embeddings = AzureOpenAIEmbeddings(
    model="text-embedding-3-small",
    azure_endpoint="",
    api_key="",
)

qdrant = Qdrant.from_documents(
    docs,
    embeddings,
    path="local_qdrant",
    collection_name="my_documents",
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When using `Qdrant.from_documents` to create a collection of documents, it seems that the collection is being recreated each time:
```
from langchain_community.vectorstores import Qdrant
qdrant = Qdrant.from_documents(
docs,
embeddings,
path="local_qdrant",
collection_name="my_documents",
)
```
On the first run, the embeddings are saved correctly to a local file.
However, when running again, I had assumed the collection would be reused, but it seems that the collection is being recreated.
My collection is very large and takes about 10 minutes to complete. If I run the add_documents call again, it still takes the same time to complete, so my assumption is that it is not being read from the collection on disk.
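As a workaround I am experimenting with constructing the store against the existing local collection instead of calling `from_documents` again; a minimal sketch (my assumption is that this skips re-embedding, please correct me if that is wrong):
```python
# Sketch: reuse the collection already persisted under ./local_qdrant instead of
# rebuilding it. Assumes the collection "my_documents" was created on a previous run.
from qdrant_client import QdrantClient
from langchain_community.vectorstores import Qdrant

client = QdrantClient(path="local_qdrant")
qdrant = Qdrant(
    client=client,
    collection_name="my_documents",
    embeddings=embeddings,
)
retriever = qdrant.as_retriever()
```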
### System Info
langchain==0.1.16
langchain-community==0.0.32
langchain-core==0.1.42
langchain-openai==0.1.2
langchain-text-splitters==0.0.1
Platform: OSX
Python Version: 3.11 | Qdrant `from_documents` does not load existing collection | https://api.github.com/repos/langchain-ai/langchain/issues/20514/comments | 3 | 2024-04-16T13:47:13Z | 2024-04-22T10:31:07Z | https://github.com/langchain-ai/langchain/issues/20514 | 2,246,114,753 | 20,514 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import os
from typing import List, Optional

from langchain.prompts import ChatPromptTemplate
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.schema.runnable import RunnableLambda
from langchain.document_loaders import WebBaseLoader
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.utils.function_calling import convert_to_openai_function
from langchain.output_parsers.openai_functions import JsonKeyOutputFunctionsParser
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
documents = loader.load()
doc = documents[0]
model = OllamaFunctions(temperature=0, model=os.environ['OPEN_HERMES_2_5'])
def flatten(matrix):
flat_list = []
for row in matrix:
flat_list += row
return flat_list
class Paper(BaseModel):
"""Information about papers mentioned."""
title: str
author: Optional[str]
class Info(BaseModel):
"""Information to extract"""
papers: List[Paper]
template = """A article will be passed to you. Extract from it all papers that are mentioned by this article.
Do not extract the name of the article itself. If no papers are mentioned that's fine - you don't need to extract any! Just return an empty list.
Do not make up or guess ANY extra information. Only extract what exactly is in the text."""
prompt = ChatPromptTemplate.from_messages([
("system", template),
("human", "{input}")
])
paper_extraction_function = [
convert_to_openai_function(Info)
]
extraction_model = model.bind(
functions=paper_extraction_function,
function_call={"name":"Info"}
)
extraction_chain = prompt | extraction_model | JsonKeyOutputFunctionsParser(key_name="papers")
text_splitter = RecursiveCharacterTextSplitter(chunk_overlap=0)
prep = RunnableLambda(
lambda x: [{"input": doc} for doc in text_splitter.split_text(x)]
)
chain = prep | extraction_chain.map() | flatten
chain.invoke(doc.page_content)
```
### Error Message and Stack Trace (if applicable)
-> custom debug print added at line 105 (`print(parsed_chat_result.keys())`) to check on which chunk the error occurred:
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['thoughts', 'command'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
dict_keys(['tool', 'tool_input'])
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[107], line 1
----> 1 chain.invoke(doc.page_content)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/base.py:2499, in RunnableSequence.invoke(self, input, config)
2497 try:
2498 for i, step in enumerate(self.steps):
-> 2499 input = step.invoke(
2500 input,
2501 # mark each step as a child run
2502 patch_config(
2503 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
2504 ),
2505 )
2506 # finish the root run
2507 except BaseException as e:
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/base.py:4262, in RunnableEachBase.invoke(self, input, config, **kwargs)
4259 def invoke(
4260 self, input: List[Input], config: Optional[RunnableConfig] = None, **kwargs: Any
4261 ) -> List[Output]:
-> 4262 return self._call_with_config(self._invoke, input, config, **kwargs)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/base.py:1625, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
1621 context = copy_context()
1622 context.run(var_child_runnable_config.set, child_config)
1623 output = cast(
1624 Output,
-> 1625 context.run(
1626 call_func_with_variable_args, # type: ignore[arg-type]
1627 func, # type: ignore[arg-type]
1628 input, # type: ignore[arg-type]
1629 config,
1630 run_manager,
1631 **kwargs,
1632 ),
1633 )
1634 except BaseException as e:
1635 run_manager.on_chain_error(e)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/config.py:347, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
345 if run_manager is not None and accepts_run_manager(func):
346 kwargs["run_manager"] = run_manager
--> 347 return func(input, **kwargs)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/base.py:4255, in RunnableEachBase._invoke(self, inputs, run_manager, config, **kwargs)
4248 def _invoke(
4249 self,
4250 inputs: List[Input],
(...)
4253 **kwargs: Any,
4254 ) -> List[Output]:
-> 4255 return self.bound.batch(
4256 inputs, patch_config(config, callbacks=run_manager.get_child()), **kwargs
4257 )
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/base.py:2643, in RunnableSequence.batch(self, inputs, config, return_exceptions, **kwargs)
2641 else:
2642 for i, step in enumerate(self.steps):
-> 2643 inputs = step.batch(
2644 inputs,
2645 [
2646 # each step a child run of the corresponding root run
2647 patch_config(
2648 config, callbacks=rm.get_child(f"seq:step:{i+1}")
2649 )
2650 for rm, config in zip(run_managers, configs)
2651 ],
2652 )
2654 # finish the root runs
2655 except BaseException as e:
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/base.py:4544, in RunnableBindingBase.batch(self, inputs, config, return_exceptions, **kwargs)
4542 else:
4543 configs = [self._merge_configs(config) for _ in range(len(inputs))]
-> 4544 return self.bound.batch(
4545 inputs,
4546 configs,
4547 return_exceptions=return_exceptions,
4548 **{**self.kwargs, **kwargs},
4549 )
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/base.py:634, in Runnable.batch(self, inputs, config, return_exceptions, **kwargs)
631 return cast(List[Output], [invoke(inputs[0], configs[0])])
633 with get_executor_for_config(configs[0]) as executor:
--> 634 return cast(List[Output], list(executor.map(invoke, inputs, configs)))
File /usr/lib/python3.11/concurrent/futures/_base.py:619, in Executor.map.<locals>.result_iterator()
616 while fs:
617 # Careful not to keep a reference to the popped future
618 if timeout is None:
--> 619 yield _result_or_cancel(fs.pop())
620 else:
621 yield _result_or_cancel(fs.pop(), end_time - time.monotonic())
File /usr/lib/python3.11/concurrent/futures/_base.py:317, in _result_or_cancel(***failed resolving arguments***)
315 try:
316 try:
--> 317 return fut.result(timeout)
318 finally:
319 fut.cancel()
File /usr/lib/python3.11/concurrent/futures/_base.py:456, in Future.result(self, timeout)
454 raise CancelledError()
455 elif self._state == FINISHED:
--> 456 return self.__get_result()
457 else:
458 raise TimeoutError()
File /usr/lib/python3.11/concurrent/futures/_base.py:401, in Future.__get_result(self)
399 if self._exception:
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
404 self = None
File /usr/lib/python3.11/concurrent/futures/thread.py:58, in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/config.py:466, in ContextThreadPoolExecutor.map.<locals>._wrapped_fn(*args)
465 def _wrapped_fn(*args: Any) -> T:
--> 466 return contexts.pop().run(fn, *args)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/runnables/base.py:627, in Runnable.batch.<locals>.invoke(input, config)
625 return e
626 else:
--> 627 return self.invoke(input, config, **kwargs)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:158, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
147 def invoke(
148 self,
149 input: LanguageModelInput,
(...)
153 **kwargs: Any,
154 ) -> BaseMessage:
155 config = ensure_config(config)
156 return cast(
157 ChatGeneration,
--> 158 self.generate_prompt(
159 [self._convert_input(input)],
160 stop=stop,
161 callbacks=config.get("callbacks"),
162 tags=config.get("tags"),
163 metadata=config.get("metadata"),
164 run_name=config.get("run_name"),
165 run_id=config.pop("run_id", None),
166 **kwargs,
167 ).generations[0][0],
168 ).message
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:560, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
552 def generate_prompt(
553 self,
554 prompts: List[PromptValue],
(...)
557 **kwargs: Any,
558 ) -> LLMResult:
559 prompt_messages = [p.to_messages() for p in prompts]
--> 560 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:421, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
419 if run_managers:
420 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 421 raise e
422 flattened_outputs = [
423 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item]
424 for res in results
425 ]
426 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:411, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
408 for i, m in enumerate(messages):
409 try:
410 results.append(
--> 411 self._generate_with_cache(
412 m,
413 stop=stop,
414 run_manager=run_managers[i] if run_managers else None,
415 **kwargs,
416 )
417 )
418 except BaseException as e:
419 if run_managers:
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:632, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
630 else:
631 if inspect.signature(self._generate).parameters.get("run_manager"):
--> 632 result = self._generate(
633 messages, stop=stop, run_manager=run_manager, **kwargs
634 )
635 else:
636 result = self._generate(messages, stop=stop, **kwargs)
File ~/Workspace/lchain/venv311/lib/python3.11/site-packages/langchain_experimental/llms/ollama_functions.py:107, in OllamaFunctions._generate(self, messages, stop, run_manager, **kwargs)
101 raise ValueError(
102 f'"{self.llm.model}" did not respond with valid JSON. Please try again.'
103 )
105 print(parsed_chat_result.keys()) #CUSTOM added for DEBUG
--> 107 called_tool_name = parsed_chat_result["tool"]
108 called_tool_arguments = parsed_chat_result["tool_input"]
109 called_tool = next(
110 (fn for fn in functions if fn["name"] == called_tool_name), None
111 )
KeyError: 'tool'
### Description
While doing one of the tutorials from DLAI, an issue occurred in the function `OllamaFunctions._generate` from the langchain_experimental package.
I used the given article and tried to parse it by following the tutorial steps (see the Python code above).
The issue is that the dict keys() in `OllamaFunctions._generate` sometimes do not contain `dict_keys(['tool', 'tool_input'])` but other values such as `dict_keys(['thoughts', 'command'])`, which ends up as a KeyError.
The code steps above worked in the tutorial (for ChatOpenAI), but I did not try the OpenAI chat model because I do not have an API key; I am using a local Ollama `openhermes_2.5_7b_q5_k_m` model.
What I have observed:
> len(doc.page_content) == 43902
there is no issue when
> chain.invoke(doc.page_content[:30000])
and issue starts for:
> chain.invoke(doc.page_content[:40000])
In such cases, an `except KeyError` handler should be added so that the user still gets the final result together with some info, or a more precise error should be raised.
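As an illustration of the kind of guard I mean inside `OllamaFunctions._generate` (variable names are taken from the traceback above; this is only a sketch, not a tested patch):

```python
# Sketch: raise a descriptive error instead of a bare KeyError when the model's
# JSON does not contain the expected "tool" key.
if "tool" not in parsed_chat_result:
    raise ValueError(
        f'"{self.llm.model}" did not produce a "tool" key in its JSON output. '
        f"Got keys: {list(parsed_chat_result.keys())}. Please try again."
    )
called_tool_name = parsed_chat_result["tool"]
called_tool_arguments = parsed_chat_result["tool_input"]
```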
### System Info
System Information
------------------
> OS: Linux
> OS Version: #28~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Mar 15 10:51:06 UTC 2
> Python Version: 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.43
> langchain: 0.1.16
> langchain_community: 0.0.33
> langsmith: 0.1.48
> langchain_experimental: 0.0.57
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
Ollama server | openhermes_2.5_7b_q5_k_m | CUDA | KeyError: "tool" in langchain_experimental -> OllamaFunctions._generate | https://api.github.com/repos/langchain-ai/langchain/issues/20513/comments | 2 | 2024-04-16T13:36:34Z | 2024-06-13T09:55:45Z | https://github.com/langchain-ai/langchain/issues/20513 | 2,246,090,039 | 20,513 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run
def fake_chain(inputs: dict) -> dict:
return {**inputs, "key": "extra"}
def on_start(run: Run):
print("on_start:", run.inputs)
def on_end(run: Run):
print("on_end: ", run.outputs)
chain = RunnableLambda(fake_chain).with_listeners(on_end=on_end, on_start=on_start)
chain = chain.map()
data = [{"name": "one"}, {"name": "two"}]
out = chain.invoke(data, config={"max_concurrency": 1})
print("result: ", out)
```
`max_concurrency` is added for simplicity.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I want to store the `fake_chain` output using listeners. `with_listeners()` allows hooking only the top-level runnable (according to its docstring). But the `run` object is incorrect if `map()` is used.
I expect to see
```
on_start: {'name': 'one'}
on_start: {'name': 'two'}
on_end: {'name': 'one', 'key': 'extra'}
on_end: {'name': 'two', 'key': 'extra'}
result: [{'name': 'one', 'key': 'extra'}, {'name': 'two', 'key': 'extra'}]
```
but get
```
on_start: {'name': 'one'}
on_start: {'name': 'one'} # <!
on_end: {'name': 'one', 'key': 'extra'}
on_end: {'name': 'one', 'key': 'extra'} # <!
result: [{'name': 'one', 'key': 'extra'}, {'name': 'two', 'key': 'extra'}]
```
I didn't dive deeper, but something seems to go wrong in `RunnableBindingBase.batch() -> _merge_configs()` (a guess).
### System Info
```shell
$ pip freeze | grep langchain
langchain==0.1.16
langchain-anthropic==0.1.4
langchain-community==0.0.33
langchain-core==0.1.43
langchain-google-genai==0.0.11
langchain-google-vertexai==0.1.2
langchain-groq==0.0.1
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
```
platform: `linux`
python: `3.11.8` | Incorrect listeners parameters for Runnable.with_listeners() and .map() | https://api.github.com/repos/langchain-ai/langchain/issues/20509/comments | 3 | 2024-04-16T11:00:20Z | 2024-05-13T15:16:18Z | https://github.com/langchain-ai/langchain/issues/20509 | 2,245,754,102 | 20,509 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
self.llm = self.llm.with_fallbacks(fallbackModels)
self.agent = create_tool_calling_agent(self.llm.llm, self.tools, self.promptTemplate.getAgentPrompt(self.tools))
### Error Message and Stack Trace (if applicable)
self.agent = create_tool_calling_agent(self.llm.llm, self.tools, self.promptTemplate.getAgentPrompt(self.tools))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/agents/tool_calling_agent/base.py", line 85, in create_tool_calling_agent
raise ValueError(
ValueError: This function requires a .bind_tools method be implemented on the LLM.
### Description
My code works fine with `create_tool_calling_agent` when I do not call the `with_fallbacks` function.
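I would expect the same error to reproduce outside my class with a minimal snippet like this (the model names are placeholders, and `tools` / `prompt` are assumed to be defined as in my setup):

```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent

llm = ChatOpenAI(model="gpt-3.5-turbo")                  # works on its own: has .bind_tools
fallback_llm = ChatOpenAI(model="gpt-4-turbo")           # placeholder fallback model
llm_with_fallbacks = llm.with_fallbacks([fallback_llm])  # returns a RunnableWithFallbacks

# Raises: ValueError: This function requires a .bind_tools method be implemented on the LLM.
agent = create_tool_calling_agent(llm_with_fallbacks, tools, prompt)
```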
### System Info
langchain==0.1.16
langchain-community==0.0.33
langchain-core==0.1.43
langchain-experimental==0.0.49
langchain-google-genai==1.0.1
langchain-openai==0.1.3
langchain-text-splitters==0.0.1
langchainhub==0.1.14
platform linux
python 3.11 | LLM with_fallbacks function not work with create_tool_calling_agent | https://api.github.com/repos/langchain-ai/langchain/issues/20499/comments | 7 | 2024-04-16T06:51:10Z | 2024-07-10T04:57:16Z | https://github.com/langchain-ai/langchain/issues/20499 | 2,245,254,883 | 20,499 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
### Example Code
from langchain_openai import AzureChatOpenAI
from langchain_openai import AzureOpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import Chroma
from typing import Literal
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain import hub
from langchain_core.output_parsers import StrOutputParser
from langchain_community.tools.tavily_search import TavilySearchResults
# Docs to index
urls = [
"https://lilianweng.github.io/posts/2023-06-23-agent/",
"https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/",
"https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/",
]
# Load
docs = [WebBaseLoader(url).load() for url in urls]
docs_list = [item for sublist in docs for item in sublist]
# Split
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
chunk_size=500, chunk_overlap=0
)
doc_splits = text_splitter.split_documents(docs_list)
# Add to vectorstore
vectorstore = Chroma.from_documents(
documents=doc_splits,
collection_name="rag-chroma",
embedding=embeddings,
)
retriever = vectorstore.as_retriever()
# Data model
class RouteQuery(BaseModel):
"""Route a user query to the most relevant datasource."""
datasource: Literal["vectorstore", "web_search"] = Field(
...,
description="Given a user question choose to route it to web search or a vectorstore.",
)
# LLM with function call
llm = AzureChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
structured_llm_router = llm.with_structured_output(RouteQuery)
# Prompt
system = """You are an expert at routing a user question to a vectorstore or web search.
The vectorstore contains documents related to agents, prompt engineering, and adversarial attacks.
Use the vectorstore for questions on these topics. Otherwise, use web-search."""
route_prompt = ChatPromptTemplate.from_messages(
[
("system", system),
("human", "{question}"),
]
)
question_router = route_prompt | structured_llm_router
print(question_router.invoke({"question": "Who will the Bears draft first in the NFL draft?"}))
print(question_router.invoke({"question": "What are the types of agent memory?"}))
### Error Message and Stack Trace (if applicable)
C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\_api\beta_decorator.py:87: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change.
warn_beta(
---ROUTE QUESTION---
Traceback (most recent call last):
File "C:\Users\prakotian\Desktop\Projects\GenAI Projects\AdaptiveRAG\app.py", line 255, in <module>
for output in app.stream(inputs):
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langgraph\pregel\__init__.py", line 686, in stream
_panic_or_proceed(done, inflight, step)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langgraph\pregel\__init__.py", line 1049, in _panic_or_proceed
raise exc
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 2499, in invoke
input = step.invoke(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langgraph\utils.py", line 49, in invoke
ret = self.func(input, merge_configs(self.config, config), **self.kwargs)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langgraph\graph\graph.py", line 67, in _route
result = self.condition.invoke(reader(config) if reader else input, config)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 3961, in invoke
return self._call_with_config(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 1625, in _call_with_config
context.run(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 3835, in _invoke
output = call_func_with_variable_args(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "C:\Users\prakotian\Desktop\Projects\GenAI Projects\AdaptiveRAG\app.py", line 142, in route_question
source = question_router.invoke({"question": question})
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 2499, in invoke
input = step.invoke(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 4511, in invoke
return self.bound.invoke(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\language_models\chat_models.py", line 158, in invoke
self.generate_prompt(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\language_models\chat_models.py", line 560, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\language_models\chat_models.py", line 421, in generate
raise e
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\language_models\chat_models.py", line 411, in generate
self._generate_with_cache(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\language_models\chat_models.py", line 632, in _generate_with_cache
result = self._generate(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_openai\chat_models\base.py", line 548, in _generate
response = self.client.create(messages=message_dicts, **params)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\openai\_utils\_utils.py", line 275, in wrapper
return func(*args, **kwargs)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\openai\resources\chat\completions.py", line 667, in create
return self._post(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\openai\_base_client.py", line 1233, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\openai\_base_client.py", line 922, in request
return self._request(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\openai\_base_client.py", line 1013, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid parameter: 'response_format' of type 'json_object' is not supported with this model.", 'type': 'invalid_request_error', 'param': 'response_format', 'code': None}}
(Py10) C:\Users\prakotian\Desktop\Projects\GenAI Projects\AdaptiveRAG>
(Py10) C:\Users\prakotian\Desktop\Projects\GenAI Projects\AdaptiveRAG>python router.py
C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\_api\beta_decorator.py:87: LangChainBetaWarning: The function `with_structured_output` is in beta. It is actively being worked on, so the API may change.
warn_beta(
Traceback (most recent call last):
File "C:\Users\prakotian\Desktop\Projects\GenAI Projects\AdaptiveRAG\router.py", line 87, in <module>
print(question_router.invoke({"question": "Who will the Bears draft first in the NFL draft?"}))
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 2499, in invoke
input = step.invoke(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\base.py", line 169, in invoke
return self._call_with_config(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\base.py", line 1625, in _call_with_config
context.run(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\runnables\config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\base.py", line 170, in <lambda>
lambda inner_input: self.parse_result(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\openai_tools.py", line 182, in parse_result
json_results = super().parse_result(result, partial=partial)
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\openai_tools.py", line 129, in parse_result
tool_calls = parse_tool_calls(
File "C:\Users\prakotian\AppData\Local\miniconda3\envs\Py10\lib\site-packages\langchain_core\output_parsers\openai_tools.py", line 85, in parse_tool_calls
raise OutputParserException("\n\n".join(exceptions))
langchain_core.exceptions.OutputParserException: Function RouteQuery arguments:
{
datasource: "web_search"
}
are not valid JSON. Received JSONDecodeError Expecting property name enclosed in double quotes: line 2 column 3 (char 4)
### Description
Expected Output:-
datasource='web_search'
datasource='vectorstore'
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.43
> langchain: 0.1.16
> langchain_community: 0.0.33
> langsmith: 0.1.31
> langchain_cohere: 0.1.2
> langchain_experimental: 0.0.54
> langchain_openai: 0.1.3
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
> langgraph: 0.0.37 | llm.with_structured_output(RouteQuery) fails running AdaptiveRAG Example with AzureOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/24100/comments | 4 | 2024-04-16T06:40:47Z | 2024-07-11T00:30:11Z | https://github.com/langchain-ai/langchain/issues/24100 | 2,401,975,261 | 24,100 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain.chains import GraphCypherQAChain
from langchain_community.graphs import Neo4jGraph
from chatglm3 import chatglm3
llm = chatglm3()
graph = Neo4jGraph(
url="bolt://xxxx", username="xxxx", password="xxxx"
)
graph.refresh_schema()
print(graph.schema)
chain = GraphCypherQAChain.from_llm(llm, graph=graph, verbose=True)
chain.run("Who played in Top Gun?")
### Error Message and Stack Trace (if applicable)
Node properties are the following:
Movie {name: STRING},Actor {name: STRING}
Relationship properties are the following:
The relationships are the following:
(:Actor)-[:ACTED_IN]->(:Movie)
> Entering new GraphCypherQAChain chain...
D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\_api\deprecation.py:117: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
Traceback (most recent call last):
File "E:\LLM_project\ChatGLM3-main\LLM+KG.py", line 16, in <module>
chain.run("Who played in Top Gun?")
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\base.py", line 545, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\base.py", line 378, in __call__
return self.invoke(
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\base.py", line 163, in invoke
raise e
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\graph_qa\cypher.py", line 246, in _call
generated_cypher = self.cypher_generation_chain.run(
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\base.py", line 545, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\base.py", line 378, in __call__
return self.invoke(
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\base.py", line 163, in invoke
raise e
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\llm.py", line 103, in _call
response = self.generate([inputs], run_manager=run_manager)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain\chains\llm.py", line 115, in generate
return self.llm.generate_prompt(
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\language_models\llms.py", line 568, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\language_models\llms.py", line 741, in generate
output = self._generate_helper(
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\language_models\llms.py", line 605, in _generate_helper
raise e
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\language_models\llms.py", line 592, in _generate_helper
self._generate(
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\langchain_core\language_models\llms.py", line 1177, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "E:\LLM_project\ChatGLM3-main\chatglm3.py", line 33, in _call
response = self.model.chat(self.tokenizer, messages)
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\HF_HOME\modules\transformers_modules\6B_32k\modeling_chatglm.py", line 1034, in chat
inputs = tokenizer.build_chat_input(query, history=history, role=role)
File "D:\HF_HOME\modules\transformers_modules\6B_32k\tokenization_chatglm.py", line 193, in build_chat_input
input_ids.extend(self.build_single_message(role, "", query))
File "D:\HF_HOME\modules\transformers_modules\6B_32k\tokenization_chatglm.py", line 180, in build_single_message
message_tokens = self.tokenizer.encode(message)
File "D:\HF_HOME\modules\transformers_modules\6B_32k\tokenization_chatglm.py", line 37, in encode
assert type(s) is str
AssertionError
Exception ignored in: <function Driver.__del__ at 0x0000020CE064EB80>
Traceback (most recent call last):
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\neo4j\_sync\driver.py", line 507, in __del__
File "D:\anaconda\envs\ChatGLM-6B\lib\site-packages\neo4j\_meta.py", line 229, in unclosed_resource_warn
TypeError: 'NoneType' object is not callable
### Description
An error occurred while running the Neo4j tutorial provided in the official Langchain documentation
### System Info
This is my environment:
OS:windows 10
langchain 0.1.11
langchain-community 0.0.25
langchain-core 0.1.29
langchain-text-splitters 0.0.1
langchainhub 0.1.15
langsmith 0.1.22
python 3.8.16 | When I ran knowledge graph enhanced retrieval using Langchain+Neo4j, I encountered an error | https://api.github.com/repos/langchain-ai/langchain/issues/20497/comments | 0 | 2024-04-16T06:39:34Z | 2024-04-18T04:12:56Z | https://github.com/langchain-ai/langchain/issues/20497 | 2,245,233,619 | 20,497 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The Callbacks documentation and code are a little ragged around the edges (imo).
### Error Message and Stack Trace (if applicable)
See the Callbacks [page](https://python.langchain.com/docs/modules/callbacks/).
### Description
The documentation has to be brought to the LCEL era, updated to include events like `on_retriever_start` and given a similar look and feel to other pages (e.g. Agents).
The code for the built-in handler `StdOutCallbackHandler` has to be (lightly) modified for the LCEL era, and `FileCallbackHandler` has to be moved from `community` to `core`.
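As a concrete illustration of the kind of example the updated page could include (the hook names come from the current `BaseCallbackHandler` mixins; this is a sketch, not an existing doc snippet):

```python
from langchain_core.callbacks import BaseCallbackHandler


class RetrieverLoggingHandler(BaseCallbackHandler):
    """Minimal handler showing the retriever lifecycle hooks."""

    def on_retriever_start(self, serialized, query, **kwargs):
        print(f"Retriever started with query: {query!r}")

    def on_retriever_end(self, documents, **kwargs):
        print(f"Retriever returned {len(documents)} documents")
```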
### System Info
NA | Callbacks need some TLC | https://api.github.com/repos/langchain-ai/langchain/issues/20493/comments | 3 | 2024-04-16T05:36:55Z | 2024-05-23T07:10:25Z | https://github.com/langchain-ai/langchain/issues/20493 | 2,245,130,783 | 20,493 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import json

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.utilities import SerpAPIWrapper
from langchain.agents import create_openai_tools_agent
from langchain.agents import AgentExecutor, Tool

chat = ChatOpenAI(model="gpt-3.5-turbo-1106", streaming=True)
search = SerpAPIWrapper()
#search = GoogleSearchAPIWrapper()
tools = [Tool(
name="google_search",
description="Search Google for recent results.",
func=search.run,
return_direct=False
)]
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!",
),
MessagesPlaceholder(variable_name="messages"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
agent = create_openai_tools_agent(tools = tools,llm = chat,prompt=prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
output = {}  # accumulator for the streamed chunks
# chat_history is my existing message-history object holding the prior conversation
for chunk in agent_executor.stream({"messages": chat_history.messages}):
for key in chunk:
if key not in output:
output[key] = chunk[key]
else:
output[key] += chunk[key]
if "actions" in chunk:
for action in chunk["actions"]:
print(f"Calling Tool: `{action.tool}` with input `{action.tool_input}`")
continue
if "steps" in chunk:
observation = chunk["steps"][-1].observation
for step in chunk["steps"]:
print(f"Tool Result: `{step.observation}`")
continue
if "output" in chunk:
print(chunk["output"], end="", flush=True)
response_json = json.dumps({"stat": "SUCCESS", "content": chunk["output"]})
```
### Error Message and Stack Trace (if applicable)
```json
{'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_APhVboGGQV2ZfLqDnqukNltV", 'type': 'invalid_request_error', 'param': 'messages.[4].role', 'code': None}}
```
### Description
I want to integrate Google search into my chatbot and use streaming output. It returned the following error: `openai.BadRequestError` (error code 400).
I searched on both Google and GitHub but did not find any relevant information.
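My current suspicion (unverified) is that the history I pass back to the agent contains an assistant message with `tool_calls` that was never followed by matching tool messages. A hypothetical guard along these lines is what I am experimenting with; the helper name and the exact shape of `additional_kwargs["tool_calls"]` are assumptions on my part:

```python
from langchain_core.messages import AIMessage, ToolMessage


def strip_dangling_tool_calls(messages):
    """Drop assistant messages whose tool_calls were never answered."""
    answered = {m.tool_call_id for m in messages if isinstance(m, ToolMessage)}
    cleaned = []
    for m in messages:
        tool_calls = m.additional_kwargs.get("tool_calls", []) if isinstance(m, AIMessage) else []
        if tool_calls and not all(tc["id"] in answered for tc in tool_calls):
            continue  # skip the assistant message with unanswered tool calls
        cleaned.append(m)
    return cleaned
```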
### System Info
System Information
OS: Windows
OS Version: 10.0.19045
Python Version: 3.12.1 (tags/v3.12.1:2305ca5, Dec 7 2023, 22:03:25) [MSC v.1937 64 bit (AMD64)]
Package Information
langchain_core: 0.1.32
langchain: 0.1.12
langchain_community: 0.0.28
langsmith: 0.1.27
langchain_openai: 0.0.8
langchain_text_splitters: 0.0.1
langchainhub: 0.1.15 | When I use the tool in Agent, it returns OPEN AI 400 Bad Request. | https://api.github.com/repos/langchain-ai/langchain/issues/20492/comments | 14 | 2024-04-16T04:23:02Z | 2024-08-01T16:06:19Z | https://github.com/langchain-ai/langchain/issues/20492 | 2,245,061,633 | 20,492 |