issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### System Info
I encountered an issue when executing a SQL statement that involves joining multiple tables. I am working with a SQL Server database, and the following SQL query is returning an error:
```sql
SELECT TOP 50000 [StoreNo], [StoreName], [Quantity], [GoodsName], [GoodsNo]
FROM [JBStore]
JOIN [CKCurrStore] ON [JBStore].[StoreNo] = [CKCurrStore].[StoreNo]
JOIN [JBGoods] ON [CKCurrStore].[GoodsNo] = [JBGoods].[GoodsNo]
```
The error arises because the [StoreNo] and [GoodsNo] columns appear in more than one of the joined tables and are not qualified with their table names, so SQL Server reports them as ambiguous and the query fails.
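For reference, the ambiguity disappears once every shared column is qualified with its table, e.g.:
```sql
SELECT TOP 50000 [JBStore].[StoreNo], [JBStore].[StoreName], [CKCurrStore].[Quantity],
       [JBGoods].[GoodsName], [JBGoods].[GoodsNo]
FROM [JBStore]
JOIN [CKCurrStore] ON [JBStore].[StoreNo] = [CKCurrStore].[StoreNo]
JOIN [JBGoods] ON [CKCurrStore].[GoodsNo] = [JBGoods].[GoodsNo]
```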
Here are my table definitions:
```python
table_info = {
"CKXSCheck": """
CREATE TABLE CKXSCheck (
"OrderNo" VARCHAR PRIMARY KEY,
"OrderDate" DATETIME,
"Amount" REAL,
"InOutTypeNo" INTEGER,
"CKAmount" REAL,
PRIMARY KEY ("OrderNo")
)""",
"CKXSCheckDetail": """
CREATE TABLE CKXSCheckDetail (
"OrderNo" VARCHAR,
"SerialNo" INTEGER,
"GoodsNo" INTEGER,
"Amount" REAL,
"Quantity" REAL,
"Price" REAL,
PRIMARY KEY ("OrderNo", "SerialNo"),
FOREIGN KEY ("OrderNo") REFERENCES CKXSCheck("OrderNo"),
FOREIGN KEY ("GoodsNo") REFERENCES JBGoods("GoodsNo")
)""",
"JBGoods": """
CREATE TABLE JBGoods (
"GoodsNo" INTEGER PRIMARY KEY,
"GoodsCode" VARCHAR,
"GoodsName" VARCHAR,
PRIMARY KEY ("GoodsNo")
)""",
"CKCurrStore": """
CREATE TABLE CKCurrStore (
"StoreNo" VARCHAR,
"GoodsNo" VARCHAR,
"Quantity" REAL,
FOREIGN KEY ("StoreNo") REFERENCES CKXSCheck("JBStore"),
FOREIGN KEY ("GoodsNo") REFERENCES JBGoods("GoodsNo")
)""",
"JBStore": """
CREATE TABLE JBStore (
"StoreNo" VARCHAR PRIMARY KEY,
"StoreName" VARCHAR,
PRIMARY KEY ("StoreNo")
)""",
}
```
How can I resolve this issue?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [x] Embedding Models
- [x] Prompts / Prompt Templates / Prompt Selectors
- [x] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Send a request to the '/api/query' endpoint
### Expected behavior
```python
import os
import sqlite3
import pymssql
import tkinter as tk
import tkinter.ttk as ttk
from langchain.agents import create_sql_agent, ZeroShotAgent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
from langchain.prompts.prompt import PromptTemplate
from typing import Dict, Any
from langchain import LLMChain
from typing import Any, List, Tuple
from urllib.parse import quote_plus as urlquote
from sqlalchemy import create_engine
from sqlalchemy.engine import reflection
from sqlalchemy import inspect
from sqlalchemy.orm import sessionmaker
import pandas as pd
from sqlalchemy.sql import text as sql_text
from sqlalchemy import Table, MetaData, select
from sqlalchemy.sql import text
import json
import decimal
import datetime
import time
from sql_utils import add_table_prefix_to_columns
from flask import Flask, request, render_template,jsonify
import re
# Query all goods
# Query the top 100 goods by sales amount
# Query the stock quantity detail table, showing store number, store name, quantity, goods name, goods number
# Replace with your API key
# Customized English prompt
_DEFAULT_TEMPLATE = """You are an MS SQL expert. Given an input question, first create a syntactically correct MS SQL query to run, then look at the results of the query and return the answer to the input question. Note that you should never perform any operations that could modify the database. This includes UPDATE, DELETE, or INSERT operations. Your job is only to read data and answer questions.
Unless the user specifies in the question a specific number of examples to obtain, query for 50000 results using the TOP clause as per MS SQL. You can order the results to return the most informative data in the database.
Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in square brackets ([]) to denote them as delimited identifiers.
Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.
Use the following format:
Question: "Question here"
SQLQuery: "SQL Query to run"
SQLResult: "Result of the SQLQuery"
Only use the following tables:
{table_info}
Question: {input}"""
PROMPT = PromptTemplate(
input_variables=["input", "table_info"], template=_DEFAULT_TEMPLATE
)
custom_table_info_OLD= {
"v_CKXSDetail": """CREATE TABLE v_CKXSDetail (
"OrderNo" VARCHAR , -- order number,
"OrderDate":DATETime,-- sales date
"GoodsCode":VARCHAR,-- goods code
"GoodsName":VARCHAR,-- goods name
"OrderDate":DATETime,-- sales date
"Amount" numeric(18,2), -- sales amount,
"InOutTypeNo" INTEGER, -- a value of 2 indicates a return
"Quantity" INTEGER, -- sales quantity
"CKAmount" REAL-- cost amount
)"""}
custom_table_info = {
"CKXSCheck": """CREATE TABLE CKXSCheck (
"OrderNo" VARCHAR PRIMARY KEY, -- order number, string primary key
"OrderDate":DATETime,-- sales date
"Amount" REAL, -- total sales amount of the order, (case InOutTypeNo when 2 then -Amount else Amount end)
"InOutTypeNo" INTEGER, -- a value of 2 indicates a return
"CKAmount" REAL,-- total cost amount of the order
PRIMARY KEY ("OrderNo"), -- set OrderNo as the primary key
)""",
"CKXSCheckDetail": """CREATE TABLE CKXSCheckDetail (
"OrderNo" VARCHAR, -- order number, foreign key
"SerialNo" INTEGER, -- serial number, primary key
"GoodsNo" INTEGER, -- goods number, foreign key
"Amount" REAL, -- goods sales amount
"Quantity" REAL, -- goods sales quantity
"Price" REAL, -- unit sales price
PRIMARY KEY ("OrderNo", "SerialNo"), -- set OrderNo and SerialNo as a composite primary key
FOREIGN KEY ("OrderNo") REFERENCES CKXSCheck("OrderNo"), -- OrderNo is a foreign key to the CKXSCheck table
FOREIGN KEY ("GoodsNo") REFERENCES JBGoods("GoodsNo") -- GoodsNo is a foreign key to the JBGoods table
)""",
"JBGoods": """CREATE TABLE JBGoods (
"GoodsNo" INTEGER PRIMARY KEY, -- goods number, primary key
"GoodsCode" VARCHAR, -- goods code
"GoodsName" VARCHAR, -- goods name
PRIMARY KEY ("GoodsNo") -- set GoodsNo as the primary key
)""",
"CKCurrStore": """CREATE TABLE CKCurrStore (
"StoreNo" VARCHAR, -- store number, foreign key
"GoodsNo" VARCHAR, -- goods number, foreign key
"Quantity" REAL, -- stock quantity
FOREIGN KEY ("StoreNo") REFERENCES CKXSCheck("JBStore"), -- StoreNo is a foreign key to the JBStore table
FOREIGN KEY ("GoodsNo") REFERENCES JBGoods("GoodsNo") -- GoodsNo is a foreign key to the JBGoods table
)""",
"JBStore": """CREATE TABLE JBStore (
"StoreNo" VARCHAR, -- store number, primary key
"StoreName" VARCHAR, -- store name
PRIMARY KEY ("StoreNo") -- set StoreNo as the primary key
)""",
}
db = SQLDatabase.from_uri(f"mssql+pymssql://{user_name}:{urlquote(psw)}@{ip}/{database}", include_tables=['CKXSCheck', 'CKXSCheckDetail', 'JBGoods', 'CKCurrStore', 'JBStore'], custom_table_info=custom_table_info)
llm=OpenAI(temperature=0)
class CustomSQLQueryChain(SQLDatabaseChain):
def _call(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
prompt = self.prompt or SQL_PROMPTS.get(self.database.dialect, PROMPT)
llm_chain = LLMChain(llm=self.llm, prompt=prompt)
input_text = f"{inputs[self.input_key]}\nSQLQuery:"
self.callback_manager.on_text(input_text, verbose=self.verbose)
table_names_to_use = inputs.get("table_names_to_use")
table_info = self.database.get_table_info(table_names=table_names_to_use)
llm_inputs = {
"input": input_text,
"top_k": self.top_k,
"dialect": self.database.dialect,
"table_info": table_info,
"stop": ["\nSQLResult:"],
}
intermediate_steps = []
sql_cmd = llm_chain.predict(**llm_inputs).strip() # simplified this line
intermediate_steps.append({"SQLQuery": sql_cmd})
self.callback_manager.on_text(sql_cmd, color="green", verbose=self.verbose)
chain_result: Dict[str, Any] = {
"intermediate_steps": intermediate_steps,
"result": sql_cmd,
}
return chain_result
custom_db_chain = CustomSQLQueryChain(llm=llm, database=db, prompt=PROMPT, verbose=True, return_intermediate_steps=True)
def display_table(tree, columns, data):
print(data)
print(columns)
# Remove all existing columns
tree['columns'] = []
# Set the new columns
tree["columns"] = columns
tree["show"] = "headings"
for col in columns:
tree.heading(col, text=col)
tree.column(col, width=100)
# Remove all existing rows
for i in tree.get_children():
tree.delete(i)
# Insert the new rows
for row in data:
tree.insert("", "end", values=row)
root = tk.Tk()
root.title("Chat with your Tabular Data")
entry = ttk.Entry(root, font=("Arial", 14))
entry.pack(padx=20, pady=20, fill=tk.X)
def get_chinese_col_names(field_names, conn):
# Look up the Chinese column names from the XTSQLField table
field_names_str = ', '.join(f"'{field_name}'" for field_name in field_names) # convert the field name list into a string suitable for the SQL query
query = f"""
WITH numbered_rows AS (
SELECT SqFieldName,
(CASE SqFieldName WHEN 'GoodsCode' THEN '商品编号' WHEN 'GoodsName' THEN '商品名称' WHEN 'Quantity' THEN '数量' ELSE CClientName END) as CClientName,
ROW_NUMBER() OVER (PARTITION BY SqFieldName ORDER BY CClientName) AS rn
FROM XTSQLField
WHERE SqFieldName in ({field_names_str})
AND ASCII(LEFT(CClientName, 1)) > 127 -- this condition keeps only the rows whose CClientName is Chinese
)
SELECT SqFieldName, CClientName
FROM numbered_rows
WHERE rn = 1;
"""
print(query)
result_proxy = conn.execute(query)
name_mapping = {row['SqFieldName']: row['CClientName'] for row in result_proxy} # build a dictionary mapping field names to Chinese names
chinese_col_names = []
for field_name in field_names:
chinese_name = name_mapping.get(field_name, field_name) # fall back to the original field name if no Chinese name was found
chinese_col_names.append(chinese_name)
return chinese_col_names
class DecimalEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, decimal.Decimal):
return float(obj)
return super(DecimalEncoder, self).default(obj)
app = Flask(__name__)
# Paginated query
@app.route('/api/get-messages', methods=['POST'])
def get_messages():
# Get the pagination parameter passed from the frontend
page = request.args.get('page', default = 1, type = int)
# Connect to the database
engine = create_engine(f"mssql+pymssql://{user_name}:{urlquote(psw)}@{ip}/{database}")
metadata = MetaData(bind=engine)
# Create a table object reflecting the AiQuery table
AiQuery = Table('AiQuery', metadata, autoload_with=engine)
with engine.connect() as conn:
# Query the AiQuery table
pagesize = 4
select_stmt = select(AiQuery).order_by(AiQuery.columns.ID.desc()).limit(pagesize).offset((page - 1) * pagesize)
result_proxy = conn.execute(select_stmt)
# Convert the LegacyRow objects into dicts
result_data = [dict(row) for row in result_proxy.fetchall()]
# Format the returned data as needed
messages = []
for data in result_data:
messages.append({
"text": data['QueryStr'],
"time": data['UsedTime'],
"records": data['ResultRecords'],
"timestamp": data['CreateDate'].strftime("%Y-%m-%d %H:%M"), # 将日期转换为字符串
"id": data['ID']
})
return jsonify(messages)
@app.route('/api/query', methods=['POST'])
def do_query():
data = request.get_json()
query = data.get('query')
start_time = time.time() # record the query start time
engine = create_engine(f"mssql+pymssql://{user_name}:{urlquote(psw)}@{ip}/{database}")
metadata = MetaData(bind=engine)
# Create a table object reflecting the AiQuery table
AiQuery = Table('AiQuery', metadata, autoload_with=engine)
inserted_id = None # holds the ID of the newly inserted row
with engine.connect() as conn:
# Check whether this query already exists in the AiQuery table
select_stmt = select(AiQuery).where(AiQuery.columns.QueryStr == query)
result_proxy = conn.execute(select_stmt)
print(select_stmt.compile(compile_kwargs={"literal_binds": True}))
# If a cached result exists, return it directly
row = result_proxy.fetchone()
print(row)
if row:
# Get the SQL query string from the AiQuery table
sql_query = row.SQLStr
# Execute the query string and fetch the results
result_proxy = conn.execute(sql_query)
# Convert the LegacyRow objects into dicts
result_data = [dict(row) for row in result_proxy.fetchall()]
column_names = list(result_proxy.keys())
sorted_result_data = []
for resrow in result_data:
sorted_row = [resrow[column_name] for column_name in column_names]
sorted_result_data.append(sorted_row)
result_data = sorted_result_data
column_names = get_chinese_col_names(column_names, conn)
inserted_id = row.ID # get the ID of the existing row
used_time = round(time.time() - start_time, 2) # compute the elapsed query time
refresh_date = datetime.datetime.now() # get the current time
result_records = len(result_data) # get the number of result records
isAdd = False
# Update the RefreshDate, ResultRecords and UsedTime fields
update_stmt = AiQuery.update(). \
where(AiQuery.columns.ID == inserted_id). \
values(RefreshDate=refresh_date, ResultRecords=result_records, UsedTime=used_time,
Result_data=json.dumps(result_data, cls=DecimalEncoder))
conn.execute(update_stmt)
else:
# If there is no cached result, run the GPT query
result = custom_db_chain(query)
sql_query = result['result']
sql_query = add_table_prefix_to_columns(sql_query, custom_table_info)
print(sql_query)
result_proxy = conn.execute(sql_query)
# Convert the LegacyRow objects into dicts
result_data = [dict(row) for row in result_proxy.fetchall()]
column_names = list(result_proxy.keys())
sorted_result_data = []
for resrow in result_data:
sorted_row = [resrow[column_name] for column_name in column_names]
sorted_result_data.append(sorted_row)
result_data = sorted_result_data
column_names = get_chinese_col_names(column_names, conn)
used_time = round(time.time() - start_time, 2) # compute the elapsed query time
result_records = len(result_data) # get the number of result records
# Save the result to the AiQuery table, including the new fields
insert_stmt = AiQuery.insert().values(QueryStr=query, SQLStr=sql_query, Column_names=json.dumps(column_names, cls=DecimalEncoder), Result_data=json.dumps(result_data, cls=DecimalEncoder), ResultRecords=result_records, UsedTime=used_time)
result = conn.execute(insert_stmt)
inserted_id = result.inserted_primary_key[0] # get the ID of the newly inserted row
isAdd = True
return jsonify({'isAdd': isAdd, 'column_names': column_names, 'result_data': result_data, 'inserted_id': inserted_id})
if __name__ == '__main__':
app.run(debug=True)
tree = ttk.Treeview(root)
tree.pack(padx=20, pady=20, fill=tk.X)
def on_click():
query = entry.get()
result = custom_db_chain(query)
sql_query = result['result']
engine = create_engine(f"mssql+pymssql://{user_name}:{urlquote(psw)}@{ip}/{database}")
with engine.connect() as conn:
result_proxy = conn.execute(sql_query)
# Likewise, convert the LegacyRow objects into dicts
result_data = [dict(row) for row in result_proxy.fetchall()]
column_names = list(result_proxy.keys())
column_names = get_chinese_col_names(column_names, conn)
# New: clear the Treeview
for item in tree.get_children():
tree.delete(item)
# New: display the query results
display_table(tree, column_names, result_data)
# New: display the query results
# display_table(tree, [header["label"] for header in dataTable["headers"]], dataTable["contents"])
# except Exception as err:
# print("Error occurred:", err)
button = ttk.Button(root, text="Chat", command=on_click)
button.pack(padx=20, pady=20)
text = tk.Text(root, height=10, width=60, font=("Arial", 14))
text.pack(padx=20, pady=20)
root.mainloop()
```
| Handling Errors in SQL Statements That Involve Multiple Table Joins | https://api.github.com/repos/langchain-ai/langchain/issues/4832/comments | 2 | 2023-05-17T04:01:52Z | 2024-02-07T19:07:29Z | https://github.com/langchain-ai/langchain/issues/4832 | 1,713,063,873 | 4,832 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain Version: 0.0.170
Platform: Linux X86_64
Python: 3.9
### Who can help?
@SimFG
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behaviour:
```python
from gptcache import Cache
from gptcache.adapter.api import init_similar_cache
from langchain.cache import GPTCache
import langchain
from langchain.llms import OpenAI
# Avoid multiple caches using the same file, causing different llm model caches to affect each other
def init_gptcache(cache_obj: Cache, llm: str):
init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm}")
langchain.llm_cache = GPTCache(init_gptcache)
llm = OpenAI(model_name="text-davinci-002", temperature=0.2)
llm("tell me a joke")
print("cached:", langchain.llm_cache.lookup("tell me a joke", llm_string))
# cached: None
```
the cache doesn't hit
### Expected behavior
The GPTCache lookup should return a hit. | GPTCache keeps creating new gptcache cache_obj | https://api.github.com/repos/langchain-ai/langchain/issues/4830/comments | 0 | 2023-05-17T03:26:37Z | 2023-05-18T16:42:38Z | https://github.com/langchain-ai/langchain/issues/4830 | 1,713,035,478 | 4,830 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version 0.0.171
python version 3.9.13
macos
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is a problem with the generative agents.
To reproduce please follow the tutorial outlines here:
https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html
When you get to the following line of code you will get an error:
`print(tommie.get_summary(force_refresh=True))`
```
File ~/.pyenv/versions/3.9.13/lib/python3.9/site-packages/langchain/retrievers/time_weighted_retriever.py:14, in _get_hours_passed(time, ref_time)
12 def _get_hours_passed(time: datetime.datetime, ref_time: datetime.datetime) -> float:
13 """Get the hours passed between two datetime objects."""
---> 14 return (time - ref_time).total_seconds() / 3600
TypeError: unsupported operand type(s) for -: 'datetime.datetime' and 'NoneType'
```
### Expected behavior
The ref time should be a datetime and tommies summary should be printed. | TypeError: unsupported operand type(s) for -: 'datetime.datetime' and 'NoneType' | https://api.github.com/repos/langchain-ai/langchain/issues/4825/comments | 7 | 2023-05-17T02:24:24Z | 2023-05-22T22:47:05Z | https://github.com/langchain-ai/langchain/issues/4825 | 1,712,990,151 | 4,825 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain 0.0.171, Python 3.10.10
running the code:
index = pinecone.Index('ssk')
print(index.describe_index_stats())
vectorstore = Pinecone(index=index, embedding_function=OpenAIEmbeddings.embed_query, text_key='text')
documents = vectorstore.similarity_search('How can several llama_indexes be composed?')
print(index.describe_index_stats()) gives the following
{'dimension': 1536,
'index_fullness': 0.0,
'namespaces': {'': {'vector_count': 335}},
'total_vector_count': 335}
but gives an error in the vectorstore.similarity_search call
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[46], line 4
2 print(index.describe_index_stats())
3 vectorstore = Pinecone(index=index, embedding_function=OpenAIEmbeddings.embed_query, text_key='text')
----> 4 documents = vectorstore.similarity_search('How can several llama_indexes be composed?')
File ~/anaconda3/envs/langchain_play/lib/python3.10/site-packages/langchain/vectorstores/pinecone.py:155, in Pinecone.similarity_search(self, query, k, filter, namespace, **kwargs)
136 def similarity_search(
137 self,
138 query: str,
(...)
142 **kwargs: Any,
143 ) -> List[Document]:
144 """Return pinecone documents most similar to query.
145
146 Args:
(...)
153 List of Documents most similar to the query and score for each
154 """
--> 155 docs_and_scores = self.similarity_search_with_score(
156 query, k=k, filter=filter, namespace=namespace, **kwargs
157 )
158 return [doc for doc, _ in docs_and_scores]
File ~/anaconda3/envs/langchain_play/lib/python3.10/site-packages/langchain/vectorstores/pinecone.py:115, in Pinecone.similarity_search_with_score(self, query, k, filter, namespace)
113 if namespace is None:
114 namespace = self._namespace
--> 115 query_obj = self._embedding_function(query)
116 docs = []
117 results = self._index.query(
118 [query_obj],
119 top_k=k,
(...)
122 filter=filter,
123 )
TypeError: OpenAIEmbeddings.embed_query() missing 1 required positional argument: 'text'
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
run code like:
index = pinecone.Index('ssk')
print(index.describe_index_stats())
vectorstore = Pinecone(index=index, embedding_function=OpenAIEmbeddings.embed_query, text_key='text')
documents = vectorstore.similarity_search('How can several llama_indexes be composed?')
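As an aside (an editor's note, not part of the original report): the traceback points at the embedding function being passed as an unbound class method; a sketch with a bound instance method would be:
```python
embeddings = OpenAIEmbeddings()
vectorstore = Pinecone(index=index, embedding_function=embeddings.embed_query, text_key='text')
documents = vectorstore.similarity_search('How can several llama_indexes be composed?')
```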
### Expected behavior
for a valid pinecone index, expect documents to be populated without error | pinecone.similarity_search -> TypeError: OpenAIEmbeddings.embed_query() missing 1 required positional argument: 'text' | https://api.github.com/repos/langchain-ai/langchain/issues/4821/comments | 4 | 2023-05-17T01:03:52Z | 2024-03-23T14:37:37Z | https://github.com/langchain-ai/langchain/issues/4821 | 1,712,933,277 | 4,821 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.171, a GPT-4 model in one region, text-embedding-ada-002 in another
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create an embedding on ADA002 in a region, with os.environ settings :
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_KEY"] = ""
os.environ["OPENAI_API_BASE"] = ""
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
embeddings = OpenAIEmbeddings(model="")
text = "This is a test document."
embeddings.embed_query(text)
this works.
If I try to add an LLM for later doc retrieval, like this, I get the following exception:
Create an embedding on ADA002 in a region, with os.environ settings :
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_KEY"] = ""
os.environ["OPENAI_API_BASE"] = ""
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
embeddings = OpenAIEmbeddings(model="")
llm = AzureChatOpenAI(
openai_api_key = "",
openai_api_base = "",
model_name=""
)
#result = llm([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
#print(result)
text = "This is a test document."
embeddings.embed_query(text)
Exception has occurred: InvalidRequestError
The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.
### Expected behavior
I should be able to have two distinct GPT and ADA deployments that are not on the same API base. | Resource does not exist when using both OpenAIEmbeddings and AzureChatOpenAI in two different Azure regions/endpoints | https://api.github.com/repos/langchain-ai/langchain/issues/4819/comments | 6 | 2023-05-16T23:36:25Z | 2023-11-03T08:33:02Z | https://github.com/langchain-ai/langchain/issues/4819 | 1,712,871,337 | 4,819 |
[
"hwchase17",
"langchain"
]
| ### Discussed in https://github.com/hwchase17/langchain/discussions/4817
<div type='discussions-op-text'>
<sup>Originally posted by **markanth** May 16, 2023</sup>
Under Use Cases -> Code Understanding, you will find:
The full tutorial is available below.
[Twitter the-algorithm codebase analysis with Deep Lake](https://python.langchain.com/en/latest/use_cases/code/twitter-the-algorithm-analysis-deeplake.html): A notebook walking through how to parse github source code and run queries conversation.
[LangChain codebase analysis with Deep Lake](https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html): A notebook walking through how to analyze and do question answering over THIS code base.
In both full tutorials, I think that this line:
model = ChatOpenAI(model='gpt-3.5-turbo') # switch to 'gpt-4'
should be:
model = ChatOpenAI(model_name='gpt-3.5-turbo')
(model_name instead of model)
</div> | Typo in DeepLake Code Analysis Tutorials | https://api.github.com/repos/langchain-ai/langchain/issues/4818/comments | 0 | 2023-05-16T22:21:09Z | 2023-05-17T15:52:24Z | https://github.com/langchain-ai/langchain/issues/4818 | 1,712,813,069 | 4,818 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The LlamaCpp wrapper doesn't implement the get_num_tokens function, which therefore falls back to a GPT-2 tokenizer and returns a wrong token count.
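A minimal sketch of a possible interim workaround, assuming the wrapper keeps the underlying `llama_cpp.Llama` instance on its `client` attribute:
```python
from langchain.llms import LlamaCpp

class LlamaCppWithTokenCount(LlamaCpp):
    def get_num_tokens(self, text: str) -> int:
        # Tokenize with the actual llama.cpp model instead of the GPT-2 fallback.
        return len(self.client.tokenize(text.encode("utf-8")))
```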
### Motivation
-
### Your contribution
- | Implement get_num_tokens in LlamaCpp | https://api.github.com/repos/langchain-ai/langchain/issues/4815/comments | 1 | 2023-05-16T21:24:51Z | 2023-09-10T16:17:04Z | https://github.com/langchain-ai/langchain/issues/4815 | 1,712,756,362 | 4,815 |
[
"hwchase17",
"langchain"
]
| ### System Info
Just working my way through the AutoGPT instructions here https://python.langchain.com/en/latest/use_cases/autonomous_agents/autogpt.html
and getting the error:
---------------------------------------------------------------------------
```
ModuleNotFoundError Traceback (most recent call last)
Cell In[11], line 4
2 embeddings_model = OpenAIEmbeddings()
3 # Initialize the vectorstore as empty
----> 4 import faiss
6 embedding_size = 1536
7 index = faiss.IndexFlatL2(embedding_size)
ModuleNotFoundError: No module named 'faiss'
```
Will update this ticket if I can figure it out. We already have this line further up the code:
`from langchain.vectorstores import FAISS`
**Edit**: Ok so I see that there's such a thing as a pip module called faiss.
However doing pip install faiss gives me:
```
ERROR: Could not find a version that satisfies the requirement faiss (from versions: none)
ERROR: No matching distribution found for faiss
```
**Edit 2**: Ah ok - for Windows users you have to install the CPU version of faiss. See here: https://github.com/facebookresearch/faiss/blob/main/INSTALL.md
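In other words, on a CPU-only setup the package name to install is `faiss-cpu` (the bare `faiss` name has no wheels on PyPI):
```
pip install faiss-cpu
```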
### Who can help?
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just follow the steps in the tutorial
### Expected behavior
Not sure - perhaps it was meant to be instantiating the FAISS class?
| No module named 'faiss' | https://api.github.com/repos/langchain-ai/langchain/issues/4810/comments | 2 | 2023-05-16T19:45:26Z | 2023-09-10T16:17:09Z | https://github.com/langchain-ai/langchain/issues/4810 | 1,712,642,408 | 4,810 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: v0.0.170
Platform: Linux/Debian
python: 3.9.2
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Have an api endpoint configured such that its parameters have both query parameters and headers

### Expected behavior
Given the endpoint is something like https://example.com/api, the agent is trying to hit the endpoint in the following way
https://example.com/api?Authorization=<token>&ph-org-code=<xxx>&ph-org-type=<xxx>&status=active
Here Authorization, ph-org-code, and ph-org-type are headers as defined in the spec, but they are passed as query parameters in the
URL. I also used RequestWrapper to wrap the above 3 headers separately and provided them when creating the OpenAPI agent, but the agent executor still does not use those values.
| OpenAPI agent treating 'headers' as query parameters for any endpoint in the openapi spec | https://api.github.com/repos/langchain-ai/langchain/issues/4807/comments | 1 | 2023-05-16T18:45:06Z | 2023-09-10T16:17:15Z | https://github.com/langchain-ai/langchain/issues/4807 | 1,712,558,489 | 4,807 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
How can one use SelfQueryRetriever to query by `datetime`? I've added a `datetime` object as a string to the metadata; I'm not sure if that is right or if we should use a `timestamp`, but things got weird with this prompt:
```
I want to watch a movie rated higher than 8.5 and released today
```
```json
{
"query": "",
"filter": "and(gt(\"rating\", 8.5), eq(\"released\", \"today\"))"
}
```
I'm not sure how to instruct langchain to convert today to a datetime/str.
Is there a from/to AttributeInfo so we can convert when saving and loading from the vectorstore?
Long story short, how would you guys approach this scenario?
Thanks in advance.
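Not an authoritative answer, but one sketch of a common workaround: store the date as a plain number (e.g. a Unix timestamp) in the metadata and say so in the `AttributeInfo` description, so the structured query uses numeric comparisons rather than strings like "today" (the field names and timestamp convention below are assumptions):
```python
from langchain.chains.query_constructor.base import AttributeInfo

metadata_field_info = [
    AttributeInfo(
        name="released",
        description="Release date of the movie as a Unix timestamp (seconds since epoch)",
        type="integer",
    ),
    AttributeInfo(name="rating", description="A 1-10 rating for the movie", type="float"),
]
```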
### Suggestion:
_No response_ | Question: How to use datetime type with SelfQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/4801/comments | 4 | 2023-05-16T17:49:50Z | 2023-11-10T19:38:14Z | https://github.com/langchain-ai/langchain/issues/4801 | 1,712,469,831 | 4,801 |
[
"hwchase17",
"langchain"
]
| ### Feature request
`langchain.llms.LlamaCpp` wraps around `llama_cpp`, which recently added a `n_gpu_layers` argument. It would be great to have it in the wrapper.
Current workaround:
```
llm = LlamaCpp(...)
state = llm.client.__getstate__()
state["n_gpu_layers"] = n_gpu_layers
llm.client.__setstate__(state)
```
### Motivation
-
### Your contribution
- | Add `n_gpu_layers` arg to langchain.llms.LlamaCpp | https://api.github.com/repos/langchain-ai/langchain/issues/4797/comments | 1 | 2023-05-16T16:16:25Z | 2023-05-16T16:18:38Z | https://github.com/langchain-ai/langchain/issues/4797 | 1,712,335,442 | 4,797 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I propose a tool that can extract the content of each section from one `.tex` file or a latex project with multiple `.tex` files. Moreover, the tool is able to filter the unrequired contents like figure blocks, labels and comments and output the resulting contents in the form of a python dict as `{<section name>: <content>}`. With this tool, we can extract only the "introduction", "related works" and "conclusion" part of a paper and shorten the contents by filtering, which is beneficial for effective summary.
We can do the same thing to pdf files with no bookmarks based on [science-parse](https://github.com/allenai/science-parse), which can be set up as a docker server and we will only need an API to use it. It takes pdf as input and outputs the metadata and the division of sections in json form. So I propose an API wrapper for that in order to make use of this powerful tool.
### Motivation
The original `langchain.text_splitter.LatexTextSplitter` cannot handle multiple .tex files, while it cannot filter some contents that are not required for text analysis, like comments or figure blocks. Since many source files we download from arxiv.org will be a compressed project that has multiple `.tex` files with a `main.tex` that can link them together, we need a way to deal with them. Moreover, when dealing with the source files, some latex blocks are not necessary for text analysis, like figures and comments. By filtering them, we can shorten the contents and reduce the work of LLMs.
Moreover, when loading pdf with no bookmarks, we cannot seperate sections of them and be forced to use all of them at once. This may not be efficient when it comes to scenarios like `summarization`. So we may need to have a tool that can divide the pdf file without prior input like bookmarks.
### Your contribution
I want to create a PR for [document_loaders](https://github.com/hwchase17/langchain/tree/master/langchain/document_loaders) so there can be a way to load a latex project downloaded from arxiv.org in the form of `tar.gz` or`zip` . Then I want to create a PR for [text_splitter](https://github.com/hwchase17/langchain/blob/master/langchain/text_splitter.py) so I can implement the filtering and extraction for the latex file(s) I obtain from the `document_loaders`.
I also want to create an API wrapper for science-parse in the same file which can takes the pdf files as input directly by `pathlib.Path` in the `text_splitter` as another splitting function.
| A tool that can extract and divide sections from one or more .tex and pdf files | https://api.github.com/repos/langchain-ai/langchain/issues/4792/comments | 1 | 2023-05-16T15:34:27Z | 2023-09-17T17:17:55Z | https://github.com/langchain-ai/langchain/issues/4792 | 1,712,267,546 | 4,792 |
[
"hwchase17",
"langchain"
]
| ### Feature request
When you call the `add_texts` and `add_documents` methods on a Weaviate instance, it always generates UUIDs for you, which is a neat feature
https://github.com/hwchase17/langchain/blob/bee136efa4393219302208a1a458d32129f5d539/langchain/vectorstores/weaviate.py#L137
However, there are specific use cases where you want to generate UUIDs yourself and pass them via `add_texts` and `add_documents`.
Therefore, I'd like to support a `uuids` field in the `kwargs` argument to these methods, and use those values instead of generating new ones inside them.
### Motivation
Both `add_texts` and `add_documents` methods internally call [batch.add_data_object](https://weaviate-python-client.readthedocs.io/en/stable/weaviate.batch.html#weaviate.batch.Batch.add_data_object) method of a Weaviate client.
The document states as below:
> Add one object to this batch. NOTE: If the UUID of one of the objects already exists then the existing object will be replaced by the new object.
This behavior is extremely useful when you need to update and delete documents based on a known field of the document.
First of all, Weaviate expects UUIDv3 and UUIDv5 as UUID formats. You can find the information below:
https://weaviate.io/developers/weaviate/more-resources/faq#q-are-there-restrictions-on-uuid-formatting-do-i-have-to-adhere-to-any-standards
And UUIDv5 always generates the same value for a given input string, much like a hash function.
https://docs.python.org/2/library/uuid.html
Let's say you have a unique identifier for the document and use it to generate your own UUID.
This way you can directly update, delete, or replace documents without first searching for them by metadata.
This saves time, code, network bandwidth, and compute resources.
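For illustration, deterministic UUIDs of that kind can be generated with Python's standard `uuid` module (the document identifier below is made up):
```python
import uuid

# UUIDv5 is deterministic: the same document identifier always yields the same UUID,
# so re-ingesting a document replaces the existing Weaviate object instead of duplicating it.
doc_id = "orders/2023/invoice-42"  # hypothetical unique identifier
weaviate_uuid = str(uuid.uuid5(uuid.NAMESPACE_DNS, doc_id))
```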
### Your contribution
I'm attempting to make a PR, | Accept UUID list as an argument to add texts and documents into Weaviate vectorstore | https://api.github.com/repos/langchain-ai/langchain/issues/4791/comments | 0 | 2023-05-16T15:31:48Z | 2023-05-16T22:26:48Z | https://github.com/langchain-ai/langchain/issues/4791 | 1,712,263,240 | 4,791 |
[
"hwchase17",
"langchain"
]
| ### System Info
I ran the below code
`output = agent.run("what is Grass Type which pokemnon has highest speed and lowest speed?")`
The above code gave the below output
```
> Entering new AgentExecutor chain...
Thought: I need to find the pokemon with the highest and lowest speed that are of type Grass
Action: python_repl_ast
Action Input: df[df['Type 1'] == 'Grass'][['Name', 'Speed']].sort_values('Speed')
Observation: Name Speed
658 Ferroseed 10
651 Foongus 15
659 Ferrothorn 20
207 Sunflora 30
511 AbomasnowMega Abomasnow 30
.. ... ...
556 Serperior 113
607 Whimsicott 116
274 Sceptile 120
551 ShayminSky Forme 127
275 SceptileMega Sceptile 145
[70 rows x 2 columns]
Thought: I now know the pokemon with the highest and lowest speed that are of type Grass
Final Answer: The Grass Type pokemon with the highest speed is SceptileMega Sceptile with 145 speed, and the Grass Type pokemon with the lowest speed is Ferroseed with 10 speed.
> Finished chain.
```
But I don't need the complete output. I only need the text after `Final Answer:`, i.e. "The Grass Type pokemon with the highest speed is SceptileMega Sceptile with 145 speed, and the Grass Type pokemon with the lowest speed is Ferroseed with 10 speed."
How to get this output? Any ideas?
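For what it's worth (an editor's sketch, not from the original report): `agent.run()` already returns only the final answer; the chain-of-thought shown above is verbose logging printed to stdout, not part of the return value.
```python
# The Thought/Action/Observation lines are printed because verbose=True;
# run() returns just the text that follows "Final Answer:".
answer = agent.run("what is Grass Type which pokemnon has highest speed and lowest speed?")
print(answer)

# To suppress the intermediate log entirely, create the agent with verbose=False.
```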
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
output = agent.run("what is Grass Type which pokemnon has highest speed and lowest speed?")
### Expected behavior
I'm just looking to filter out the output content. | How to return the text which is after Finished Chain or Final Answer? | https://api.github.com/repos/langchain-ai/langchain/issues/4783/comments | 13 | 2023-05-16T13:31:55Z | 2024-02-14T03:35:22Z | https://github.com/langchain-ai/langchain/issues/4783 | 1,712,039,344 | 4,783 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi guys,
It is my understanding that for GPT-4 we have to use the ChatOpenAI API. Due to the more restrictive rate limit for GPT-4, the use of map_reduce chains seems very limited.
### Suggestion:
Provide a configurable batch_size - like in #1073 - for the ChatOpenAI api | ChatOpenAI: Number of parallel jobs in the MapReduce chain | https://api.github.com/repos/langchain-ai/langchain/issues/4782/comments | 2 | 2023-05-16T13:29:08Z | 2023-10-16T14:08:56Z | https://github.com/langchain-ai/langchain/issues/4782 | 1,712,034,290 | 4,782 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add GPT4All chat model integration to Langchain
### Motivation
I am building a chatbot using LangChain and the OpenAI chat model. However, I have seen that around version 0.0.130 LangChain added an integration with GPT4All as an LLM provider. I would like to know if there is any intention to add a GPT4All chat model to LangChain in the near future. I would like to build the chatbot using LLMs stored locally.
### Your contribution
I have been going through all commits in order to upgrade from my local langchain version to the new one so I might be able to help a little bit if needed | GPT4All Chat Model Integration | https://api.github.com/repos/langchain-ai/langchain/issues/4779/comments | 6 | 2023-05-16T10:51:16Z | 2023-12-19T00:50:53Z | https://github.com/langchain-ai/langchain/issues/4779 | 1,711,766,113 | 4,779 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The Airbyte loader should place some separator token between attributes from different records in the final document, to help the LLM associate the right attributes with each other.
### Motivation
The Airbyte loader loads data from the Airbyte local JSON destination into documents. As Airbyte's atomic unit is a record in the form of a JSON object, the Airbyte loader stringifies these into the form of `key: value\n`. However, if there are a lot of records, the final document looks like this:
Raw data
```
{"_airbyte_ab_id":"f0bcb1da-baaa-4f09-b210-68fa5747ad7c","_airbyte_emitted_at":1684226166938,"_airbyte_data":{"id":91,"make":"Pontiac","model":"Vibe","year":2006,"price":12134,"created_at":"2021-01-11T22:30:14+00:00"}}
{"_airbyte_ab_id":"cde6ea19-3f93-4f7a-9042-f5836ca752ac","_airbyte_emitted_at":1684226166938,"_airbyte_data":{"id":92,"make":"Volkswagen","model":"Eos","year":2011,"price":53128,"created_at":"2021-01-12T23:25:06+00:00"}}
{"_airbyte_ab_id":"dfbc15a5-bcb7-4676-8615-6341d29b21d3","_airbyte_emitted_at":1684226166939,"_airbyte_data":{"id":93,"make":"Mazda","model":"Mazdaspeed6","year":2007,"price":90902,"created_at":"2021-12-29T14:29:03+00:00"}}
```
Document:
```
id: 91
make: Pontiac
model: Vibe
year: 2006
price: 12134
created_at: 2021-01-11T22:30:14+00:00
id: 92
make: Volkswagen
model: Eos
year: 2011
price: 53128
created_at: 2021-01-12T23:25:06+00:00
id: 93
make: Mazda
model: Mazdaspeed6
year: 2007
price: 90902
created_at: 2021-12-29T14:29:03+00:00
```
Running a `RetrievalQA` on this document asking for `How much is a Volkswagen Eos?`, the final answer is `The price of a Volkswagen Eos is 12134` which is wrong (it's the price of the Pontiac right above it, but that's hard to tell from the list of attributes)
Adding a separator between the records, the document would look like this:
```
id: 91
make: Pontiac
model: Vibe
year: 2006
price: 12134
created_at: 2021-01-11T22:30:14+00:00
-end of record-
id: 92
make: Volkswagen
model: Eos
year: 2011
price: 53128
created_at: 2021-01-12T23:25:06+00:00
-end of record-
id: 93
make: Mazda
model: Mazdaspeed6
year: 2007
price: 90902
created_at: 2021-12-29T14:29:03+00:00
```
The same chain and question now gives the final answer `The price of a Volkswagen Eos is 53128.` which is correct.
Alternatively, we could completely change the stringification strategy here and, instead of producing key-value pairs, serialize the array of records as YAML (a small sketch follows the list below):
* Simple to do as there are libs for that
* Still little overhead for structural tokens (way less than JSON)
* Also has record-separators (the `-` and indentation)
* LLMs know how YAML works so it's probably beneficial for interpreting structure in complex nested records
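A rough sketch of that alternative, using PyYAML (an assumption; any YAML library would do) on the example records from above:
```python
import yaml  # PyYAML, assumed to be available

records = [
    {"id": 91, "make": "Pontiac", "model": "Vibe", "year": 2006, "price": 12134},
    {"id": 92, "make": "Volkswagen", "model": "Eos", "year": 2011, "price": 53128},
    {"id": 93, "make": "Mazda", "model": "Mazdaspeed6", "year": 2007, "price": 90902},
]

# Each record starts with "- ", so record boundaries stay visible to the LLM
# while the structural overhead remains far below JSON.
page_content = yaml.dump(records, sort_keys=False, allow_unicode=True)
print(page_content)
```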
### Your contribution
Happy to put together a PR for this, both options explained above are simple to do. | Improve Airbyte loader to help LLM differentiate entities | https://api.github.com/repos/langchain-ai/langchain/issues/4776/comments | 5 | 2023-05-16T08:49:32Z | 2023-09-19T16:10:46Z | https://github.com/langchain-ai/langchain/issues/4776 | 1,711,556,396 | 4,776 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.169
### Who can help?
@hwchase17
@ekzh
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import os
import langchain
import openai
from langchain.llms import AzureOpenAI
from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
llmconfig = {
"openai_api_key": "<secret>",
"openai_api_base": "https://myllm.openai.azure.com/",
"deployment_name": "davinci",
}
chatconfig = {
"model_name": "gpt-35-turbo",
"openai_api_type": "azure",
"openai_api_version": "chatVERSION",
"openai_api_key": "<secret>",
"openai_api_base": "https://mychat.openai.azure.com/",
"deployment_name": "gpt-35-turbo",
}
embedderconfig = {
"openai_api_key": "<secret>",
"model": "ada",
"openai_api_base": "https://myembedder.openai.azure.com/",
"openai_api_version": "embedderVERSION",
"deployment": "ada",
}
# First time
llm = AzureOpenAI(**llmconfig)
print(openai.api_version)
chat = AzureChatOpenAI(**chatconfig)
print(openai.api_version)
embedder = OpenAIEmbeddings(**embedderconfig)
print(openai.api_version)
print("\n")
# Second time
llm = AzureOpenAI(**llmconfig)
print(openai.api_version)
chat = AzureChatOpenAI(**chatconfig)
print(openai.api_version)
embedder = OpenAIEmbeddings(**embedderconfig)
print(openai.api_version)
```
This code will return the following:
```
None
chatVERSION
embedderVERSION
embedderVERSION
chatVERSION
embedderVERSION
```
### Expected behavior
The LangChain classes should not alter the global openai module values, because this could cause conflicts when multiple classes are using those.
For example, conflicts occur when the Chat/Completion API and the Embeddings API use different `api_version` values,
or when using Chat/Completion from Azure and Embeddings from OpenAI: because the classes share the same openai global values, the behaviour becomes unexpected and depends on the order of operations.
Related issues:
#2683
#4352
Related PR:
https://github.com/hwchase17/langchain/pull/4234
https://github.com/pieroit/cheshire-cat/pull/195
Related code:
https://github.com/hwchase17/langchain/blob/a7af32c274860ee9174830804301491973aaee0a/langchain/chat_models/azure_openai.py#L79-L87
and
https://github.com/hwchase17/langchain/blob/a7af32c274860ee9174830804301491973aaee0a/langchain/embeddings/openai.py#L166-L178 | LangChain classes share openai global values | https://api.github.com/repos/langchain-ai/langchain/issues/4775/comments | 14 | 2023-05-16T08:48:53Z | 2023-06-13T18:15:13Z | https://github.com/langchain-ai/langchain/issues/4775 | 1,711,555,266 | 4,775 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Does this version also support faiss-gpu, or only the CPU build of FAISS? Maybe we could also add other embedding search tools such as Annoy and hnswlib.
### Motivation
Add more embedding search tools: FAISS, Annoy, hnswlib.
### Your contribution
support more embedding search tools | this version also supprot faiss-gpu version | https://api.github.com/repos/langchain-ai/langchain/issues/4773/comments | 1 | 2023-05-16T08:11:11Z | 2023-09-10T16:17:19Z | https://github.com/langchain-ai/langchain/issues/4773 | 1,711,486,994 | 4,773 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Context:
I'm trying to chat with my dataset of customer reviews from a restaurant.
I would like to have the LLM make a summary for every single store individually. I found it difficult to generate the expected output using any type of chain, so as an alternative I preprocess my dataset before ingesting it.
I save the reviews as one text file per store (there are around 20 stores, so I created 20 text files, one per store).
Then I embedded the 20 files into one vector DB, with code as below:
chain = RetrievalQAWithSourcesChain.from_chain_type(
llm=model,
chain_type="stuff",
retriever=db.as_retriever(),
chain_type_kwargs=chain_type_kwargs,
reduce_k_below_max_tokens=True
)
My prompt is something like "make a summary of customer reviews per store", but only 4 stores got a summary; I guess only 4 documents were returned as context? Is there any way, with one single prompt, to instruct the LLM to generate summaries for all 20 stores? Thanks.
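Not a definitive fix, but one thing worth checking (an editor's sketch): the retriever returns only 4 documents by default, which would explain why only 4 stores show up; the `k` value below is an assumption based on the ~20 per-store files.
```python
# Retrieve (up to) all 20 per-store documents instead of the default k=4.
retriever = db.as_retriever(search_kwargs={"k": 20})

chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=model,
    chain_type="map_reduce",  # map_reduce summarizes each retrieved file before combining
    retriever=retriever,
)
```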
### Suggestion:
_No response_ | Challange when using Langchain for customer review analysis. | https://api.github.com/repos/langchain-ai/langchain/issues/4772/comments | 7 | 2023-05-16T08:06:51Z | 2023-09-19T16:10:51Z | https://github.com/langchain-ai/langchain/issues/4772 | 1,711,479,747 | 4,772 |
[
"hwchase17",
"langchain"
]
| ### System Info
windows 10
### Who can help?
@vowelparrot @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I just followed the fake LLM tutorial:
https://python.langchain.com/en/latest/modules/models/llms/examples/fake_llm.html
My code is as follows:
from langchain.llms.fake import FakeListLLM
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
tools = load_tools(["python_repl"])
responses=[
"Action: Python REPL\nAction Input: print(2 + 2)",
"Final Answer: 4"
]
llm = FakeListLLM(responses=responses)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("whats 2 + 2")
### Expected behavior
The code works like the official tutorial. | KeyError: 'tools' when initialize_agent with python_repl tool | https://api.github.com/repos/langchain-ai/langchain/issues/4769/comments | 6 | 2023-05-16T06:44:22Z | 2023-09-19T16:10:56Z | https://github.com/langchain-ai/langchain/issues/4769 | 1,711,345,791 | 4,769 |
[
"hwchase17",
"langchain"
]
| ### System Info
env python == 3.10.10
langchain==0.0.170
mysql==5.7
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
db_chain = SQLDatabaseSequentialChain.from_llm(llm,
db,
verbose=True,
return_direct=True,
use_query_checker=True,
return_intermediate_steps=True)
with get_openai_callback() as cb:
restult=db_chain("New energy vehicle sales in 2022?")
print(restult)
print(cb)
```
Expected behavior
Entering new SQLDatabaseSequentialChain chain...
Table names to use:
['t_passenger_car_monthly_sales']
Entering new SQLDatabaseChain chain
New energy vehicle sales in 2022?
SQLQuery:The original query seems correct and does not contain any of the common mistakes listed. Therefore, the original query is:
SELECT SUM(monthly_retail_sales) AS total_sales FROM t_passenger_car_monthly_sales WHERE passenger_car_type = 'passenger_car' AND yearly = 2022
---------------------------------------------------------------------------
ProgrammingError Traceback (most recent call last)
File [~/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sqlalchemy/engine/base.py:1900] in Connection._execute_context(self, dialect, constructor, statement, parameters, execution_options, *args, **kw)
1899 if not evt_handled:
-> 1900 self.dialect.do_execute(
1901 cursor, statement, parameters, context
1902 )
1904 if self._has_events or self.engine._has_events:
File [~/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sqlalchemy/engine/default.py:736], in DefaultDialect.do_execute(self, cursor, statement, parameters, context)
735 def do_execute(self, cursor, statement, parameters, context=None):
--> 736 cursor.execute(statement, parameters)
File [~/opt/anaconda3/envs/py310/lib/python3.10/site-packages/pymysql/cursors.py:158], in Cursor.execute(self, query, args)
156 query = self.mogrify(query, args)
--> 158 result = self._query(query)
159 self._executed = query
File [~/opt/anaconda3/envs/py310/lib/python3.10/site-packages/pymysql/cursors.py:325] in Cursor._query(self, q)
324 self._clear_result()
--> 325 conn.query(q)
326 self._do_get_result()
File [~/opt/anaconda3/envs/py310/lib/python3.10/site-packages/pymysql/connections.py:549], in Connection.query(self, sql, unbuffered)
548 self._execute_command(COMMAND.COM_QUERY, sql)
...
ProgrammingError: (pymysql.err.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'The original query seems correct and does not contain any of the common mistakes' at line 1")
[SQL: The original query seems correct and does not contain any of the common mistakes listed. Therefore, the original query is:
SELECT SUM(monthly_retail_sales) AS total_sales FROM t_passenger_car_monthly_sales WHERE passenger_car_type = 'passenger_car' AND yearly = 2022]
(Background on this error at: https://sqlalche.me/e/14/f405)
| The exception 'SQLDatabaseSequentialChain or SQLDatabaseChain configuration parameter use_query_checker=True' occurred. | https://api.github.com/repos/langchain-ai/langchain/issues/4768/comments | 4 | 2023-05-16T06:26:18Z | 2023-10-30T16:07:03Z | https://github.com/langchain-ai/langchain/issues/4768 | 1,711,325,729 | 4,768 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
_No response_
### Suggestion:
_No response_ | Will the langchain support ChatGLM? | https://api.github.com/repos/langchain-ai/langchain/issues/4766/comments | 4 | 2023-05-16T05:59:26Z | 2023-10-17T16:07:29Z | https://github.com/langchain-ai/langchain/issues/4766 | 1,711,294,010 | 4,766 |
[
"hwchase17",
"langchain"
]
| ### System Info
I just set up a local tracing server and changed the port to 8005.

When I visit localhost:4173, it shows:

and the error is:
```
langchain-langchain-frontend-1 | ➜ Local: http://localhost:4173/
langchain-langchain-frontend-1 | ➜ Network: http://172.18.0.4:4173/
langchain-langchain-backend-1 | INFO: Application startup complete.
langchain-langchain-backend-1 | INFO: Uvicorn running on http://0.0.0.0:8005 (Press CTRL+C to quit)
langchain-langchain-frontend-1 | TypeError: fetch failed
langchain-langchain-frontend-1 | at fetch (/app/node_modules/undici/index.js:105:13)
langchain-langchain-frontend-1 | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
langchain-langchain-frontend-1 | at async fetchData (file:///app/.svelte-kit/output/server/entries/pages/sessions/_page.server.ts.js:7:17)
langchain-langchain-frontend-1 | at async file:///app/.svelte-kit/output/server/index.js:489:86
langchain-langchain-frontend-1 | at async Promise.all (index 0)
langchain-langchain-frontend-1 | at async unwrap_promises (file:///app/.svelte-kit/output/server/index.js:489:9)
langchain-langchain-frontend-1 | at async load_server_data (file:///app/.svelte-kit/output/server/index.js:537:25)
langchain-langchain-frontend-1 | at async file:///app/.svelte-kit/output/server/index.js:1500:18
```
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
run ```langchain-server``` in terminal
### Expected behavior
how to fix this bug? | some thing wrong with tracing | https://api.github.com/repos/langchain-ai/langchain/issues/4762/comments | 2 | 2023-05-16T03:44:42Z | 2023-06-09T10:09:26Z | https://github.com/langchain-ai/langchain/issues/4762 | 1,711,172,396 | 4,762 |
[
"hwchase17",
"langchain"
]
| ### Feature request
My local LAN has restricted network access, so I need to reach api.openai.com through a reverse proxy. How can I change the address the langchain package uses to access ChatGPT to my proxy address?
### Motivation
My local LAN has restricted network access, so I need to reach api.openai.com through a reverse proxy. How can I change the address the langchain package uses to access ChatGPT to my proxy address?
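As a general sketch (not specific to gpt4-pdf-chatbot-langchain, and the proxy URL below is a made-up placeholder), the OpenAI base URL can usually be redirected via the `openai_api_base` setting or the `OPENAI_API_BASE` environment variable:
```python
import os

# Point OpenAI calls at the reverse proxy instead of api.openai.com.
os.environ["OPENAI_API_BASE"] = "https://my-proxy.example.com/v1"  # hypothetical proxy URL

from langchain.llms import OpenAI
llm = OpenAI(openai_api_base="https://my-proxy.example.com/v1")  # or pass it explicitly
```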
### Your contribution
The project I am using is gpt4-pdf-chatbot-langchain. | How can I change the default api.openai.com request address used by the langchain package? I need to access api.openai.com through a proxy. | https://api.github.com/repos/langchain-ai/langchain/issues/4759/comments | 4 | 2023-05-16T01:32:10Z | 2023-12-06T17:46:15Z | https://github.com/langchain-ai/langchain/issues/4759 | 1,711,067,591 | 4,759 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain=0.017 python=3.9.16
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from gptcache import Cache
from gptcache.manager.factory import manager_factory
from gptcache.processor.pre import get_prompt
from langchain.cache import GPTCache
import hashlib
import langchain
# Avoid multiple caches using the same file, causing different llm model caches to affect each other
def get_hashed_name(name):
return hashlib.sha256(name.encode()).hexdigest()
def init_gptcache(cache_obj: Cache, llm: str):
hashed_llm = get_hashed_name(llm)
cache_obj.init(
pre_embedding_func=get_prompt,
data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"),
)
langchain.llm_cache = GPTCache(init_gptcache)
llm("Tell me a joke")
```
### Expected behavior
import hashlib | Have resolved:GPTcache :[Errno 63] File name too long: "similar_cache_[('_type', 'openai'), ('best_of', 2), ('frequency_penalty', 0), ('logit_bias', {}), ('max_tokens', 256), ('model_name', 'text-davinci-002'), ('n', 2), ('presence_penalty', 0), ('request_timeout', None), ('stop', None), ('temperature', 0.7), ('top_p', 1)] | https://api.github.com/repos/langchain-ai/langchain/issues/4757/comments | 1 | 2023-05-16T01:14:21Z | 2023-05-19T23:35:38Z | https://github.com/langchain-ai/langchain/issues/4757 | 1,711,055,714 | 4,757 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Capability to retrieve relevance scores along with documents has been added to VectorStoreRetriever as part of PR #4359.
The search_type == "similarity_score_threshold" alternative is handled in the sync flow (VectorStoreRetriever.get_relevant_documents) but not in the async flow (VectorStoreRetriever.aget_relevant_documents).
This request is to add handling of search_type "similarity_score_threshold" to the VectorStoreRetriever async flow.
### Motivation
The feature is necessary to get relevancy/similarity scores as part of a chatbot using ConversationalRetrievalChain and vector stores in streaming (thus async) mode.
Previous PR implementing search_type == "similarity_score_threshold": https://github.com/hwchase17/langchain/pull/4359
### Your contribution
I can eventually work on this feature and submit a PR after setting up the whole environment (it would be my first PR on this project though) | search_type "similarity_score_threshold" is missing on async aget_relevant_documents | https://api.github.com/repos/langchain-ai/langchain/issues/4756/comments | 2 | 2023-05-16T01:10:03Z | 2023-06-06T12:39:39Z | https://github.com/langchain-ai/langchain/issues/4756 | 1,711,052,281 | 4,756 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I just tried the [gptcache](https://python.langchain.com/en/latest/modules/models/llms/examples/llm_caching.html?highlight=cache#gptcache) example using ChatOpenAI:
```python
import langchain  # added: needed for langchain.llm_cache below
from langchain.chat_models import ChatOpenAI  # added: chat model used in the timed cells
from langchain.schema import HumanMessage  # added: message type used below
from gptcache import Cache
from gptcache.adapter.api import init_similar_cache
from langchain.cache import GPTCache
# Avoid multiple caches using the same file, causing different llm model caches to affect each other
def init_gptcache(cache_obj: Cache, llm):
init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm}")
langchain.llm_cache = GPTCache(init_gptcache)

llm = ChatOpenAI()  # assumed chat model instance, as used in the timed cells below
```
```python
%%time
llm([HumanMessage(content="Translate this sentence from English to Bahasa Indonesia. I love programming.")])
```
```
CPU times: user 30 ms, sys: 1.96 ms, total: 31.9 ms
Wall time: 1.15 s
AIMessage(content='Saya suka pemrograman.', additional_kwargs={}, example=False)
```
```python
%%time
llm([HumanMessage(content="Translate this sentence from English to Bahasa Indonesia. I love programming.")])
```
```
CPU times: user 4.15 ms, sys: 1.91 ms, total: 6.05 ms
Wall time: 1.34 s
AIMessage(content='Saya suka pemrograman.', additional_kwargs={}, example=False)
```
The second execution actually takes longer, so the cache is clearly not being hit.
Can anyone confirm?
### Suggestion:
_No response_ | Question: Does Chat Model support caching? | https://api.github.com/repos/langchain-ai/langchain/issues/4755/comments | 1 | 2023-05-16T01:09:04Z | 2023-05-16T01:23:30Z | https://github.com/langchain-ai/langchain/issues/4755 | 1,711,051,594 | 4,755 |
[
"hwchase17",
"langchain"
]
| ### System Info
0.0.170, Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.31, Python 3.9
```python
Generative_Result_Message = """Given the following schema table, sql query and sql result. Provide a human readable answer to the question
{sql_answering_document}
Question: {question}
Resulting Query: {sql_query}
Return only the answer to the question and create your own human readable answer based off the sql result and sql query
Below is the query result:
"""
SQL_RESULT_PROMPT = PromptTemplate(
input_variables=["question", "sql_query", "sql_answering_document"],
template=Generative_Result_Message,
)
generative_result_llm = ChatOpenAI(
model_name="gpt-4",
temperature=self.temperature,
openai_api_key=settings.OPENAI_API_KEY,
client=get_client(),
)
generative_result_llm_chain = LLMChain(
llm=generative_result_llm, prompt=self.SQL_RESULT_PROMPT
)
generative_result_reduce_chain = StuffDocumentsChain(
llm_chain=generative_result_llm_chain,
document_variable_name="sql_answering_document",
)
combine_documents = MapReduceDocumentsChain(
llm_chain=generative_result_llm_chain,
combine_document_chain=generative_result_reduce_chain,
document_variable_name="sql_answering_document",
)
map_reduce = MapReduceChain(
combine_documents_chain=combine_documents,
text_splitter=CharacterTextSplitter(),
)
result = map_reduce.run(
{
"question": document["generated_question"],
"sql_query": sql_query,
"sql_answering_document": "sql_answering_document",
"input_text": query_result
})
```
This is the error log I'm getting
>
> answer = map_reduce(
> File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 142, in __call__
> raise e
> File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 136, in __call__
> self._call(inputs, run_manager=run_manager)
> File "/usr/local/lib/python3.9/site-packages/langchain/chains/mapreduce.py", line 89, in _call
> outputs = self.combine_documents_chain.run(
> File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 243, in run
> return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
> File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 142, in __call__
> raise e
> File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 136, in __call__
> self._call(inputs, run_manager=run_manager)
> File "/usr/local/lib/python3.9/site-packages/langchain/chains/combine_documents/base.py", line 84, in _call
> output, extra_return_dict = self.combine_docs(
> File "/usr/local/lib/python3.9/site-packages/langchain/chains/combine_documents/map_reduce.py", line 144, in combine_docs
> results = self.llm_chain.apply(
> File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 160, in apply
> raise e
> File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 157, in apply
> response = self.generate(input_list, run_manager=run_manager)
> File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 80, in generate
> prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
> File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 108, in prep_prompts
> selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}
> File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 108, in <dictcomp>
> selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}
> KeyError: 'question'
>
When I inspected the affected library files and logged the data just before the method that triggers the error, this is the output:
> [2023-05-15 23:39:46,775: INFO/ForkPoolWorker-7] kwargs in chains/base.py (part two) kwargs: {'input_documents': [Document(page_content="[('2-3 times a week', 4), ('Twice a month', 2), ('Once a week', 2), ('On occasions', 1)]", metadata={})]} args: ()
> [2023-05-15 23:39:46,775: INFO/ForkPoolWorker-7] inputs from library file: {'input_documents': [Document(page_content="[('2-3 times a week', 4), ('Twice a month', 2), ('Once a week', 2), ('On occasions', 1)]", metadata={})]}
> [2023-05-15 23:39:46,776: INFO/ForkPoolWorker-7] input listings from prep_prompts [{'sql_answering_document': "[('2-3 times a week', 4), ('Twice a month', 2), ('Once a week', 2), ('On occasions', 1)]"}]
This is odd; I'm not sure why the prompt inputs don't include the variables I passed in when setting up the LLM chain.
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
N/A
### Expected behavior
N/A | LLM Not Receiving prompt ARGS | https://api.github.com/repos/langchain-ai/langchain/issues/4752/comments | 0 | 2023-05-15T23:51:55Z | 2023-06-03T21:41:05Z | https://github.com/langchain-ai/langchain/issues/4752 | 1,710,996,090 | 4,752 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
How do I stop the stream when using callbacks like async callback?
When I stop the stream, does OpenAI still charge for the remainder of the generation?
### Suggestion:
_No response_ | How to stop the stream? and does it stop the openai charging? | https://api.github.com/repos/langchain-ai/langchain/issues/4743/comments | 7 | 2023-05-15T20:07:08Z | 2024-06-09T09:32:30Z | https://github.com/langchain-ai/langchain/issues/4743 | 1,710,750,362 | 4,743 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
[similarity_search](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L174-L175) in weaviate uses near_text.
This requires weaviate to be set up with a [text2vec module](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules).
At the same time, the Weaviate wrapper also takes an [embedding model](https://github.com/hwchase17/langchain/blob/09587a32014bb3fd9233d7a567c8935c17fe901e/langchain/vectorstores/weaviate.py#L86) as one of its init parameters.
Why don't we use the embedding model to vectorize the search query and then use weaviate's near_vector operator to do the search?
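For illustration, this is roughly what the suggested alternative looks like with the plain `weaviate-client`; the class name, property names and endpoint are placeholders:

```python
import weaviate
from langchain.embeddings import OpenAIEmbeddings

client = weaviate.Client("http://localhost:8080")
embeddings = OpenAIEmbeddings()

query = "what is a vector database?"
query_vector = embeddings.embed_query(query)

# near_vector needs no text2vec module on the Weaviate side
result = (
    client.query
    .get("Paragraph", ["content"])  # placeholder class and properties
    .with_near_vector({"vector": query_vector})
    .with_limit(4)
    .do()
)
```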
### Suggestion:
If a user is using langchain with weaviate, we can assume that they want to use langchain's features to generate the embeddings and as such, will not have any text2vec module enabled. | Issue: Weaviate: why similarity_search uses with_near_text? | https://api.github.com/repos/langchain-ai/langchain/issues/4742/comments | 5 | 2023-05-15T18:37:07Z | 2023-05-17T02:43:16Z | https://github.com/langchain-ai/langchain/issues/4742 | 1,710,614,532 | 4,742 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.166
Python 3.10.9
Operating System: Kubuntu 23.04
KDE Plasma Version: 5.27.4
KDE Frameworks Version: 5.104.0
Qt Version: 5.15.8
Kernel Version: 6.2.0-20-generic (64-bit)
Graphics Platform: X11
Processors: 12 × AMD Ryzen 5 5600X 6-Core Processor
Memory: 31.2 GiB of RAM
Graphics Processor: NVIDIA GeForce GTX 1080/PCIe/SSE2
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
### Steps to reproduce
`git clone https://github.com/imartinez/privateGPT`
(follow project instructions)
```
pip install -r requirements.txt
wget https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin
wget https://huggingface.co/Pi3141/alpaca-native-7B-ggml/resolve/397e872bf4c83f4c642317a5bf65ce84a105786e/ggml-model-q4_0.bin
mkdir ./models
mv *.bin ./models/
cp example.env .env
python ingest.py
python privateGPT.py
```
### Expected behavior
While `ingest.py` or `privateGPT.py` is running, the machine crashes (powers off).
The debugger shows the crash occurs in `ingest.py` at:
`llama = LlamaCppEmbeddings(model_path=llama_embeddings_model, n_ctx=model_n_ctx)` | LlamaCppEmbeddings crashing (reboot) Linux Kubuntu 23.04 machine | https://api.github.com/repos/langchain-ai/langchain/issues/4738/comments | 1 | 2023-05-15T17:15:38Z | 2023-09-10T16:17:24Z | https://github.com/langchain-ai/langchain/issues/4738 | 1,710,499,528 | 4,738 |
[
"hwchase17",
"langchain"
]
| ### System Info
Main branch.
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Docstring for `ConversationalChatAgent` and `ConversationalAgent` is identical. The user does not know the difference between the two classes.
https://github.com/hwchase17/langchain/blob/c70ae562b466ba9a6d0f587ab935fd9abee2bc87/langchain/agents/conversational_chat/base.py#L36-L37
https://github.com/hwchase17/langchain/blob/c70ae562b466ba9a6d0f587ab935fd9abee2bc87/langchain/agents/conversational/base.py#L20-L21
### Expected behavior
The difference should be explained in the docstring. | Identical Docstring for `ConversationalChatAgent` and `ConversationalAgent`. | https://api.github.com/repos/langchain-ai/langchain/issues/4736/comments | 4 | 2023-05-15T16:30:08Z | 2023-12-20T16:07:51Z | https://github.com/langchain-ai/langchain/issues/4736 | 1,710,433,310 | 4,736
[
"hwchase17",
"langchain"
]
| ### Feature request
In the [Chameleon paper](https://arxiv.org/abs/2304.09842), there are some prompt tricks different from langchain, such as:
1. There is a planner responsible for generating "steps to use the tool"
For example, a generated "steps" looks like:

Langchain also has the ["Plan and Execute" feature](https://python.langchain.com/en/latest/modules/agents/plan_and_execute.html) , but each step in the generated plan is a text goal, not a tool. For example:
<img width="775" alt="image" src="https://github.com/hwchase17/langchain/assets/26001097/2ba714d4-3937-429c-8fde-e9be50836eb1">
I'm not sure which of the two is better
2. Heuristically verify the plan generated by the planner
In the paper, the author used some rules to verify whether the generated steps are valid, such as verifying that "step x must be before step y, otherwise it will be considered invalid".
At present, langchain doesn't have such a mechanism. Maybe we could add this feature? A sketch of what such a heuristic check might look like is below.
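A purely illustrative sketch of such a rule-based plan check (the module names and rule format are made up for the example):

```python
from typing import List, Tuple


def validate_plan(steps: List[str], ordering_rules: List[Tuple[str, str]]) -> Tuple[bool, str]:
    """For every (before, after) rule, check that `before` appears earlier than `after`."""
    for before, after in ordering_rules:
        if before in steps and after in steps and steps.index(before) > steps.index(after):
            return False, f"invalid plan: '{before}' must come before '{after}'"
    return True, "plan looks valid"


# Example: knowledge retrieval must run before the solution generator
steps = ["image_captioner", "solution_generator", "knowledge_retrieval", "answer_generator"]
print(validate_plan(steps, [("knowledge_retrieval", "solution_generator")]))
# -> (False, "invalid plan: 'knowledge_retrieval' must come before 'solution_generator'")
```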
### Motivation
To make the plan more accurate
### Your contribution
I'm a python noob, maybe I can help coding | How about using the prompts in the Chameleon paper? | https://api.github.com/repos/langchain-ai/langchain/issues/4730/comments | 2 | 2023-05-15T15:17:46Z | 2023-09-10T16:17:29Z | https://github.com/langchain-ai/langchain/issues/4730 | 1,710,317,818 | 4,730 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain: 0.0.169
Python: 3.10.10
MacOS: 12.6.5 (21G531)
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When running the notebook featured here: [https://python.langchain.com/en/latest/modules/chains/index_examples/summarize.html](https://python.langchain.com/en/latest/modules/chains/index_examples/summarize.html), the following cell will fail.
```python
chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(docs)
```
Unless you have pre-installed `tiktoken`, you will receive an error:
```text
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
File [/usr/local/lib/python3.10/site-packages/langchain/llms/openai.py:464](https://file+.vscode-resource.vscode-cdn.net/usr/local/lib/python3.10/site-packages/langchain/llms/openai.py:464), in BaseOpenAI.get_num_tokens(self, text)
463 try:
--> 464 import tiktoken
465 except ImportError:
ModuleNotFoundError: No module named 'tiktoken'
```
Installing `tiktoken` solves the immediate issue.
```python
%pip install tiktoken
```
### Expected behavior
The notebook runs without errors. | Summarization Notebook: No module named 'tiktoken' | https://api.github.com/repos/langchain-ai/langchain/issues/4728/comments | 2 | 2023-05-15T15:02:38Z | 2023-09-12T16:15:11Z | https://github.com/langchain-ai/langchain/issues/4728 | 1,710,288,233 | 4,728 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Enums don't work well with the structured agent. The data validation works fine, but it would be great if we took the ideas from `StructuredChatOutputParserWithRetries` and applied them to `StructuredTool`s.
For example, when a validation error is raised due to an enum breach, feed the error message, the schema and the original input to an LLM and have it fix the tool input before continuing.
This would make the StructuredTools more robust IMO.
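A rough sketch of the retry idea, using only pydantic; `fix_with_llm` is a hypothetical callable that would wrap an LLM call with the error, schema and bad input:

```python
from enum import Enum
from typing import Callable, Dict

from pydantic import BaseModel, ValidationError


class Unit(str, Enum):
    metric = "metric"
    imperial = "imperial"


class WeatherToolInput(BaseModel):
    city: str
    unit: Unit


def parse_with_retry(raw: Dict, fix_with_llm: Callable[..., Dict]) -> WeatherToolInput:
    try:
        return WeatherToolInput(**raw)
    except ValidationError as err:
        # Hand the validation error, schema and bad input to the LLM and retry once
        fixed = fix_with_llm(error=str(err), schema=WeatherToolInput.schema(), bad_input=raw)
        return WeatherToolInput(**fixed)
```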
This may be a step too far, but it would also be nice to be able to handle these validation errors in different ways:
1. Correct the error similar to the `StructuredChatOutputParserWithRetries`;
2. Use a different tool that might collect some additional information from the user or tool.
### Motivation
I would like to constrain the input parameters of my StructuredTools to an enum so I can avoid bugs.
### Your contribution
I am happy to raise a PR for the parser code but I would need to seek guidance from a maintainer if and how this would work with the existing flow of the software. | Enums Don't Work Well With Structured Agent | https://api.github.com/repos/langchain-ai/langchain/issues/4724/comments | 3 | 2023-05-15T13:13:07Z | 2023-10-15T16:07:13Z | https://github.com/langchain-ai/langchain/issues/4724 | 1,710,079,527 | 4,724 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
langchain version==0.0.169
python=3.10.10
platform=dev_containers
```
The code given below is not able to utilise memory for answering questions with references
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use the following code with the necessary changes on your end to replicate:
```
from dotenv import load_dotenv, find_dotenv
from qdrant_client import QdrantClient
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.vectorstores import Qdrant
from langchain.embeddings import OpenAIEmbeddings
from langchain.memory import ConversationBufferMemory, RedisChatMessageHistory
import os
from loguru import logger
import redis
# Load environment variables from .env file
load_dotenv(find_dotenv("../app/.env"))
url = os.environ.get("QDRANT_URL")
collection_name = os.environ.get("QDRANT_COLLECTION_NAME")
openai_api_key = os.environ.get("OPENAI_API_KEY")
redis_host = os.environ.get("REDIS_HOST")
redis_port = os.environ.get("REDIS_PORT")
# Initialize Qdrant client and vector database
if url is not None and collection_name is not None:
client = QdrantClient(url=url, prefer_grpc=True)
embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key)
vectordb = Qdrant(client, collection_name, embeddings.embed_query)
else:
logger.error("Qdrant URL or Collection Name not set in environment variables")
# Initialize the LLM
if openai_api_key is not None:
llm = ChatOpenAI(openai_api_key=openai_api_key, temperature=0, model_name="gpt-3.5-turbo")
else:
logger.error("OpenAI API key not set in environment variables")
from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory(
memory_key="chat_history",
return_messages=True,
output_key="answer"
)
def get_chat_history(inputs) -> str:
res = []
for message in inputs:
if isinstance(message, dict) and "content" in message:
res.append(message["content"])
return "\n".join(res)
from langchain.prompts import PromptTemplate
template = """Answer the question in your own words as truthfully as possible from the context given to you.
If you do not know the answer to the question, simply respond with "I don't know. Can you ask another question".
If questions are asked where there is no relevant context available, simply respond with "I don't know. Please ask a question relevant to the documents"
Context: {context}
{chat_history}
Human: {question}
Assistant:"""
prompt = PromptTemplate(
input_variables=["context", "chat_history", "question"], template=template
)
# Create the custom chain
if llm is not None and vectordb is not None:
chain = ConversationalRetrievalChain.from_llm(
llm=llm, retriever=vectordb.as_retriever(), memory=memory,
get_chat_history=get_chat_history, return_source_documents=True,
combine_docs_chain_kwargs={'prompt': prompt})
else:
logger.error("LLM or Vector Database not initialized")
# Initialize Redis connection
if redis_host is not None and redis_port is not None:
redis_client = redis.Redis(host=redis_host, port=redis_port)
else:
logger.error("Redis host or port not set in environment variables")
session_id = "sample_id"
# Retrieve chat history for session from Redis
chat_history = redis_client.get(session_id)
if chat_history is None:
# If chat history does not exist, create a new one
chat_history = RedisChatMessageHistory(session_id, url=f"redis://{redis_host}:{redis_port}")
else:
# If chat history exists, deserialize it from Redis
chat_history = RedisChatMessageHistory.deserialize(chat_history, url=f"redis://{redis_host}:{redis_port}")
# Retrieve answer from chain
chain({"question": "Who is Harry potter?", "chat_history": chat_history.messages})
chain({"question": "What are his qualities?", "chat_history": chat_history.messages})
```
### Expected behavior
`What are his qualities?` should return Harry Potter's qualities and not `I don't know. Please ask a question relevant to the documents.` | ConversationalRetrievalChain doesn't work with memory | https://api.github.com/repos/langchain-ai/langchain/issues/4722/comments | 10 | 2023-05-15T11:46:00Z | 2023-09-28T16:06:54Z | https://github.com/langchain-ai/langchain/issues/4722 | 1,709,924,459 | 4,722 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add summarization task type for HuggingFace APIs.
This task type is described by [HuggingFace inference API](https://huggingface.co/docs/api-inference/detailed_parameters#summarization-task)
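For reference, a minimal sketch of calling the hosted inference API's summarization task directly; the model name and token handling are placeholders:

```python
import os

import requests

API_URL = "https://api-inference.huggingface.co/models/facebook/bart-large-cnn"
headers = {"Authorization": f"Bearer {os.environ['HUGGINGFACEHUB_API_TOKEN']}"}

payload = {
    "inputs": "LangChain is a framework for developing applications powered by language models ...",
    "parameters": {"min_length": 20, "max_length": 80},
}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())  # e.g. [{"summary_text": "..."}]
```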
### Motivation
My project utilizes LangChain to connect multiple LLMs, including various HuggingFace models that support the summarization task. Integrating this task type would be highly convenient and beneficial.
### Your contribution
I will submit a PR. | Add summarization task type for HuggingFace APIs | https://api.github.com/repos/langchain-ai/langchain/issues/4720/comments | 0 | 2023-05-15T11:23:49Z | 2023-05-15T23:26:20Z | https://github.com/langchain-ai/langchain/issues/4720 | 1,709,886,048 | 4,720 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The Qdrant vector store supports "must" in its filter; is it possible to add "must_not" and/or "should" as well?
Ref: https://qdrant.tech/documentation/filtering/
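For illustration, this is roughly how such a filter looks with the underlying `qdrant-client`; the payload keys and IDs are placeholders:

```python
from qdrant_client.http import models as rest

qdrant_filter = rest.Filter(
    must=[
        rest.FieldCondition(key="metadata.owner_id", match=rest.MatchValue(value="user-1")),
    ],
    must_not=[
        rest.FieldCondition(key="metadata.doc_id", match=rest.MatchValue(value="doc-2")),
    ],
)
# The LangChain wrapper currently only builds the `must` part from its filter dict;
# the request is to expose `must_not` (and possibly `should`) as well.
```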
### Motivation
Having a filter is really nice, but it's hard to implement row-level authorization without "must_not".
With it we could say "must" include ID and "must_not" include ID2,
so the results are filtered correctly.
### Your contribution
I am a front-end developer, and I'm hoping someone with Python competence can handle this. | Qdrant filtering methods | https://api.github.com/repos/langchain-ai/langchain/issues/4718/comments | 5 | 2023-05-15T10:49:29Z | 2023-09-15T22:12:57Z | https://github.com/langchain-ai/langchain/issues/4718 | 1,709,831,906 | 4,718
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I created an AgentExecutor with the ConversationalChatAgent, and I could pass a system message when initializing the agent executor. Is it possible to add system messages to individual prompts, not just a single one at the beginning? My code:
```
from langchain import PromptTemplate
from langchain.agents import ConversationalChatAgent, Tool, AgentExecutor
import pickle
import os
import datetime
import logging
# from controllers.user_controller import UserController
from langchain.llms import OpenAI
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import CharacterTextSplitter
# from langchain.vectorstores import FAISS
from langchain.vectorstores import Chroma  # used below; import added for completeness
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.memory import ConversationBufferMemory  # used below; import added for completeness
from langchain.chains import RetrievalQA
class ChatController(object):
def __init__(self):
self._create_chat_agent()
def _create_chat_agent(self):
self.llm = OpenAI(temperature=0, top_p=0.2, presence_penalty=0.4, frequency_penalty=0.2)
embeddings = OpenAIEmbeddings()
persist_directory = 'myvectordb'
vectorstore = Chroma(persist_directory=persist_directory, embedding_function = embeddings)
prompt_template = """If the context is not relevant,
please answer the question by using your own knowledge about the topic
{context}
Question: {question}
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
chain_type_kwargs = {"prompt": PROMPT}
# Initialise Langchain - QA chain
qa = RetrievalQA.from_chain_type(llm=self.llm,
chain_type="stuff",
retriever=vectorstore.as_retriever(),
chain_type_kwargs=chain_type_kwargs)
tools = [
Tool(
name="Document tool",
func=qa.run,
description="useful for when you need to answer questions."
),
]
system_msg = "You are a helpful assistant."
agent = ConversationalChatAgent.from_llm_and_tools(
llm=self.llm,
tools=tools,
system_message=system_msg
)
self.chat_agent = AgentExecutor.from_agent_and_tools(
agent=agent, tools=tools, verbose=True, memory=ConversationBufferMemory(memory_key="chat_history",
return_messages=True)
)
def askAI(self, prompt: str):
response = self.chat_agent.run(input=prompt)
return {"answer": response}
```
### Suggestion:
_No response_ | Issue: Is it possible to add system message with the prompt? | https://api.github.com/repos/langchain-ai/langchain/issues/4716/comments | 2 | 2023-05-15T09:51:06Z | 2023-09-10T16:17:39Z | https://github.com/langchain-ai/langchain/issues/4716 | 1,709,729,606 | 4,716 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Right now, streaming output from LLMs can be seen on stdout in the terminal but not returned as a response. I'm using a conversation chain where I can see the output streaming in the terminal, but not when returning the output through an API.
### Motivation
Responses could start "typing" right away, which makes the wait easier for the user; with a longer prompt or context there is a noticeable delay before the full response arrives, so streaming the tokens as they are generated would help. A sketch of one possible implementation is below.
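A minimal sketch of one way this could work with FastAPI and an async callback handler; it assumes `AsyncIteratorCallbackHandler` is importable from `langchain.callbacks`, and the endpoint and model settings are placeholders:

```python
import asyncio

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from langchain.callbacks import AsyncIteratorCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

app = FastAPI()


@app.get("/chat")
async def chat(q: str):
    handler = AsyncIteratorCallbackHandler()
    llm = ChatOpenAI(streaming=True, callbacks=[handler], temperature=0)

    # Run the generation in the background while tokens are streamed out
    task = asyncio.create_task(llm.agenerate([[HumanMessage(content=q)]]))

    async def token_stream():
        async for token in handler.aiter():
            yield token
        await task

    return StreamingResponse(token_stream(), media_type="text/plain")
```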
### Your contribution
- | Streaming Responses As Ouput Using FastAPI Support | https://api.github.com/repos/langchain-ai/langchain/issues/4715/comments | 16 | 2023-05-15T06:56:02Z | 2023-09-30T16:07:19Z | https://github.com/langchain-ai/langchain/issues/4715 | 1,709,434,878 | 4,715 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version:0.0.168
python version 3.10
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
When using the RetrievalQA chain, I get the error message "Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm'".
This code ran fine on version 0.0.164:
```python
class Chain:
def __init__(self):
self.cb_mngr_stdout = AsyncCallbackManager([StreamingStdOutCallbackHandler()])
self.cb_mngr_aiter = AsyncCallbackManager([AsyncIteratorCallbackHandler()])
self.qa_stream = None
self.qa = None
self.make_chain()
def make_chain(self):
chain_type_kwargs = {"prompt": MyPrompt.get_retrieval_prompt()}
qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1), chain_type="stuff",
retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}),
chain_type_kwargs=chain_type_kwargs, return_source_documents=True)
qa_stream = RetrievalQA.from_chain_type(llm=ChatOpenAI(model_name="gpt-3.5-turbo", max_tokens=1500, temperature=.1,
streaming=True, callback_manager=self.cb_mngr_aiter),
chain_type="stuff", retriever=Retrieval.vectordb.as_retriever(search_kwargs={"k": Retrieval.context_num}),
chain_type_kwargs=chain_type_kwargs, return_source_documents=True)
self.qa = qa
self.qa_stream = qa_stream
```
Calling the chains:
```python
resp = await chains.qa.acall({"query": "xxxxxxx"}) # no problem
resp = await chains.qa_stream.acall({"query": "xxxxxxxx"}) # error
```
### Expected behavior
self.qa_stream should return a result just like self.qa does, or at least behave as it did on langchain version 0.0.164 | Error in on_llm callback: 'AsyncIteratorCallbackHandler' object has no attribute 'on_llm' | https://api.github.com/repos/langchain-ai/langchain/issues/4714/comments | 2 | 2023-05-15T06:30:00Z | 2023-05-16T01:36:23Z | https://github.com/langchain-ai/langchain/issues/4714 | 1,709,405,469 | 4,714
[
"hwchase17",
"langchain"
]
| ### Feature request
https://platform.openai.com/docs/api-reference/embeddings/create?lang=python supports a `user` parameter, which lets us pass user details to the OpenAI API. https://github.com/hwchase17/langchain/blob/master/langchain/embeddings/openai.py#L66 could take an optional `user` parameter, which would need to be passed through the `embed_with_retry` function.
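For reference, the raw OpenAI client already accepts the parameter; the LangChain usage at the end is only a proposal, not an existing API:

```python
import openai

resp = openai.Embedding.create(
    input=["hello world"],
    model="text-embedding-ada-002",
    user="my-app-key-123",  # placeholder identifier
)

# Proposed (not yet existing) LangChain usage:
# embeddings = OpenAIEmbeddings(user="my-app-key-123")
```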
### Motivation
We use it to track user details - could be appkey etc.
### Your contribution
Yes, I can create a PR. Please let me know the process. | Support for user parameter in OpenAI Embeddings Create class, which exists in OpenAI API | https://api.github.com/repos/langchain-ai/langchain/issues/4711/comments | 4 | 2023-05-15T05:26:16Z | 2023-12-09T16:06:51Z | https://github.com/langchain-ai/langchain/issues/4711 | 1,709,341,947 | 4,711
[
"hwchase17",
"langchain"
]
| I was using RetrievalQA.from_chain_type, to which I had passed parameters as:-
`RetrievalQA.from_chain_type(llm, chain_type, retriever = chroma_db.as_retriever(), return_source_documents = True)`
Here,
return_source_documents = True, only returns the chunks from which it generated the response. _Is there a way in which I can get similarity score also returned for matched chunks_ (say if there are 4 chunks it found most relevant to query, how to get scores in decreasing order based on similarity) | How to use return_source_documents to also extract similarity score?? | https://api.github.com/repos/langchain-ai/langchain/issues/4710/comments | 13 | 2023-05-15T04:44:31Z | 2024-05-14T16:15:02Z | https://github.com/langchain-ai/langchain/issues/4710 | 1,709,303,133 | 4,710 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
https://python.langchain.com/en/latest/modules/chains/examples/sqlite.html#customize-prompt
When looking at the Customize Prompt example, the subsequent `db_chain.run()` call is exactly the same as in the pre-prompt chain.
It is currently like:
`db_chain.run("How many employees are there in the foobar table?")`
Shouldn't it be something like:
`db_chain.run({'input': "How many employees are there in the foobar table?", 'table':'foobar', 'dialect':'testing'})`
Since we added the prompt to the db_chain
### Idea or request for content:
_No response_ | DOC: SQL Chain Example - Customise Prompt | https://api.github.com/repos/langchain-ai/langchain/issues/4703/comments | 17 | 2023-05-15T02:10:11Z | 2023-10-19T16:08:23Z | https://github.com/langchain-ai/langchain/issues/4703 | 1,709,196,707 | 4,703 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The embedding models of cohere are
1. embed-english-light-v2.0
2. embed-english-v2.0
3. embed-multilingual-v2.0
The corresponding embedding wrapper in langchain will need to reflect that; currently it defaults to `large`.
### Suggestion:
_No response_ | Issue: The cohere embedding model has the model defaulted to large. These names are deprecated | https://api.github.com/repos/langchain-ai/langchain/issues/4694/comments | 0 | 2023-05-15T00:09:11Z | 2023-05-16T23:27:25Z | https://github.com/langchain-ai/langchain/issues/4694 | 1,709,123,191 | 4,694 |
[
"hwchase17",
"langchain"
]
| ### Feature request
It would be nice to have the ability to get the positions of the extracted texts, i.e. the beginning and end character positions of each split chunk within the source text, or the line number and in-line character position of the extracted text.
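A small illustrative sketch of how chunk offsets could be recovered after splitting (plain Python; no particular LangChain API is assumed):

```python
from typing import List, Tuple


def chunk_offsets(text: str, chunks: List[str]) -> List[Tuple[int, int]]:
    """Return (start, end) character offsets of each chunk within `text`."""
    offsets = []
    search_from = 0
    for chunk in chunks:
        start = text.find(chunk, search_from)
        offsets.append((start, start + len(chunk)))
        search_from = start + 1  # advance one character to allow overlapping chunks
    return offsets
```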
### Motivation
I'm working on a way to ingest a code repo into a vector store and link it to a graph database. The line and character positions would be incredibly useful in the metadata to interface the two. This could provide richer context for tracking data positions and could offer mechanisms for testing.
### Your contribution
I'd be happy to submit a PR regarding this if it makes sense to others. | Ability to get the character or the line and line character positions from a split text | https://api.github.com/repos/langchain-ai/langchain/issues/4692/comments | 1 | 2023-05-14T23:10:03Z | 2023-09-10T16:17:45Z | https://github.com/langchain-ai/langchain/issues/4692 | 1,709,106,195 | 4,692 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version: 0.0.168
OS: Mac
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi, I am trying to play with SQLDatabaseChain and I tried to connect it with the PostgreSQL database.
I tested with the URL, and it works well with the SQLAlchemy engine and I was able to execute queries successfully.
Here are my codes to use SQLDatabasechain:
```Python
db = SQLDatabase.from_uri(url,
sample_rows_in_table_info = 10,
)
```
However, it keeps showing that there are no tables. I used `db.get_table_info()`, and it always returns an empty set.
Do you have any ideas?
Appreciate!
### Expected behavior
I expected it to inspect the schema correctly. | SQLDatabaseChain did not read PostgreSQL database table information correctly | https://api.github.com/repos/langchain-ai/langchain/issues/4690/comments | 6 | 2023-05-14T22:58:23Z | 2024-01-30T00:42:49Z | https://github.com/langchain-ai/langchain/issues/4690 | 1,709,102,621 | 4,690
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I encountered a `TypeError: string indices must be integers` error when using the library to retrieve chat messages from a PostgreSQL database. This error occurs specifically in the `messages_from_dict` function.
Upon further investigation, it appears that the error arises when trying to access the "type" field of a message dictionary. The `messages_from_dict` function is expected to convert a list of dictionaries into a list of `BaseMessage` objects, but it fails to handle the dictionary properly.
To reproduce the issue, follow these steps:
1. Set up the library to use a PostgreSQL database as the chat message history storage.
2. Start a conversation and exchange messages.
3. Retrieve the chat history using the `messages` property of the `PostgresChatMessageHistory` class.
The error occurs when the `messages` property executes the following code snippet:
```python
items = [record["message"] for record in self.cursor.fetchall()]
messages = messages_from_dict(items)
```

The `messages_from_dict` function attempts to convert each dictionary in the `items` list to a `BaseMessage` object. However, it fails to properly handle the dictionary structure, resulting in the `TypeError`.
Environment:
Library version: [Specify library version]
Python version: [Specify Python version]
PostgreSQL version: [Specify PostgreSQL version]
Operating system: [Specify operating system]
### Suggestion:
To resolve this issue, the implementation of the messages_from_dict function needs to be reviewed and updated accordingly. It should correctly handle the structure of each message dictionary and create BaseMessage objects with the expected attributes.
Additionally, it would be helpful to provide clearer documentation or examples on how to set up the PostgreSQL chat message history and ensure the expected structure of the messages stored in the message_store table. | Issue: TypeError: string indices must be integers when retrieving messages from PostgreSQL | https://api.github.com/repos/langchain-ai/langchain/issues/4684/comments | 6 | 2023-05-14T19:53:52Z | 2023-12-03T16:07:36Z | https://github.com/langchain-ai/langchain/issues/4684 | 1,709,054,913 | 4,684 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.168, Python 3.11.3
### Who can help?
@anihamde
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
db = DeepLake(dataset_path=f"hub://{username_activeloop}/{lake_name}", embedding_function=embeddings, overwrite=False)
### Expected behavior
Would expect overwrite to not take place; however, it does. This is easily resolvable, and I'll share a PR to address this shortly. | Setting overwrite to False on DeepLake constructor still overwrites | https://api.github.com/repos/langchain-ai/langchain/issues/4682/comments | 1 | 2023-05-14T19:15:22Z | 2023-09-10T16:17:56Z | https://github.com/langchain-ai/langchain/issues/4682 | 1,709,045,521 | 4,682 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hi team,
I am a developer relations engineer working at Google on the PaLM API. I want to participate and contribute to potentially adding Google PaLM to LangChain. What is the current development status of adding the Google PaLM API?
### Motivation
Better user experience with PaLM API :)
### Your contribution
Still need discussion, might be PRs, design discussions, or others. | Add Google PaLM API | https://api.github.com/repos/langchain-ai/langchain/issues/4681/comments | 20 | 2023-05-14T19:08:05Z | 2024-01-30T00:52:41Z | https://github.com/langchain-ai/langchain/issues/4681 | 1,709,043,509 | 4,681 |
[
"hwchase17",
"langchain"
]
| ### Feature request
LLMs usually limit text by tokens.
It may be useful to split a large text into chunks according to the number of Tokens rather than the number of characters.
For example, if LLM allows us to use 8000 tokens, and we want to split the text into chunks of up to 4000-tokens, then we can call
```python
text_splitter = RecursiveCharacterTextSplitter(chunk_tokens = 4000, ...
```
### Motivation
If we split a text by number of characters, it is not obvious how many tokens those chunks will contain.
At the same time, if we want to split a text into the largest possible chunks while keeping each chunk under a certain LLM token limit, we cannot operate by number of characters.
### Your contribution
As an example for implementing `RecursiveCharacterTextSplitter(chunk_tokens=...)`, there is a very useful library that helps split text into tokens:
https://github.com/openai/tiktoken
```python
import tiktoken
def split_large_text(large_text, max_tokens):
enc = tiktoken.get_encoding("cl100k_base")
tokenized_text = enc.encode(large_text)
chunks = []
current_chunk = []
current_length = 0
for token in tokenized_text:
current_chunk.append(token)
current_length += 1
if current_length >= max_tokens:
chunks.append(enc.decode(current_chunk).rstrip(' .,;'))
current_chunk = []
current_length = 0
if current_chunk:
chunks.append(enc.decode(current_chunk).rstrip(' .,;'))
return chunks
```
| Split by Tokens instead of characters: RecursiveCharacterTextSplitter | https://api.github.com/repos/langchain-ai/langchain/issues/4678/comments | 35 | 2023-05-14T18:16:05Z | 2024-06-21T16:37:58Z | https://github.com/langchain-ai/langchain/issues/4678 | 1,709,029,487 | 4,678 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using Chroma's client-server configuration and I have trouble setting up a retriever for ConversationalRetrievalChain.from_llm() function.
I can't find anything related to this. Can someone guide me on how to do that, or share a solution?
For a locally stored database you just called db.as_retriever() and that was it.
But now, I can't find a solution for passing a retriever to the from_llm() function.
My code snippet is:
```python
def askQuestion(self, thread_id, question):
collection = self.chroma_client.get_collection(name="my_collection5", embedding_function=self.embedding)
self.llm = ChatOpenAI(model_name=self.model_name, temperature=self.temperature,
openai_api_key=os.environ.get('OPENAI_API_KEY'))
self.memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True,
output_key='answer')
self.chain = ConversationalRetrievalChain.from_llm(self.llm, collection.as_retriever(),return_source_documents=True,verbose=VERBOSE, memory=self.memory)
result = self.chain({"question": question})
res_dict = {
"answer": result["answer"],
}
res_dict["source_documents"] = []
# add source docs
for source in result["source_documents"]:
res_dict["source_documents"].append({
"page_content": source.page_content,
"metadata": source.metadata
})
        return res_dict
```
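One possible approach (a sketch meant to replace the `collection.as_retriever()` call inside `askQuestion`, assuming the LangChain `Chroma` wrapper accepts a `client`/`client_settings` argument for a client-server setup):

```python
from langchain.vectorstores import Chroma

# Wrap the remote collection with LangChain's Chroma vector store,
# then expose it as a retriever (names taken from the snippet above)
vectordb = Chroma(
    client=self.chroma_client,
    collection_name="my_collection5",
    embedding_function=self.embedding,
)
retriever = vectordb.as_retriever()

self.chain = ConversationalRetrievalChain.from_llm(
    self.llm, retriever, return_source_documents=True, memory=self.memory
)
```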
### Suggestion:
_No response_ | Issue: Set up a Chroma retriever for client-server configuration of Chroma | https://api.github.com/repos/langchain-ai/langchain/issues/4676/comments | 3 | 2023-05-14T18:08:39Z | 2024-03-16T22:56:10Z | https://github.com/langchain-ai/langchain/issues/4676 | 1,709,027,393 | 4,676 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
langchain==0.0.168
chromadb==0.3.22
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Clone official ChromaDB repository and run their docker-compose environment.
```
git clone [email protected]:chroma-core/chroma.git
docker-compose up
```
Create a folder called `my_data` and create a `test.txt` file into it with some random text.
```
mkdir my_data
cd my_data
echo "testingtestingtesting" > test.txt
```
Script to reproduce issue:
```
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from chromadb.config import Settings
with open('my_data/test.txt', 'r', encoding="utf-8") as file:
raw_text = file.read()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size = 1000,
chunk_overlap = 0,
length_function = len,
)
texts = text_splitter.split_text(raw_text)
embeddings = OpenAIEmbeddings()
client_settings = Settings(
chroma_api_impl="rest",
chroma_server_host="localhost",
chroma_server_http_port="8000"
)
collection_name = "chroma_test"
vectorstore = Chroma.from_texts(embedding=embeddings, texts=texts, client_settings=client_settings, collection_name=collection_name)
```
Set necessary `OPENAI_API_KEY` environment variables and run the script.
This will result in an error:
`Exception: {"error":"InvalidUUID","message":"Could not parse chroma_test as a UUID"}`
The same issue will not happen if you run ChromaDB locally like this. Only when calling the actual API you then run in to the issue.
```
vectorstore = Chroma.from_texts(embedding=embeddings, texts=texts, persist_directory="db")
```
### Expected behavior
The expected behaviour would be that Langchain would call the ChromaDB API correctly with the `UUID` instead of the plaintext name of the collection.
See the ChromaDB source code and their API in `chromadb/server/fastapi/__init__.py`:
Line `105`
```
self.router.add_api_route(
"/api/v1/collections/{collection_id}/add",
self.add,
methods=["POST"],
status_code=status.HTTP_201_CREATED,
)
```
Line `196`
```
def add(self, collection_id: str, add: AddEmbedding) -> None:
try:
result = self._api._add(
collection_id=_uuid(collection_id),
embeddings=add.embeddings,
metadatas=add.metadatas,
documents=add.documents,
ids=add.ids,
increment_index=add.increment_index,
)
except InvalidDimensionException as e:
raise HTTPException(status_code=500, detail=str(e))
return result
```
Line `67`
```
def _uuid(uuid_str: str) -> UUID:
try:
return UUID(uuid_str)
except ValueError:
raise InvalidUUIDError(f"Could not parse {uuid_str} as a UUID")
```
| langchain chroma vectorstore calls ChromaDB API incorrectly when ChromaDB is running in Docker | https://api.github.com/repos/langchain-ai/langchain/issues/4674/comments | 6 | 2023-05-14T17:26:09Z | 2023-10-31T16:07:20Z | https://github.com/langchain-ai/langchain/issues/4674 | 1,709,016,097 | 4,674 |
[
"hwchase17",
"langchain"
]
| ### System Info
I'm trying to make use of the sequential chaining functionality by chaining together two prompts like so:
```
# importing frameworks and such
import os
from apikey import apikey
import streamlit as st
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain
os.environ["OPENAI_API_KEY"] = apikey
# Defining and building out our OOTB app framework via Streamlit
st.title("Medical GPT")
prompt = st.text_input("Enter your prompt here")
# Defining our prompt template
illness_template = PromptTemplate(
input_variables=["condition"],
template="Summarise the common symptoms for {condition}"
)
treatment_template = PromptTemplate(
input_variables=["illness"],
template="Summarise the treatment for this illness ILLNESS: {illness}"
)
# Defining our LLM and chains
llm = OpenAI(temperature=0.7)
illness_chain = LLMChain(llm=llm,
prompt=illness_template,
verbose=True)
treatment_chain = LLMChain(llm=llm,
prompt=treatment_template,
verbose=True)
sequential_chain = SimpleSequentialChain(chains=[illness_chain, treatment_chain])
# Return prompt output to frontend when a prompt is given
if prompt:
response = sequential_chain.run(topic=prompt)
st.write(response)
```
For some reason, it keeps throwing the error:
```
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "/Users/user/Langchain_hacking/app.py", line 43, in <module>
response = sequential_chain.run(topic=prompt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 239, in run
return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 123, in __call__
inputs = self.prep_inputs(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 216, in prep_inputs
self._validate_inputs(inputs)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 83, in _validate_inputs
raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {'input'}
```
I'm not entirely sure why it keeps throwing this error; as far as the documentation goes, I'm calling `SimpleSequentialChain` correctly, unless I'm missing something?
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
if prompt:
response = sequential_chain.run(topic=prompt)
st.write(response)
```
### Expected behavior
LLM output | ValueError: Missing some input keys: {'input'} | https://api.github.com/repos/langchain-ai/langchain/issues/4673/comments | 3 | 2023-05-14T16:30:59Z | 2023-08-08T20:08:15Z | https://github.com/langchain-ai/langchain/issues/4673 | 1,709,000,599 | 4,673 |
[
"hwchase17",
"langchain"
]
| ### System Info
I tried to run this example: https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html
But when I set the LLM to AzureChatOpenAI it doesn't work. The error is:
```
Traceback (most recent call last):
File "/home/adrian-ubuntu/projects/generative-agents/langchain_generative_agent.py", line 79, in <module>
print(tommie.get_summary())
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/experimental/generative_agents/generative_agent.py", line 215, in get_summary
self.summary = self._compute_agent_summary()
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/experimental/generative_agents/generative_agent.py", line 201, in _compute_agent_summary
self.chain(prompt)
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chains/base.py", line 239, in run
return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chains/base.py", line 140, in __call__
raise e
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chains/base.py", line 134, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chains/llm.py", line 69, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chains/llm.py", line 79, in generate
return self.llm.generate_prompt(
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chat_models/base.py", line 142, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks)
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chat_models/base.py", line 90, in generate
raise e
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chat_models/base.py", line 82, in generate
results = [
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chat_models/base.py", line 83, in <listcomp>
self._generate(m, stop=stop, run_manager=run_manager)
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 293, in _generate
response = self.completion_with_retry(messages=message_dicts, **params)
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 254, in completion_with_retry
return _completion_with_retry(**kwargs)
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 252, in _completion_with_retry
return self.client.create(**kwargs)
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/openai/api_requestor.py", line 620, in _interpret_response
self._interpret_response_line(
File "/home/adrian-ubuntu/projects/.venv_generative-agents/lib/python3.10/site-packages/openai/api_requestor.py", line 683, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: Resource not found
Process finished with exit code 1
```
But with a simple example like:
```
model = AzureChatOpenAI(deployment_name="gpt-35-turbo", max_tokens=1500)
print(model([HumanMessage(content="Translate this sentence from English to French. I love programming.")]))
```
works perfectly (and both runs are configured with the same env variables).
Version of langchain: 0.0.168
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import logging
from langchain.chat_models import AzureChatOpenAI
from langchain.llms import AzureOpenAI
logging.basicConfig(level=logging.ERROR)
from datetime import datetime, timedelta
from typing import List
from termcolor import colored
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.experimental.generative_agents import GenerativeAgent, GenerativeAgentMemory
import math
import faiss
def relevance_score_fn(score: float) -> float:
"""Return a similarity score on a scale [0, 1]."""
# This will differ depending on a few things:
# - the distance / similarity metric used by the VectorStore
# - the scale of your embeddings (OpenAI's are unit norm. Many others are not!)
# This function converts the euclidean norm of normalized embeddings
# (0 is most similar, sqrt(2) most dissimilar)
# to a similarity function (0 to 1)
return 1.0 - score / math.sqrt(2)
def create_new_memory_retriever():
"""Create a new vector store retriever unique to the agent."""
# Define your embedding model
embeddings_model = OpenAIEmbeddings(deployment="text-embedding-ada-002_deploy", chunk_size=1)
# Initialize the vectorstore as empty
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {}, relevance_score_fn=relevance_score_fn)
return TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, other_score_keys=["importance"], k=15)
USER_NAME = "Person A" # The name you want to use when interviewing the agent.
LLM = AzureChatOpenAI(deployment_name="gpt-35-turbo", max_tokens=1500)
tommies_memory = GenerativeAgentMemory(
llm=LLM,
memory_retriever=create_new_memory_retriever(),
verbose=True,
reflection_threshold=8 # we will give this a relatively low number to show how reflection works
)
tommie = GenerativeAgent(name="Tommie",
age=25,
traits="anxious, likes design, talkative", # You can add more persistent traits here
status="looking for a job",
# When connected to a virtual world, we can have the characters update their status
memory_retriever=create_new_memory_retriever(),
llm=LLM,
memory=tommies_memory
)
# The current "Summary" of a character can't be made because the agent hasn't made
# any observations yet.
print(tommie.get_summary())
```
### Expected behavior
Just working | Generative Agents don't work with AzureChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/4670/comments | 1 | 2023-05-14T14:54:48Z | 2023-09-10T16:18:00Z | https://github.com/langchain-ai/langchain/issues/4670 | 1,708,972,374 | 4,670 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
https://python.langchain.com/en/latest/modules/agents/agents/custom_agent.html
### Idea or request for content:
I was going through the documentation for creating a custom agent (https://python.langchain.com/en/latest/modules/agents/agents/custom_agent.html) and noticed a potential typo. In the section discussing the components of a custom agent, the text mentions that an agent consists of "three parts" but only two are listed: "Tools" and "The agent class itself".
I believe the text should say "two parts" instead of "three". Could you please confirm if this is a typo, or if there's a missing third part that needs to be included in the list? | DOC: Typo in Custom Agent Documentation | https://api.github.com/repos/langchain-ai/langchain/issues/4668/comments | 0 | 2023-05-14T12:52:17Z | 2023-05-18T04:02:24Z | https://github.com/langchain-ai/langchain/issues/4668 | 1,708,934,167 | 4,668 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
In the ingest process, a long document was split into multiple chunks and embedded into the vector DB.
In the inference process, the top-K chunks were returned as context and fed to the LLM.
In most cases this mechanism works well, but what if I want to make an overall summary of the document?
Relying on the top-K similarity results won't be sufficient then, and the query may be relevant to every chunk of the document.
How can I make langchain digest every piece of the document before inference?
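For illustration, one way to make the model see every chunk (rather than only the top-K retrieved ones) is a map-reduce summarization chain. A minimal sketch, assuming an OpenAI key is configured and `long_document_text` is a placeholder for the raw text:

```python
from langchain.llms import OpenAI
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain

llm = OpenAI(temperature=0)
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.create_documents([long_document_text])  # placeholder variable

# map_reduce first summarizes every chunk, then combines the partial summaries,
# so no chunk is skipped the way a top-K similarity search would skip it
chain = load_summarize_chain(llm, chain_type="map_reduce")
print(chain.run(docs))
```

This trades extra LLM calls for full coverage of the document, which is the opposite trade-off of retrieval-based QA.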
### Suggestion:
_No response_ | Is it possible to digest every piece of a document before inference? | https://api.github.com/repos/langchain-ai/langchain/issues/4667/comments | 3 | 2023-05-14T11:14:14Z | 2023-09-12T16:15:15Z | https://github.com/langchain-ai/langchain/issues/4667 | 1,708,906,341 | 4,667
[
"hwchase17",
"langchain"
]
| ### Feature request
Rewriting Langchain codebase and library with Mojo.
https://www.modular.com/mojo
### Motivation
Up to 35000x faster than Python for ML / DL applications.
Utilize the full power of the hardware, including multiple cores, vector units, and exotic accelerator units, with the world's most advanced compiler and heterogenous runtime. Achieve performance on par with C++ and CUDA without the complexity.
Mojo leverages MLIR, which enables Mojo developers to take advantage of vectors, threads, and AI hardware units.
Experience true interoperability with the Python ecosystem. Seamlessly intermix arbitrary libraries like Numpy and Matplotlib and your custom code with Mojo.
### Your contribution
Will start tackling the topic with a team myself if:
1. I am operating my own DGX cluster, or
2. enough other people get on board to get the project started.
| Rewrite Langchain in Mojo | https://api.github.com/repos/langchain-ai/langchain/issues/4666/comments | 4 | 2023-05-14T09:17:24Z | 2023-12-14T16:08:38Z | https://github.com/langchain-ai/langchain/issues/4666 | 1,708,874,996 | 4,666 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain>=0.0.123
sqlalchemy==1.4.48
PyAthena[SQLAlchemy]>=1.2.0,<2.0.0
Python 3.10.11
### Who can help?
@hwchase17, @eyurtsev, @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. create aws athena engine and connect to athena
engine_athena=create_engine('awsathena+rest://<keys>/<keys>@athena.us-east-1.amazonaws.com:443/<db_name>?s3_staging_dir=<bucket name>/&work_group=primary')
db = SQLDatabase(engine_athena)
Connection is established successfully.
2. db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, return_intermediate_steps=True)
3. db_chain(query) where query is 'how many claims are there?'
### Expected behavior
Expected behavior - SQL should run against Athena
Error - TypeError: __init__() got an unexpected keyword argument 'bind'
It seems `bind` was deprecated from SQLAlchemy version 2.0 onwards. However, PyAthena's recommended SQLAlchemy version is <2.0.0.
**How can this be resolved? Here is the detailed error message:**
```
SELECT count (policy_id_0) FROM claims ;
SQLQuery:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [25], in <cell line: 2>()
1 print(query)
----> 2 result = db_chain(sql)
3 result
File ~\Anaconda3\lib\site-packages\langchain\chains\base.py:140, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
138 except (KeyboardInterrupt, Exception) as e:
139 run_manager.on_chain_error(e)
--> 140 raise e
141 run_manager.on_chain_end(outputs)
142 return self.prep_outputs(inputs, outputs, return_only_outputs)
File ~\Anaconda3\lib\site-packages\langchain\chains\base.py:134, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
128 run_manager = callback_manager.on_chain_start(
129 {"name": self.__class__.__name__},
130 inputs,
131 )
132 try:
133 outputs = (
--> 134 self._call(inputs, run_manager=run_manager)
135 if new_arg_supported
136 else self._call(inputs)
137 )
138 except (KeyboardInterrupt, Exception) as e:
139 run_manager.on_chain_error(e)
File ~\Anaconda3\lib\site-packages\langchain\chains\sql_database\base.py:96, in SQLDatabaseChain._call(self, inputs, run_manager)
94 # If not present, then defaults to None which is all tables.
95 table_names_to_use = inputs.get("table_names_to_use")
---> 96 table_info = self.database.get_table_info(table_names=table_names_to_use)
97 llm_inputs = {
98 "input": input_text,
99 "top_k": self.top_k,
(...)
102 "stop": ["\nSQLResult:"],
103 }
104 intermediate_steps = []
File ~\Anaconda3\lib\site-packages\langchain\sql_database.py:167, in SQLDatabase.get_table_info(self, table_names)
164 continue
166 # add create table command
--> 167 create_table = str(CreateTable(table).compile(self._engine))
168 table_info = f"{create_table.rstrip()}"
169 has_extra_info = (
170 self._indexes_in_table_info or self._sample_rows_in_table_info
171 )
File ~\Anaconda3\lib\site-packages\sqlalchemy\sql\elements.py:503, in ClauseElement.compile(self, bind, dialect, **kw)
498 url = util.preloaded.engine_url
499 dialect = url.URL.create(
500 self.stringify_dialect
501 ).get_dialect()()
--> 503 return self._compiler(dialect, **kw)
File ~\Anaconda3\lib\site-packages\sqlalchemy\sql\ddl.py:32, in _DDLCompiles._compiler(self, dialect, **kw)
28 def _compiler(self, dialect, **kw):
29 """Return a compiler appropriate for this ClauseElement, given a
30 Dialect."""
---> 32 return dialect.ddl_compiler(dialect, self, **kw)
File ~\Anaconda3\lib\site-packages\pyathena\sqlalchemy_athena.py:178, in AthenaDDLCompiler.__init__(self, dialect, statement, bind, schema_translate_map, compile_kwargs)
169 def __init__(
170 self,
171 dialect,
(...)
175 compile_kwargs=util.immutabledict(),
176 ):
177 self._preparer = AthenaDDLIdentifierPreparer(dialect)
--> 178 super(AthenaDDLCompiler, self).__init__(
179 dialect=dialect,
180 statement=statement,
181 bind=bind,
182 schema_translate_map=schema_translate_map,
183 compile_kwargs=compile_kwargs,
184 )
TypeError: __init__() got an unexpected keyword argument 'bind'
```
| error while calling SQLDatabaseChain on AWS Athena | https://api.github.com/repos/langchain-ai/langchain/issues/4664/comments | 1 | 2023-05-14T08:21:50Z | 2023-05-18T04:24:46Z | https://github.com/langchain-ai/langchain/issues/4664 | 1,708,859,989 | 4,664 |
[
"hwchase17",
"langchain"
]
| ### System Info
Runs under jupyterlab in docker
platform : Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.29
python : 3.8.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When trying to use a Llama model on local documents I have the following very basic piece of code :
```
from langchain.llms import GPT4All
from langchain.document_loaders import DirectoryLoader
loader = DirectoryLoader('./', glob="**/*.yml", show_progress=True)
local_model_path = './models/ggml-gpt4all-l13b-snoozy.bin'
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path=local_model_path)
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator(embeddings=llama).from_loaders([loader])
index.query("what are the CORE variables ?")
```
No specific requirement of any OpenAI tool, but I have the error below :
```
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
<ipython-input-12-26dba7aded6b> in <module>
3
4 from langchain.indexes import VectorstoreIndexCreator
----> 5 index = VectorstoreIndexCreator(embeddings=llama).from_loaders([loader])
6 index.query("what is the LHYFE variables ?")
/usr/local/lib/python3.8/dist-packages/pydantic/main.cpython-38-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()
/usr/local/lib/python3.8/dist-packages/pydantic/main.cpython-38-x86_64-linux-gnu.so in pydantic.main.validate_model()
/usr/local/lib/python3.8/dist-packages/pydantic/fields.cpython-38-x86_64-linux-gnu.so in pydantic.fields.ModelField.get_default()
/usr/local/lib/python3.8/dist-packages/pydantic/main.cpython-38-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for OpenAIEmbeddings
__root__
Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass 'openai_api_key' as a named parameter. (type=value_error)
```
Is there any specific configuration I missed?
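One thing worth checking (a hedged guess based on the error, not a confirmed fix): `VectorstoreIndexCreator`'s field appears to be named `embedding` (singular), and when no such field is passed it falls back to the default `OpenAIEmbeddings`, which would explain the missing-key error. A minimal sketch:

```python
from langchain.indexes import VectorstoreIndexCreator

# `embedding=` (not `embeddings=`) is the field that overrides the OpenAIEmbeddings default
index = VectorstoreIndexCreator(embedding=llama).from_loaders([loader])
```

Note that `index.query(...)` may still try to construct a default OpenAI LLM unless a local model is passed in as well.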
Many thanks for your kind help.
### Expected behavior
I would have expected it to use the model stated in the code without any need for an OpenAI account. | Using Llama Embeddings still relies on OpenAI key | https://api.github.com/repos/langchain-ai/langchain/issues/4661/comments | 7 | 2023-05-14T07:04:36Z | 2023-12-26T16:07:56Z | https://github.com/langchain-ai/langchain/issues/4661 | 1,708,841,518 | 4,661
[
"hwchase17",
"langchain"
]
| ### System Info
error:
```
Traceback (most recent call last):
File "/Users/delip/workspace/tmp/main3.py", line 38, in <module>
asyncio.run(main())
File "/Users/delip/opt/anaconda3/envs/lhenv/lib/python3.8/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/Users/delip/opt/anaconda3/envs/lhenv/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/Users/delip/workspace/tmp/main3.py", line 32, in main
await generate_concurrently()
File "/Users/delip/workspace/tmp/main3.py", line 27, in generate_concurrently
await asyncio.gather(*tasks)
File "/Users/delip/workspace/tmp/main3.py", line 11, in async_generate
resp = await llm.agenerate(
File "/Users/delip/opt/anaconda3/envs/lhenv/lib/python3.8/site-packages/langchain/chat_models/base.py", line 128, in agenerate
raise e
File "/Users/delip/opt/anaconda3/envs/lhenv/lib/python3.8/site-packages/langchain/chat_models/base.py", line 118, in agenerate
results = await asyncio.gather(
File "/Users/delip/opt/anaconda3/envs/lhenv/lib/python3.8/site-packages/langchain/chat_models/openai.py", line 322, in _agenerate
message_dicts, params = self._create_message_dicts(messages, stop)
File "/Users/delip/opt/anaconda3/envs/lhenv/lib/python3.8/site-packages/langchain/chat_models/openai.py", line 304, in _create_message_dicts
message_dicts = [_convert_message_to_dict(m) for m in messages]
File "/Users/delip/opt/anaconda3/envs/lhenv/lib/python3.8/site-packages/langchain/chat_models/openai.py", line 304, in <listcomp>
message_dicts = [_convert_message_to_dict(m) for m in messages]
File "/Users/delip/opt/anaconda3/envs/lhenv/lib/python3.8/site-packages/langchain/chat_models/openai.py", line 92, in _convert_message_to_dict
raise ValueError(f"Got unknown type {message}")
ValueError: Got unknown type ('content', 'you are a helpful bot')
```
langchain version
```
conda env export | grep langchain
- langchain==0.0.168
```
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
minimum viable code to reproduce:
```python
import time
import asyncio
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage
from keys import KEYS


async def async_generate(llm):
    resp = await llm.agenerate(
        [
            SystemMessage(content="you are a helpful bot"),
            HumanMessage(content="Hello, how are you?"),
        ]
    )
    print(resp)


async def generate_concurrently():
    llm = ChatOpenAI(
        temperature=0.9,
        openai_api_key=KEYS["openai.api_key"],
        openai_organization=KEYS["openai.organization"],
    )
    tasks = [async_generate(llm) for _ in range(3)]
    await asyncio.gather(*tasks)


async def main():
    start = time.perf_counter()
    await generate_concurrently()
    elapsed = time.perf_counter() - start
    print("\033[1m" + f"Concurrent executed in {elapsed:0.2f} seconds." + "\033[0m")


if __name__ == "__main__":
    asyncio.run(main())
```
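As a hedged note on the call above: `agenerate` on chat models expects a list of message *lists* (one inner list per prompt), so the failing call may simply need an extra level of nesting:

```python
resp = await llm.agenerate(
    [
        [
            SystemMessage(content="you are a helpful bot"),
            HumanMessage(content="Hello, how are you?"),
        ]
    ]
)
```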
### Expected behavior
Should produce 3 generation results. | ChatOpenAI.agenerate seems broken in 0.0.168? | https://api.github.com/repos/langchain-ai/langchain/issues/4643/comments | 4 | 2023-05-14T00:55:12Z | 2023-09-25T10:13:04Z | https://github.com/langchain-ai/langchain/issues/4643 | 1,708,780,067 | 4,643 |
[
"hwchase17",
"langchain"
]
| ### System Info
Google Colab
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.embeddings.openai import OpenAIEmbeddings
embeder = OpenAIEmbeddings(openai_api_key="redacted_api_key")
query_result = embeder.embed_query("show us the embeddings")
```
causes the following error
```
AuthenticationError Traceback (most recent call last)
[<ipython-input-30-45e396cd020f>](https://localhost:8080/#) in <cell line: 7>()
5
6 embeddings = OpenAIEmbeddings(openai_api_key="key")
----> 7 docsearch = Chroma.from_documents(texts,embeddings)
17 frames
[/usr/local/lib/python3.10/dist-packages/openai/api_requestor.py](https://localhost:8080/#) in _interpret_response_line(self, rbody, rcode, rheaders, stream)
685 stream_error = stream and "error" in resp.data
686 if stream_error or not 200 <= rcode < 300:
--> 687 raise self.handle_error_response(
688 rbody, rcode, resp.data, rheaders, stream_error=stream_error
689 )
AuthenticationError: <empty message>
```
### Expected behavior
Expected there to be no error message. I also checked that my API key is working | OpenAIEmbeddings has "AuthenticationError" | https://api.github.com/repos/langchain-ai/langchain/issues/4639/comments | 1 | 2023-05-13T23:44:10Z | 2023-05-16T20:08:06Z | https://github.com/langchain-ai/langchain/issues/4639 | 1,708,767,827 | 4,639 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
What is the best approach to create a rule-based chatbot with LangChain?
Context: I need to create a chatbot that collects some basic user info at the beginning (things like name, email, phone) and then continues providing general responses based on custom information.
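For illustration only, a minimal sketch of one possible approach: encode the "collect info first" rule in the prompt and keep history in memory. The prompt wording and model choice are assumptions, not a recommendation from the docs:

```python
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

template = (
    "You are a support assistant. Before answering anything else, ask for the user's "
    "name, email and phone number, one at a time, until you have all three. "
    "After that, answer questions using the company information you were given.\n\n"
    "Conversation so far:\n{history}\nUser: {input}\nAssistant:"
)

chain = ConversationChain(
    llm=ChatOpenAI(temperature=0),
    memory=ConversationBufferMemory(),
    prompt=PromptTemplate(input_variables=["history", "input"], template=template),
)
print(chain.predict(input="Hi there"))
```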
Thanks in advance.
### Suggestion:
_No response_ | Rule based chatbot using LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/4634/comments | 2 | 2023-05-13T17:03:41Z | 2023-09-10T16:18:10Z | https://github.com/langchain-ai/langchain/issues/4634 | 1,708,685,295 | 4,634 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Per the title, the request is to add support for streaming the output response, something like this:
```python
from langchain.llms.huggingface_text_gen_inference import HuggingFaceTextGenInference
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
llm = HuggingFaceTextGenInference(
    inference_server_url='http://localhost:8010',
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.01,
    stop_sequences=['</s>'],
    repetition_penalty=1.03,
    stream=True
)
print(llm("What is deep learning?", callbacks=[StreamingStdOutCallbackHandler()]))
```
### Motivation
Having streaming response output is useful in chat situations to reduce perceived latency for the user. The current implementation of the HuggingFaceTextGenInference class, added in [PR 4447](https://github.com/hwchase17/langchain/pull/4447), does not support streaming.
### Your contribution
Feature added in [PR #4633](https://github.com/hwchase17/langchain/pull/4633) | [feature] Add support for streaming response output to HuggingFaceTextGenInference LLM | https://api.github.com/repos/langchain-ai/langchain/issues/4631/comments | 0 | 2023-05-13T16:16:48Z | 2023-05-15T14:59:14Z | https://github.com/langchain-ai/langchain/issues/4631 | 1,708,671,913 | 4,631 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Components containing LLMs are hard to unit-test, because their output is not deterministic and they rely on an API which could fail.
So I propose a method to mock LLM output by simply recording and replaying the responses.
### Motivation
It could be helpful in a TDD-based workflow, in which we want to refactor without changing behavior.
### Your contribution
I've made an example in my personal project, which dumps output to JSON file.
The implementation:
```python
class MockOpenAI(OpenAI):
    from_file: Path = None
    to_file: Path = None
    records: List[LLMResult] = []

    # it overrides the generate() method
```
https://github.com/ofey404/WalkingShadows/blob/2cd39f6286193845ba3018bb2bcd42a7ff736fe9/src/backend/services/world/internal/llm/llm.py#L18-L21
The usage:
```python
MockOpenAI(
    # to_file=Path(__file__).parent / "test_world.json"
    from_file=Path(__file__).parent / "test_world.json"
)
```
https://github.com/ofey404/WalkingShadows/blob/2cd39f6286193845ba3018bb2bcd42a7ff736fe9/src/backend/services/world/api/world/test/test_world.py#L13C1-L17
If it seems appropriate, I'd like to contribute it to langchain, and I would refine the interface to make it more generic.
Is anyone interested in this? I'd like to find some support from maintainers.
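To make the idea concrete without depending on LangChain internals, here is a rough, framework-agnostic sketch of the record/replay behaviour. Names and the file format are placeholders, not the proposed final interface:

```python
import json
from pathlib import Path
from typing import Callable, List, Optional


class RecordReplayLLM:
    """Wrap any prompt -> text callable; record its outputs to JSON, or replay them."""

    def __init__(self, llm: Optional[Callable[[str], str]] = None,
                 to_file: Optional[Path] = None, from_file: Optional[Path] = None):
        self.llm = llm
        self.to_file = to_file
        self.from_file = from_file
        self._replay: List[str] = json.loads(from_file.read_text()) if from_file else []
        self._recorded: List[str] = []

    def __call__(self, prompt: str) -> str:
        if self.from_file:
            return self._replay.pop(0)  # deterministic replay, no API call
        text = self.llm(prompt)         # real call
        self._recorded.append(text)
        if self.to_file:
            self.to_file.write_text(json.dumps(self._recorded))
        return text
```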
| [feature] Mock LLM by record and replay responses | https://api.github.com/repos/langchain-ai/langchain/issues/4629/comments | 5 | 2023-05-13T15:46:06Z | 2023-11-14T16:24:55Z | https://github.com/langchain-ai/langchain/issues/4629 | 1,708,662,472 | 4,629 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The official gpt4all python bindings now exist in the `gpt4all` pip package. [Langchain](https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html) currently relies on the no-longer maintained pygpt4all package. Langchain should use the `gpt4all` python package with source found here: https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python
### Motivation
The source at https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python supports multiple OS's and platforms (other bindings do not). Nomic AI will be officially maintaining these bindings.
### Your contribution
I will be happy to review a pull request and ensure that future changes are PR'd upstream to langchains :) | GPT4All Python Bindings out of date [move to new multiplatform bindings] | https://api.github.com/repos/langchain-ai/langchain/issues/4628/comments | 2 | 2023-05-13T15:15:06Z | 2023-09-10T16:18:15Z | https://github.com/langchain-ai/langchain/issues/4628 | 1,708,650,720 | 4,628 |
[
"hwchase17",
"langchain"
]
| ### System Info
v.0.0.167
MacOS 13.3.1 (a) (22E772610a)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.llms import AzureOpenAI
from langchain.chains import RetrievalQAWithSourcesChain
from flask import Flask, request, jsonify, render_template
embeddings = OpenAIEmbeddings(model="text-search-davinci-query-001",chunk_size=1)
persist_directory = "db"
db = Chroma(persist_directory=persist_directory, embedding_function=embeddings)
retriever = db.as_retriever()
llm = AzureOpenAI(deployment_name="foo")
chain = RetrievalQAWithSourcesChain.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)
while True:
question = input(f"Ask a question: ")
answer = chain({"question": question}, return_only_outputs=True)
print(answer)
```
### Expected behavior
In 0.0.123 the above snippet works. In 0.0.167, I get the following:
```
swlib.py", line 119, in _check_dimensionality
raise InvalidDimensionException(
chromadb.errors.InvalidDimensionException: Dimensionality of (1536) does not match index dimensionality (12288)
``` | `chromadb.errors.InvalidDimensionException` introduced somewhere between v0.0.123 and 0.0.167 | https://api.github.com/repos/langchain-ai/langchain/issues/4627/comments | 3 | 2023-05-13T14:58:47Z | 2023-10-16T16:08:04Z | https://github.com/langchain-ai/langchain/issues/4627 | 1,708,641,207 | 4,627 |
[
"hwchase17",
"langchain"
]
| ### Feature request
There is a need for graph databases to be integrated into langchain. NetworkX isn't suitable as a scalable graph database to query, particularly with tens of thousands or more nodes and edges. This integration is necessary for graph databases to compete with vector databases for information extraction within langchain.
There is already a [medium article](https://towardsdatascience.com/integrating-neo4j-into-the-langchain-ecosystem-df0e988344d2) and [GitHub repo](https://github.com/tomasonjo/langchain2neo4j) describing one way in which this can be implemented, but it would be ideal if something like this were integrated into langchain itself. That implementation also offers Neo4j-backed embeddings as an option, which should be implemented as well.
### Motivation
The [Graph Index Creator](https://python.langchain.com/en/latest/modules/chains/index_examples/graph_qa.html?highlight=GraphIndexCreator) and other small graph utilities within LangChain use NetworkX, which doesn't scale in production to full-blown knowledge graphs the size of today's vector-database indexes. I have a particular need to use a graph database in production along with langchain for a work project.
### Your contribution
Yes, I am willing to contribute. I haven't contributed to LangChain directly before, but I have become familiar with the source code by investigating it. I would love to collaborate on the kind of framework/interface we would need to give graph indexes a scope similar to the vector database indexes. | Integrate Neo4j as a Graph Index, Vector Index, and as tools in the ecosystem | https://api.github.com/repos/langchain-ai/langchain/issues/4625/comments | 10 | 2023-05-13T13:47:57Z | 2023-06-12T14:29:19Z | https://github.com/langchain-ai/langchain/issues/4625 | 1,708,610,568 | 4,625
[
"hwchase17",
"langchain"
]
| ### System Info
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
[<ipython-input-54-a735363693fb>](https://localhost:8080/#) in <cell line: 2>()
1 # sql chain
----> 2 db_chain = SQLDatabaseChain.from_llm(llm, db,
3 return_intermediate_steps=False, # returns query and steps
4 verbose=True, use_query_checker=True, # self-correcting small mistakes
5 top_k=3 # limit the number of rows returned
1 frames
/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for SQLDatabaseChain
use_query_checker
extra fields not permitted (type=value_error.extra)
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from __future__ import annotations
import warnings
from typing import Any, Dict, List, Optional
from pydantic import Extra, Field, root_validator
from langchain.base_language import BaseLanguageModel
from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.chains.sql_database.prompt import DECIDER_PROMPT, PROMPT, SQL_PROMPTS
from langchain.prompts.base import BasePromptTemplate
from langchain.prompts.prompt import PromptTemplate
from langchain.sql_database import SQLDatabase
from langchain.tools.sql_database.prompt import QUERY_CHECKER
# initialize database
llm = OpenAI(temperature=0)
db = SQLDatabase.from_uri(
    "sqlite:////content/drive/My Drive/09PHD/sql-murder-mystery.db",
    sample_rows_in_table_info=1,  # examples of rows from each table, consumes tokens
    # custom_table_info=custom_table_info  # we can define custom table info which will override the default sample_rows_in_table_info parameter
)

# sql chain
db_chain = SQLDatabaseChain.from_llm(
    llm, db,
    return_intermediate_steps=False,  # returns query and steps
    verbose=True,
    # use_query_checker=True,  # self-correcting small mistakes NOT WORKING
    top_k=3,  # limit the number of rows returned
)
### Expected behavior
The use_query_checker=True parameter in SQLDatabaseChain spits out an error. | use_query_checker in SQLDatabaseChain not working | https://api.github.com/repos/langchain-ai/langchain/issues/4624/comments | 4 | 2023-05-13T12:52:25Z | 2023-09-19T16:11:02Z | https://github.com/langchain-ai/langchain/issues/4624 | 1,708,593,108 | 4,624 |
[
"hwchase17",
"langchain"
]
| ### System Info
I was trying out the langchain arxiv chain and I got the cannot parse LLM error.
Here is some additional info that might help.
```
> Entering new AgentExecutor chain...
I need to search for papers related to AI in the oil and gas industry.
Action: Arxiv
Action Input: "AI in oil and gas industry"
Observation: Published: 2023-04-27
Title: Industrial Engineering with Large Language Models: A case study of ChatGPT's performance on Oil & Gas problems
Authors: Oluwatosin Ogundare, Srinath Madasu, Nathanial Wiggins
Summary: Large Language Models (LLMs) have shown great potential in solving complex
problems in various fields, including oil and gas engineering and other
industrial engineering disciplines like factory automation, PLC programming
etc. However, automatic identification of strong and weak solutions to
fundamental physics equations governing several industrial processes remain a
challenging task. This paper identifies the limitation of current LLM
approaches, particularly ChatGPT in selected practical problems native to oil
and gas engineering but not exclusively. The performance of ChatGPT in solving
complex problems in oil and gas engineering is discussed and the areas where
LLMs are most effective are presented.
Published: 2022-02-23
Title: Cybersecurity Challenges in the Offshore Oil and Gas Industry: An Industrial Cyber-Physical Systems (ICPS) Perspective
Authors: Abubakar Sadiq Mohammed, Philipp Reinecke, Pete Burnap, Omer Rana, Eirini Anthi
Summary: The offshore oil and gas industry has recently been going through a
digitalisation drive, with use of `smart' equipment using technologies like the
Industrial Internet of Things (IIoT) and Industrial Cyber-Physical Systems
(ICPS). There has also been a corresponding increase in cyber attacks targeted
at oil and gas companies. Oil production offshore is usually in remote
locations, requiring remote access and control. This is achieved by integrating
ICPS, Supervisory, Control and Data Acquisition (SCADA) systems, and IIoT
technologies. A successful cyber attack against an oil and gas offshore asset
could have a devastating impact on the environment, marine ecosystem and safety
of personnel. Any disruption to the world's supply of oil and gas (O\&G) can
also have an effect on oil prices and in turn, the global economy. This makes
it important to secure the industry against cyber threats. We describe the
potential cyberattack surface within the oil and gas industry, discussing
emerging trends in the offshore sub-sector, and provide a timeline of known
cyberattacks. We also present a case study of a subsea control system
architecture typically used in offshore oil and gas operations and highlight
potential vulnerabilities affecting the components of the system. This study is
the first to provide a detailed analysis on the attack vectors in a subsea
control system and is crucial to understanding key vulnerabilities, primarily
to implement efficient mitigation methods that safeguard the safety of
personnel and the environment when using such systems.
Published: 2017-05-11
Title: Cloud-based Fault Detection and Classification for Oil & Gas Industry
Authors: Athar Khodabakhsh, Ismail Ari, Mustafa Bakir
Summary: Oil & Gas industry relies on automated, mission-critical equipment and
complex systems built upon their interaction and cooperation. To assure
continuous operation and avoid any supervision, architects embed Distributed
Control Systems (DCS), a.k.a. Supervisory Control and Data Acquisition (SCADA)
systems, on top of their equipment to generate data, monitor state and make
critical online & offline decisions.
In this paper, we propose a new Lambda architecture for oil & gas industry
for unified data and analytical processing on data received from DCS, discuss
cloud integration issues and share our experiences with the implementation of
sensor fault-detection and classification modules inside the proposed
architecture.
Thought:I have found three papers related to AI in the oil and gas industry, but I need to narrow down my search to find the best ones.
Action: Arxiv
Action Input: "Best papers on AI in oil and gas industry"
Observation: Published: 2023-04-27
Title: Industrial Engineering with Large Language Models: A case study of ChatGPT's performance on Oil & Gas problems
Authors: Oluwatosin Ogundare, Srinath Madasu, Nathanial Wiggins
Summary: Large Language Models (LLMs) have shown great potential in solving complex
problems in various fields, including oil and gas engineering and other
industrial engineering disciplines like factory automation, PLC programming
etc. However, automatic identification of strong and weak solutions to
fundamental physics equations governing several industrial processes remain a
challenging task. This paper identifies the limitation of current LLM
approaches, particularly ChatGPT in selected practical problems native to oil
and gas engineering but not exclusively. The performance of ChatGPT in solving
complex problems in oil and gas engineering is discussed and the areas where
LLMs are most effective are presented.
Published: 2017-05-11
Title: Cloud-based Fault Detection and Classification for Oil & Gas Industry
Authors: Athar Khodabakhsh, Ismail Ari, Mustafa Bakir
Summary: Oil & Gas industry relies on automated, mission-critical equipment and
complex systems built upon their interaction and cooperation. To assure
continuous operation and avoid any supervision, architects embed Distributed
Control Systems (DCS), a.k.a. Supervisory Control and Data Acquisition (SCADA)
systems, on top of their equipment to generate data, monitor state and make
critical online & offline decisions.
In this paper, we propose a new Lambda architecture for oil & gas industry
for unified data and analytical processing on data received from DCS, discuss
cloud integration issues and share our experiences with the implementation of
sensor fault-detection and classification modules inside the proposed
architecture.
Published: 2019-02-26
Title: Intelligent Internet of Things (IoT) Node Demonstrator for Device Monitoring and Control in the Oil and Gas Sector
Authors: Stephen Ugwuanyi, James Irvine
Summary: Internet of Things (IoT) is the new industrial slogan for connecting
intelligent and unintelligent devices to the web. The problem of security of
data transfer, interoperability of different proposed methodologies, the
ubiquity of Wi-Fi and the development of low power consuming MCUs has broadened
the search for the best alternative technology for IoT in the oil and gas
sector. This paper focus on the communication method for IoT devices to
determine the level of functionality and the efficiency of interfacing the new
MOD-WIFI-ESP8266-DEV Wi-Fi unit based on the IEEE 802.11 standard with MSP430
by Texas Instrument. The system controls LEDs and monitors Temperature/Humidity
sensor (DHT11) using Android application and web service. The system presents
in three-layered structure an ecosystem of lightweight, small size, reduced
cost and low power IoT system. It is expected that industries/users of this
system would be able to control, monitor, and analyse data generated by the web
of connected devices.
Thought:
---------------------------------------------------------------------------
OutputParserException Traceback (most recent call last)
[<ipython-input-4-ef16113c8f19>](https://localhost:8080/#) in <cell line: 1>()
----> 1 agent_chain.run(
2 "What are some of the best papers on AI in oil an gas industry??",
3 )
7 frames
[/usr/local/lib/python3.10/dist-packages/langchain/agents/mrkl/output_parser.py](https://localhost:8080/#) in parse(self, text)
24 match = re.search(regex, text, re.DOTALL)
25 if not match:
---> 26 raise OutputParserException(f"Could not parse LLM output: `{text}`")
27 action = match.group(1).strip()
28 action_input = match.group(2)
OutputParserException: Could not parse LLM output: `Based on the summaries, the best papers on AI in the oil and gas industry are "Industrial Engineering with Large Language Models: A case study of ChatGPT's performance on Oil & Gas problems" and "Cloud-based Fault Detection and Classification for Oil & Gas Industry".`
```
### Who can help?
@hwchase17 , @eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Input :
agent_chain.run(
    "What are some of the best papers on AI in oil an gas industry??",
)
### Expected behavior
A proper answer as provided by an LLM. | Arxiv chain : cannot parse output | https://api.github.com/repos/langchain-ai/langchain/issues/4622/comments | 1 | 2023-05-13T11:43:21Z | 2023-09-10T16:18:20Z | https://github.com/langchain-ai/langchain/issues/4622 | 1,708,571,207 | 4,622 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi. I have used an integration of langchain with Pinecone, as well as ChromaDB. My question is whether you can recommend any alternative vector database that is free?
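For what it's worth, a fully local (and free) option is FAISS with an open-source embedding model; a minimal sketch, where the model choice is just an example:

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = FAISS.from_texts(["hello world", "goodbye world"], embeddings)
print(db.similarity_search("hi", k=1))
```

This keeps both the index and the embedding model on your own machine, so there is no hosted-service cost.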
### Suggestion:
_No response_ | Issue: Free Vector Database? | https://api.github.com/repos/langchain-ai/langchain/issues/4621/comments | 4 | 2023-05-13T10:03:49Z | 2023-09-19T16:11:06Z | https://github.com/langchain-ai/langchain/issues/4621 | 1,708,544,666 | 4,621 |
[
"hwchase17",
"langchain"
]
Knowing that Pandas and Spark cannot compare to the speed of Polars, can you please create a Polars DataFrame agent? It is 15x faster than Pandas. | Polars Dataframe Agent Needed | https://api.github.com/repos/langchain-ai/langchain/issues/4620/comments | 3 | 2023-05-13T08:46:09Z | 2024-03-08T12:38:36Z | https://github.com/langchain-ai/langchain/issues/4620 | 1,708,521,121 | 4,620
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.167
python=3.10.10
system: Windows
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# !pip install langchain==0.0.167
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

db = SQLDatabase.from_uri("mysql+pymysql://user:pass@some_mysql_db_address/db_name")
llm = OpenAI(temperature=0, verbose=True)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
db_chain.run("How many employees are there?")
```
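As a hedged note: if the installed version actually predates `from_llm` (which the AttributeError suggests, e.g. an older langchain being picked up from another environment), older releases construct the chain directly; a sketch of that older style:

```python
# older construction style, before SQLDatabaseChain.from_llm existed
db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)
db_chain.run("How many employees are there?")
```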
### Expected behavior
Should not throw AttributeError: type object 'SQLDatabaseChain' has no attribute 'from_llm' | SQLDatabaseChain has no attribute from_llm | https://api.github.com/repos/langchain-ai/langchain/issues/4618/comments | 6 | 2023-05-13T07:40:59Z | 2024-03-26T09:21:00Z | https://github.com/langchain-ai/langchain/issues/4618 | 1,708,504,243 | 4,618 |
[
"hwchase17",
"langchain"
]
| ### Feature request
It would be great if LangChain could support more HuggingFace embedding models. Prompt techniques don't work very well with currently available sentence transformer models. Open-source-powered technology could benefit from the adoption of updated models like Cerebras-GPT and Dolly.
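For reference, models like Dolly can already be loaded through the generic `HuggingFacePipeline` wrapper; a hedged sketch where the model id and kwargs are just examples:

```python
from langchain.llms import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="databricks/dolly-v2-3b",   # example model choice, not a recommendation
    task="text-generation",
    model_kwargs={"temperature": 0, "max_length": 256},
)
print(llm("What is the capital of France?"))
```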
### Motivation
While creating a QA model with HuggingFace embeddings and models, I found that its performance could be better with newer models like Cerebras-GPT and Dolly.
### Your contribution
Yes, if someone guides me. | Support for new Hugging Face models like Cerebras-GPT, Dolly and others. | https://api.github.com/repos/langchain-ai/langchain/issues/4617/comments | 3 | 2023-05-13T06:48:25Z | 2023-09-15T16:13:37Z | https://github.com/langchain-ai/langchain/issues/4617 | 1,708,471,497 | 4,617 |
[
"hwchase17",
"langchain"
]
| Hello 👋
I run a security community that finds and fixes vulnerabilities in OSS. A researcher (@r3pwnx) has found a potential issue, which I would be eager to share with you.
Could you add a `SECURITY.md` file with an e-mail address for me to send further details to? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) a security policy to ensure issues are responsibly disclosed, and it would help direct researchers in the future.
Looking forward to hearing from you 👍
(cc @huntr-helper) | Add a security policy | https://api.github.com/repos/langchain-ai/langchain/issues/4614/comments | 1 | 2023-05-13T04:20:13Z | 2023-09-10T16:18:36Z | https://github.com/langchain-ai/langchain/issues/4614 | 1,708,426,607 | 4,614 |
[
"hwchase17",
"langchain"
]
| ### System Info
def _split_list_of_docs(
    docs: List[Document], length_func: Callable, token_max: int, **kwargs: Any
) -> List[List[Document]]:
    new_result_doc_list = []
    _sub_result_docs = []
    for doc in docs:
        _sub_result_docs.append(doc)
        _num_tokens = length_func(_sub_result_docs, **kwargs)
        if _num_tokens > token_max:
            if len(_sub_result_docs) == 1:
                raise ValueError(
                    "A single document was longer than the context length,"
                    " we cannot handle this."
                )
            if len(_sub_result_docs) == 2:
                raise ValueError(
                    "A single document was so long it could not be combined "
                    "with another document, we cannot handle this."
                )
            new_result_doc_list.append(_sub_result_docs[:-1])
            _sub_result_docs = _sub_result_docs[-1:]
    new_result_doc_list.append(_sub_result_docs)
    return new_result_doc_list
I encountered an issue with the following error message: "A single document was so long it could not be combined with another document, we cannot handle this." I suspect this could be a bug. The error occurs when the combined length of the summaries of two docs exceeds the token_max limit. In that case, I believe the two docs should be summarized separately and then merged. Could you provide a callback function allowing users to handle the logic of the _split_list_of_docs function themselves?
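To make the proposal concrete, a rough sketch of the alternative grouping behaviour described above (illustrative only, not the current LangChain implementation):

```python
def split_list_of_docs_permissive(docs, length_func, token_max, **kwargs):
    """Close the current group and start a new one instead of raising,
    so over-long pairs get summarized separately and merged later."""
    groups, current = [], []
    for doc in docs:
        candidate = current + [doc]
        if current and length_func(candidate, **kwargs) > token_max:
            groups.append(current)   # summarize this group on its own
            current = [doc]          # start a new group instead of raising
        else:
            current = candidate
    if current:
        groups.append(current)
    return groups
```

A single document longer than token_max would still need its own handling, but pairs that merely don't fit together are no longer a hard error.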
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://github.com/hwchase17/langchain/blob/01531cb16d09b9290fc091434b0c69cb91a8f500/langchain/chains/combine_documents/map_reduce.py#L22
### Expected behavior
I believe that the two docs should be summarized separately and then merged. Could you provide a callback function allowing users to handle the logic of the _split_list_of_docs function by themselves? | map_reduce._split_list_of_docs has bugs | https://api.github.com/repos/langchain-ai/langchain/issues/4613/comments | 7 | 2023-05-13T04:04:27Z | 2023-10-12T16:09:49Z | https://github.com/langchain-ai/langchain/issues/4613 | 1,708,422,949 | 4,613 |
[
"hwchase17",
"langchain"
]
|
How do I add memory to RetrievalQA.from_chain_type? Or, how do I add a custom prompt to ConversationalRetrievalChain?
For the past 2 weeks I've been trying to make a chatbot that can chat over documents (so not just semantic search/QA, but with memory) and also with a custom prompt. I've tried every combination of all the chains, and so far the closest I've gotten is ConversationalRetrievalChain, but without custom prompts, and RetrievalQA.from_chain_type, but without memory.
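For reference, the combination that seems closest to this (hedged, since the exact kwargs vary by version) is `ConversationalRetrievalChain.from_llm` with a `memory` object plus `combine_docs_chain_kwargs` for the answer prompt; a sketch assuming an existing `vectorstore`:

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

qa_prompt = PromptTemplate(
    input_variables=["context", "question"],
    template="Answer from the context below.\n\n{context}\n\nQuestion: {question}\nAnswer:",
)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),       # assumes an existing vectorstore
    memory=memory,
    combine_docs_chain_kwargs={"prompt": qa_prompt},  # custom prompt for the combine step
)
print(chain({"question": "What does the document say about X?"})["answer"])
```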
| How do i add memory to RetrievalQA.from_chain_type? or, how do I add a custom prompt to ConversationalRetrievalChain? | https://api.github.com/repos/langchain-ai/langchain/issues/4608/comments | 21 | 2023-05-13T02:41:24Z | 2024-06-07T00:21:07Z | https://github.com/langchain-ai/langchain/issues/4608 | 1,708,402,102 | 4,608 |
[
"hwchase17",
"langchain"
]
| https://github.com/hwchase17/langchain/blob/ed0d557ede8776921cc3c5ca1f3aef81d3d0c7b5/langchain/chat_models/google_palm.py#L65
Fix: `if author == "ai" or author == "1":` seems to do the trick.
Happy to submit a patch if y'all agree!
The [Google Palm Chat API](https://developers.generativeai.google/tutorials/chat_quickstart#conversation_history) returns a "1" for the AI response (and a "0" for the human).
| GooglePalm `author` is returned as "1" but code is expecting "ai" | https://api.github.com/repos/langchain-ai/langchain/issues/4606/comments | 3 | 2023-05-12T23:48:59Z | 2023-09-15T16:13:43Z | https://github.com/langchain-ai/langchain/issues/4606 | 1,708,344,927 | 4,606 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain ver 0.0.167, MacBook Pro 2018, Mac OS Ventura.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1.) Download a model.
2.) Download the tokens file.
3.) Run code.
from langchain.llms.rwkv import RWKV

# Test the model
def generate_prompt(instruction, input=None):
    if input:
        return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
# Instruction:
{instruction}
# Input:
{input}
# Response:
"""
    else:
        return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
# Instruction:
{instruction}
# Response:
"""

model = RWKV(model="~/Downloads/Q8_0RWKV.bin", strategy="cpu 8bit", tokens_path="./rwkv.tokens")
response = model(generate_prompt("Once upon a time, "))
### Expected behavior
I expect the sample to produce some text.
What I get is an error. It appears that the rwkv library is not installed, but it is...
Traceback (most recent call last):
File "/Users/John/Documents/Projects/langchainstufff/rwkv.py", line 29, in <module>
model = RWKV(model="~/Downloads/Q8_0RWKV.bin", strategy="cpu 8bit", tokens_path="./rwkv.tokens")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
vectorstore: VectorStore,
pydantic.error_wrappers.ValidationError: 1 validation error for RWKV
__root__ -> __root__
Could not import rwkv python package. Please install it with `pip install rwkv`. (type=value_error)
| RWKV | https://api.github.com/repos/langchain-ai/langchain/issues/4604/comments | 4 | 2023-05-12T22:41:16Z | 2023-10-09T16:07:52Z | https://github.com/langchain-ai/langchain/issues/4604 | 1,708,300,793 | 4,604 |
[
"hwchase17",
"langchain"
]
| Hi,
How can I remove the escape sequences that are used for coloring in the LangChain output? I want to parse the output, and these escape sequences are really a problem.
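For illustration, a small sketch that strips the ANSI color codes before parsing; the regex targets the standard `\x1b[...m` sequences that colored terminal output uses:

```python
import re

ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text: str) -> str:
    """Remove ANSI color escape sequences from verbose chain output."""
    return ANSI_ESCAPE.sub("", text)

print(strip_ansi("\x1b[32;1mSELECT * FROM employees;\x1b[0m"))  # -> SELECT * FROM employees;
```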
thanks | how to remove all coloring escape sequences in sqlquery and result output | https://api.github.com/repos/langchain-ai/langchain/issues/4600/comments | 4 | 2023-05-12T20:38:36Z | 2023-09-04T07:05:03Z | https://github.com/langchain-ai/langchain/issues/4600 | 1,708,182,875 | 4,600 |
[
"hwchase17",
"langchain"
]
| ### System Info

### Who can help?
@vowelparrot
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

### Expected behavior

| Even use "print", the chatgpt is still "hallucinating"?? | https://api.github.com/repos/langchain-ai/langchain/issues/4599/comments | 2 | 2023-05-12T20:31:16Z | 2023-05-12T21:21:14Z | https://github.com/langchain-ai/langchain/issues/4599 | 1,708,173,986 | 4,599 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
When I try to perform a similarity_search on an OpenSearch index on AWS of size 50GB (approx 2 million documents, each with a vector), I sometimes get this error. It does not happen all the time: usually the first request succeeds, and subsequent calls immediately result in this error. Is this because langchain is doing a similarity search over a lot of vectors and OpenSearch ran out of memory?
```
embeddings = OpenAIEmbeddings()
docsearch = OpenSearchVectorSearch(index_name="xxxxxxxx", embedding_function=embeddings, opensearch_url=opensearch_url)
query = "Whats the xxxxxxx"
docs = docsearch.similarity_search(query, k=1, search_type = "approximate_search", vector_field="sentence_embedding")
```
TransportError Traceback (most recent call last)
Cell In[100], line 2
1 query = "Whats the warranty on labor and materials for the work performed by V.A.M.P. L.L.C."
----> 2 docs = docsearch.similarity_search(query, k=1, search_type = "approximate_search", vector_field="sentence_embedding")
3 docs
File [~/training/semantic-search-elasticsearch-openai-langchain/vevn/lib/python3.9/site-packages/langchain/vectorstores/opensearch_vector_search.py:426](https://file+.vscode-resource.vscode-cdn.net/Users/AXG143/ananth/training/semantic-search-elasticsearch-openai-langchain/~/ananth/training/semantic-search-elasticsearch-openai-langchain/vevn/lib/python3.9/site-packages/langchain/vectorstores/opensearch_vector_search.py:426), in OpenSearchVectorSearch.similarity_search(self, query, k, **kwargs)
423 else:
424 raise ValueError("Invalid `search_type` provided as an argument")
--> 426 response = self.client.search(index=self.index_name, body=search_query)
427 hits = [hit["_source"] for hit in response["hits"]["hits"][:k]]
428 documents = [
429 Document(
430 page_content=hit[text_field],
(...)
435 for hit in hits
436 ]
File [~/training/semantic-search-elasticsearch-openai-langchain/vevn/lib/python3.9/site-packages/opensearchpy/client/utils.py:178](https://file+.vscode-resource.vscode-cdn.net/training/semantic-search-elasticsearch-openai-langchain/~/training/semantic-search-elasticsearch-openai-langchain/vevn/lib/python3.9/site-packages/opensearchpy/client/utils.py:178), in query_params.._wrapper.._wrapped(*args, **kwargs)
176 if p in kwargs:
177 params[p] = kwargs.pop(p)
--> 178 return func(*args, params=params, headers=headers, **kwargs)
File [~/training/semantic-search-elasticsearch-openai-langchain/vevn/lib/python3.9/site-packages/opensearchpy/client/__init__.py:1551](https://file+.vscode-resource.vscode-cdn.net/training/semantic-search-elasticsearch-openai-langchain/~/training/semantic-search-elasticsearch-openai-langchain/vevn/lib/python3.9/site-packages/opensearchpy/client/__init__.py:1551), in OpenSearch.search(self, body, index, params, headers)
...
--> 301 raise HTTP_EXCEPTIONS.get(status_code, TransportError)(
302 status_code, error_message, additional_info
303 )
TransportError: TransportError(500, 'search_phase_execution_exception')
### Suggestion:
Expected response - langchain Opensearch similarity_search should work consistently on multiple calls. | langchain OpenSearchVectorSearch similarity_search error | https://api.github.com/repos/langchain-ai/langchain/issues/4597/comments | 3 | 2023-05-12T19:53:04Z | 2023-11-21T16:07:20Z | https://github.com/langchain-ai/langchain/issues/4597 | 1,708,132,505 | 4,597 |
[
"hwchase17",
"langchain"
]
| ### System Info
Mac OSX10.16
python 3.9
langchain 0.0.166
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am following the example for the SelfQueryRetriever (https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query_retriever.html), but I am using Chroma.
When I try to create the SelfQueryRetriever with from_llm:
SelfQueryRetriever.from_llm(
    vectorstore=db_chroma,
    llm=llm,
    document_contents=document_content_info,
    metadata_field_info=metadata_field_info
)
or I try to create the chain with
load_query_constructor_chain(
    llm=llm,
    document_contents=document_content_info,
    attribute_info=metadata_field_info,
    allowed_comparators=ChromaTranslator.allowed_comparators,
    allowed_operators=ChromaTranslator.allowed_operators,
)
I receive the same error:
def get_parser(
126 allowed_comparators: Optional[Sequence[Comparator]] = None,
127 allowed_operators: Optional[Sequence[Operator]] = None,
128 ) -> Lark:
--> 129 transformer = QueryTransformer(
130 allowed_comparators=allowed_comparators, allowed_operators=allowed_operators
131 )
132 return Lark(GRAMMAR, parser="lalr", transformer=transformer, start="program")
TypeError: 'NoneType' object is not callable
I tried to create a QueryTransformer as follows:
QueryTransformer(allowed_comparators=ChromaTranslator.allowed_comparators,
                 allowed_operators=ChromaTranslator.allowed_operators)
Same error.
### Expected behavior
I would expect to be able to create the SelfQueryRetriever and retrieve documents from it. | Problem with SelfQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/4587/comments | 5 | 2023-05-12T16:54:43Z | 2023-05-12T17:32:26Z | https://github.com/langchain-ai/langchain/issues/4587 | 1,707,934,037 | 4,587
[
"hwchase17",
"langchain"
]
| ### System Info
0.0.166
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
```python
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.callbacks import get_openai_callback

question = "What is the answer of the meaning of life?"
prompt = PromptTemplate(
input_variables=["input"],
template="{input}",
)
llm = ChatOpenAI(temperature=0.7, max_tokens=2000, streaming=True)
chain = LLMChain(llm=llm, prompt=prompt)
with get_openai_callback() as cb:
    print(chain.run(question))
    print("\n\n")
    print(cb)
```
result
```
As an AI language model, I do not have a personal belief system or opinion, and therefore, I do not have an answer to this question. The meaning of life is a philosophical and subjective topic that varies from person to person. It is up to individuals to find their own purpose and meaning in life.
Tokens Used: 0
Prompt Tokens: 0
Completion Tokens: 0
Successful Requests: 1
Total Cost (USD): $0.0
```
When streaming=False is set, it works.
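As a stop-gap, token usage can be estimated client-side with tiktoken when streaming; a rough sketch (the count ignores the few wrapper tokens chat messages add):

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
completion = chain.run(question)
print("prompt tokens ~", len(enc.encode(question)))
print("completion tokens ~", len(enc.encode(completion)))
```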
### Expected behavior
It should return token usage info whether streaming is True or False. | get_openai_callback doesn't work with streaming = True | https://api.github.com/repos/langchain-ai/langchain/issues/4583/comments | 12 | 2023-05-12T15:07:29Z | 2024-07-30T10:15:36Z | https://github.com/langchain-ai/langchain/issues/4583 | 1,707,797,834 | 4,583
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The PlanAndExecute agent raises errors when no question is asked (e.g. a greeting interaction).
Is there any chat implementation where a simple LLM chat interaction is available among the available tools?
Thank you in advance.
### Suggestion:
_No response_ | Issue: PlanAndExecute agent fails in chat mode when a simple chat interaction is required. | https://api.github.com/repos/langchain-ai/langchain/issues/4582/comments | 4 | 2023-05-12T14:25:16Z | 2023-09-19T16:11:11Z | https://github.com/langchain-ai/langchain/issues/4582 | 1,707,730,929 | 4,582 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am having issues with the flow of a conversation and the chat memory. When deployed in a Flask app and queried via Netlify, the memory is not maintained. Any suggestions?
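For illustration, one pattern is to keep one memory object per session id inside the Flask process (a sketch; in a truly serverless setup the history has to live in external storage instead, since process memory is not shared across instances or cold starts):

```python
from langchain.memory import ConversationBufferMemory

_memories = {}

def get_memory(session_id: str) -> ConversationBufferMemory:
    """Return the (process-local) memory for this session, creating it on first use."""
    if session_id not in _memories:
        _memories[session_id] = ConversationBufferMemory(
            memory_key="chat_history", return_messages=True
        )
    return _memories[session_id]
```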
### Suggestion:
_No response_ | Issue:Problems with serverless architecture and ConversationBufferMemory | https://api.github.com/repos/langchain-ai/langchain/issues/4581/comments | 4 | 2023-05-12T13:41:33Z | 2023-11-01T16:07:30Z | https://github.com/langchain-ai/langchain/issues/4581 | 1,707,660,358 | 4,581 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version == 0.0.166
Embeddings = OpenAIEmbeddings - model: text-embedding-ada-002 version 2
LLM = AzureOpenAI
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. Set up azure openai embeddings by providing key, version etc..
2. Load a document with a loader
3. Set up a text splitter so you get more than 2 documents
4. add them to chromadb with `.add_documents(List<Document>)`
This is some example code:
```py
pdf = PyPDFLoader(url)
documents = pdf.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
vectordb.add_documents(texts)
vectordb.persist()
```
### Expected behavior
Embeddings should be added to the database; instead it returns the error `openai.error.InvalidRequestError: Too many inputs. The max number of inputs is 1. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.`
This is because Microsoft only allows one embedding at a time while the script tries to add the documents all at once.
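A workaround I am considering (untested, and the exact behaviour of `chunk_size` against Azure deployments is my assumption) is to force one input per embeddings request:
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# chunk_size controls how many texts are batched into one embeddings request;
# 1 stays within the Azure limit until batched inputs are supported
embeddings = OpenAIEmbeddings(chunk_size=1)
vectordb = Chroma(persist_directory="db", embedding_function=embeddings)  # "db" is a placeholder path
vectordb.add_documents(texts)
```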
The following code is where the issue comes up (I think): https://github.com/hwchase17/langchain/blob/258c3198559da5844be3f78680f42b2930e5b64b/langchain/embeddings/openai.py#L205-L214
The input should be a 1-dimensional array, not a multi-dimensional one. | AzureOpenAI InvalidRequestError: Too many inputs. The max number of inputs is 1. | https://api.github.com/repos/langchain-ai/langchain/issues/4575/comments | 28 | 2023-05-12T12:38:50Z | 2024-02-28T03:59:37Z | https://github.com/langchain-ai/langchain/issues/4575 | 1,707,564,739 | 4,575
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The CHAT_CONVERSATIONAL_REACT_DESCRIPTION agent is able to use only 1 tool per turn. The same limitation does not apply to CONVERSATIONAL_REACT_DESCRIPTION.
Is that done on purpose? How can I fix the agent to allow more than 1 tool per turn?
Thank you.
### Suggestion:
_No response_ | Issue: CHAT_CONVERSATIONAL_REACT_DESCRIPTION only uses 1 tool per turn | https://api.github.com/repos/langchain-ai/langchain/issues/4574/comments | 4 | 2023-05-12T12:38:32Z | 2023-12-25T16:10:39Z | https://github.com/langchain-ai/langchain/issues/4574 | 1,707,564,297 | 4,574 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi!
I implemented a chatbot with gpt-4 and a docx file that is provided as context. If I ask questions related to this context, it returns relevant answers, but if I ask a question that is outside this context, it responds with 'Based on the provided context I cannot answer this question' or something like that.
How can I implement it in such a way that it uses the context for every question, but falls back to the model's own knowledge when it can't find a relevant answer in the provided context?
My AgentExecutor instance looks like this:
```
def _create_chat_agent(self):
self.llm = OpenAI(temperature=0, model_name="gpt-4", top_p=0.2, presence_penalty=0.4, frequency_penalty=0.2)
# Data Ingestion
word_loader = DirectoryLoader(DOCUMENTS_DIRECTORY, glob="*.docx")
documents = []
documents.extend(word_loader.load())
# Chunk and Embeddings
text_splitter = CharacterTextSplitter(chunk_size=768, chunk_overlap=200)
documents = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(documents, embeddings)
# Initialise Langchain - QA chain
qa = RetrievalQA.from_chain_type(llm=self.llm, chain_type="stuff", retriever=vectorstore.as_retriever())
tools = [
Tool(
name="...",
func=qa.run,
description="..."
),
]
system_msg = "You are a helpful assistant."
agent = ConversationalChatAgent.from_llm_and_tools(
llm=self.llm,
tools=tools,
system_message=system_msg
)
self.chat_agent = AgentExecutor.from_agent_and_tools(
agent=agent, tools=tools, verbose=True, memory=ConversationBufferMemory(memory_key="chat_history", return_messages=True)
)
```
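One idea I am considering inside `_create_chat_agent` (a sketch; the tool names, descriptions and fallback prompt are made up) is to register a second, general-purpose tool backed by the same LLM, so the agent can fall back to its own knowledge when retrieval has nothing relevant:
```python
from langchain import LLMChain, PromptTemplate
from langchain.agents import Tool

general_prompt = PromptTemplate(
    input_variables=["query"],
    template="Answer the question below from your own general knowledge:\n{query}",
)
general_chain = LLMChain(llm=self.llm, prompt=general_prompt)

tools = [
    Tool(
        name="Document QA",
        func=qa.run,
        description="answers questions covered by the loaded documents",
    ),
    Tool(
        name="General knowledge",
        func=general_chain.run,
        description="answers questions the documents do not cover",
    ),
]
```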
### Suggestion:
_No response_ | Issue: Not answering questions out of context using RetrievalQA Chain and ConversationalChatAgent | https://api.github.com/repos/langchain-ai/langchain/issues/4573/comments | 24 | 2023-05-12T12:08:52Z | 2024-08-08T03:46:58Z | https://github.com/langchain-ai/langchain/issues/4573 | 1,707,522,559 | 4,573 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have a Chroma store that contains 3 to 4 PDFs, and I need to search the database for documents filtered by metadata, e.g. filter={'source':'PDFname'}, so that it doesn't return different docs containing similar data.
The same thing works with similarity_search() without any problems:
```
chain = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0),
docsearch.as_retriever(),
memory=memory)
print(chain({'question':query}))
```
but I don't understand how to do the same when using filters with ConversationalRetrievalChain. I have tried
`docsearch.as_retriever(kwargs={'filter':{'source':'pdfname'}})`, but it doesn't seem to work.
I also saw something like
```
retriever = vector_store.as_retriever()
retriever.search_kwargs = {'k':1}
```
but it doesn't seem to recognise the .search_kwargs attribute.
Any help would be appreciated.
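For completeness, the variant I have not tried yet (a sketch; 'pdfname' is a placeholder for the actual source value) would pass the filter through `search_kwargs` when creating the retriever:
```python
retriever = docsearch.as_retriever(
    search_kwargs={"filter": {"source": "pdfname"}, "k": 4}
)
chain = ConversationalRetrievalChain.from_llm(
    OpenAI(temperature=0), retriever, memory=memory
)
print(chain({"question": query}))
```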
### Suggestion:
_No response_ | can't seem to add filters in ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/4572/comments | 6 | 2023-05-12T11:52:19Z | 2023-07-17T07:58:48Z | https://github.com/langchain-ai/langchain/issues/4572 | 1,707,488,023 | 4,572 |
[
"hwchase17",
"langchain"
]
| I tried incorporating the Confluence document loader in my code. It's throwing an error. Can anyone help me out? Attaching the screenshots and required information.
Code :
```
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="https://vs2001.atlassian.net/wiki/spaces/<space-key>/pages/<page-id>/<doc-name>",
username="<user-name>",
api_key="<api-key>"
)
documents = loader.load(space_key="<space-key>")
print(documents)
```
The username I use is the part of the email address before the @ symbol.
The api_key was generated in the Confluence settings.
Screenshot of the error:
<img width="1195" alt="Screenshot 2023-05-12 at 4 30 41 PM" src="https://github.com/hwchase17/langchain/assets/62723522/f1a83ce2-9632-418e-b272-954c7780696a">
Can anyone tell what am I doing wrong here? | Confluence Document Loader not working | https://api.github.com/repos/langchain-ai/langchain/issues/4571/comments | 3 | 2023-05-12T11:09:50Z | 2023-05-15T14:23:41Z | https://github.com/langchain-ai/langchain/issues/4571 | 1,707,431,483 | 4,571 |
[
"hwchase17",
"langchain"
]
| ### System Info
Mac
vs code
Python: 3.10.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Embed some text into Chroma
2. Query and run load_qa_chain with OpenAI
```python
docs = docsearch.similarity_search(query="some txt",k=2)
llm = OpenAI(
streaming=True,
callbacks=[StreamingStdOutCallbackHandler()],
temperature=0.1)
chain = load_qa_chain(llm=llm,chain_type="stuff",verbose=True)
result = chain.run(input_documents=docs,question=query,return_only_outputs=True)
```
3. The Chinese result is cut off at roughly 127-131 characters, while English answers finish the whole sentence.
example:
```
我们***是一家专注于*****机构,近些年来,我们的学员人数突破****,遍布全国***个城市,海外**个国家,这自然是我们家长对于****最好的认可。我们深知宝贝一开始有兴趣,后来因为各种的枯燥变得不愿意学了,因此,我们采用三方合作配合的模式,即家长
```
```
我们***是一家专注于*****机构,近些年来,我们的学员人数突破****,遍布全国***个城市,海外**个国家,这自然是我们家长对于***最好的认可。我们深知宝贝一开始有兴趣,后来因为各种的枯燥变得不愿意学了的顾虑,因此我们采用了一种科学的学习模式
```
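One suspicion I have (unconfirmed) is the `OpenAI` wrapper's default `max_tokens=256`, since Chinese text uses more tokens per character; raising the limit might avoid the cut-off:
```python
llm = OpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    temperature=0.1,
    max_tokens=1024,  # default is 256, which truncates longer Chinese answers
)
```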
### Expected behavior
I think this was posted while working on characters, looking forward to a fix. | When using embedding, the Chinese reply will be incomplete | https://api.github.com/repos/langchain-ai/langchain/issues/4569/comments | 4 | 2023-05-12T11:02:03Z | 2023-05-24T07:43:59Z | https://github.com/langchain-ai/langchain/issues/4569 | 1,707,420,231 | 4,569 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
### Describe the bug
When using the db.get_usable_table_names() function with a MS SQL database, it doesn't return any table names. However, when using the same function with SQLite3, it works as expected. Interestingly, the db.run() method works correctly, returning expected records for direct SQL queries like 'select * from Shops'.
### To Reproduce
`db = SQLDatabase.from_uri("mssql+pymssql://user:[email protected]:port/KK_ANA")`
- Call db.get_table_names(). The return value is an empty set. [return "set()"]
- Run a direct SQL query using db.run('select * from Shops'). It correctly returns the expected records.
Run the SQLDatabaseSequentialChain:
```python
llm = ChatOpenAI(temperature=0)
db_chain = SQLDatabaseSequentialChain.from_llm(llm, db, verbose=True)
db_chain.run('show me list of tables')
```
### Output

### Environment
- Langchain version: 0.0.165
- Python version: 3.10
- SQLAlchemy Version: 2.0.12 (problem also occurs with version 1.4.x)
- pymssql Version: 2.2.7
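A workaround I plan to try (assuming the tables live in the default `dbo` schema): pass the schema explicitly, since SQLAlchemy reflection on SQL Server often needs it:
```python
db = SQLDatabase.from_uri(
    "mssql+pymssql://user:[email protected]:port/KK_ANA",
    schema="dbo",  # assumption: tables are in the dbo schema
)
print(db.get_usable_table_names())
```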
### Suggestion:
_No response_ | Issue: db.get_usable_table_names() return nothing | https://api.github.com/repos/langchain-ai/langchain/issues/4565/comments | 5 | 2023-05-12T09:28:35Z | 2024-03-11T13:35:37Z | https://github.com/langchain-ai/langchain/issues/4565 | 1,707,286,145 | 4,565 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain = "^0.0.154"
Platform - macos
Python Version - python 3.10.4
### Who can help?
@eyurtsev There is no directory called .credentials in my home directory; that's why I'm getting this error. Is this intentional? Why not create this directory before opening token_path to write the token JSON?
Code reference = https://github.com/hwchase17/langchain/blob/master/langchain/document_loaders/youtube.py#L94
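A stop-gap that should avoid the error (my own workaround, not part of the loader) is to create the directory up front:
```python
from pathlib import Path

# make sure ~/.credentials exists before the loader tries to write token.json there
Path.home().joinpath(".credentials").mkdir(parents=True, exist_ok=True)
```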
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I was trying to use the official YouTube loader by following [this](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/youtube_transcript.html#youtube-loader-from-google-cloud) tutorial.
### Expected behavior
It should not throw this error | FileNotFoundError: [Errno 2] No such file or directory: '$HOME/.credentials/token.json' | https://api.github.com/repos/langchain-ai/langchain/issues/4564/comments | 1 | 2023-05-12T08:55:44Z | 2023-09-10T16:18:46Z | https://github.com/langchain-ai/langchain/issues/4564 | 1,707,232,005 | 4,564 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi, I am using ConversationalRetrievalChain with an agent, and the agent.run function is not returning source documents.
Is this by design, or is it a missing feature?
```
def llm_answer(query):
chat_history = []
result = qa({"question": query, "chat_history": chat_history})
print('result is')
print(result)
print('-----------------------------')
print(result['source_documents'][0])
print('-----------------------------')
#populateHistory(query, result)
return result
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), db.as_retriever(), return_source_documents=True)
class requestModel(BaseModel):
question: str
app = FastAPI()
tools = [
Tool.from_function(
func=llm_answer,
name = "Email knowledge base",
description="useful for when you need to answer questions from emails in knowledge base",
args_schema=requestModel
)
#more tools
]
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Define API endpoint for querying the model
@app.post('/answer')
async def answer(request: requestModel):
print('request received')
source_documents = []
# Get data from request
print("Query is " + request.question)
q_answer2 = agent.run(request.question)
print("answer is ")
print(q_answer2)
#construct a json object to return answer and sources to the client
reply = {'answer' : str(q_answer2), 'sources' : []}
# for x in q_answer2["source_documents"]:
# reply['sources'].append(x.metadata["source"])
return reply
```
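A workaround I am considering (illustrative sketch; the module-level `last_sources` list is my own addition) is to stash the source documents inside the tool function, since agent.run only returns the final answer string:
```python
last_sources = []

def llm_answer(query):
    result = qa({"question": query, "chat_history": []})
    last_sources.clear()
    last_sources.extend(result["source_documents"])
    return result["answer"]

# after agent.run(...), the documents the tool retrieved are available in last_sources
```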
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
use code sample provided
### Expected behavior
agent.run should provide both answers and source_documents | ConversationalRetrievalChain with Custom Agent is not returning source documents | https://api.github.com/repos/langchain-ai/langchain/issues/4562/comments | 10 | 2023-05-12T08:18:15Z | 2024-03-04T14:51:38Z | https://github.com/langchain-ai/langchain/issues/4562 | 1,707,179,648 | 4,562
[
"hwchase17",
"langchain"
]
| ### Feature request
TLDR:
I am working on using the `chat-conversational-react-description` agent with `RetrievalQA` as a tool to answer queries over a vector DB.
Issue:
If the question is asked in Japanese (the vectordb is in Japanese as well), the agent's initial `action_input` is completely nonsensical (the agent automatically translates it to English), which results in a wrong final answer.
Request:
(1) It would be helpful to be able to control the `action_input` so that agents do not rephrase the input query when using a vector DB, or to have prompt-level support for agents in different languages.
(2) It would also be helpful to have some way to see what knowledge the agent is using. Currently, I have to rely on passing the user query directly to the `RetrievalQA_chain` with `return_source_documents=True` to check.
Code for reference:
```python
retriever = RetrievalQA.from_chain_type(
llm=LLM,
chain_type="stuff",
retriever=db_retriever,
return_source_documents=False,
)
retriever_tool_description = """Use this tool when you need to answer specific or game related questions. This tool can also be used for follow up questions from the user.
"""
tools = [
Tool(
func=retriever.run, description=retriever_tool_description, name="Game Data DB"
),
]
memory = ConversationBufferWindowMemory(
memory_key="chat_history",
input_key="input",
output_key="output",
k=3,
return_messages=True,
)
conversational_agent = initialize_agent(
agent="chat-conversational-react-description",
tools=tools,
llm=LLM,
verbose=True,
max_iterations=2,
early_stopping_method="generate",
memory=memory,
return_intermediate_steps=True,
)
sys_msg = """Your role is to answer the game user's questions in a human-like manner"""
prompt = conversational_agent.agent.create_prompt(system_message=sys_msg, tools=tools)
conversational_agent.agent.llm_chain.prompt = prompt
conversational_agent(input_query)
```
Output:
The top JSON output is from calling the retriever directly on the user query.
The latter part is the output from running the agent.
<img width="1957" alt="image" src="https://github.com/hwchase17/langchain/assets/130352102/4c09e1c1-0493-4805-815b-b68871c6757e">
### Motivation
It's inconvenient not to be able to control what the agent's initial action_inputs are. Other languages could also greatly benefit from such support.
### Your contribution
I would like to hear from other people first and then make a PR. | Using agents with custom tools completely changes the input if question is asked in different language | https://api.github.com/repos/langchain-ai/langchain/issues/4561/comments | 12 | 2023-05-12T08:10:18Z | 2023-10-31T01:54:50Z | https://github.com/langchain-ai/langchain/issues/4561 | 1,707,169,022 | 4,561 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I ask OpenAI questions through LangChain, but LangChain merges my question with my chat history into a new question before sending it to OpenAI. Sometimes the new question differs quite a lot from my original one.
For example, I say to LangChain: hello
LangChain asks OpenAI: How can I assist you today?
So I sometimes get strange answers that seem unrelated to my original question. Why does LangChain do this, and how can I fix it?
ConversationalRetrievalChain(BaseConversationalRetrievalChain._call)

### Suggestion:
_No response_ | 我的提问被langchain修改了(langchain change my question) | https://api.github.com/repos/langchain-ai/langchain/issues/4555/comments | 9 | 2023-05-12T06:01:15Z | 2024-03-18T16:04:29Z | https://github.com/langchain-ai/langchain/issues/4555 | 1,706,981,762 | 4,555 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am using colab.

### Who can help?
@hwchase17
@vowelparrot
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

### Expected behavior

| Langchain python agent calculation is wrong!!! | https://api.github.com/repos/langchain-ai/langchain/issues/4551/comments | 4 | 2023-05-12T04:59:05Z | 2023-08-06T22:13:05Z | https://github.com/langchain-ai/langchain/issues/4551 | 1,706,926,023 | 4,551 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
When I use an agent with a gpt-3.5 LLM and a Google search tool, the AI's response is always in English, even though my input is in Chinese. Are there any ideas on how to ensure that the input and output languages are consistent?
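One idea I have not verified (the instruction wording below is made up) is to prepend a language constraint to the agent prompt via `agent_kwargs`:
```python
from langchain.agents import initialize_agent, AgentType

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    agent_kwargs={
        "prefix": (
            "Answer the following questions as best you can, and always write the "
            "Final Answer in the same language as the question. "
            "You have access to the following tools:"
        )
    },
    verbose=True,
)
```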
### Suggestion:
_No response_ | Any ideas on making input and output languages consistent? | https://api.github.com/repos/langchain-ai/langchain/issues/4550/comments | 12 | 2023-05-12T04:45:55Z | 2024-01-19T08:29:11Z | https://github.com/langchain-ai/langchain/issues/4550 | 1,706,917,102 | 4,550 |