issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number
---|---|---|---|---|---|---|---|---|---
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi, I am trying to stream the response from the LLM back to the client using a callback with a custom StreamHandler, but on_llm_new_token also receives the output of the rephrase_question step, even though the final response does not include the rephrased question. I don't want the rephrased question to be present in the streamed response.
The StreamHandler class is given below:
```python
class StreamHandler(BaseCallbackHandler):
    def __init__(self):
        self.text = ""

    def on_llm_new_token(self, token: str, **kwargs):
        old_text = self.text
        self.text += token
        # Calculate the new content since the last emission
        new_content = self.text[len(old_text):]
        socketio.emit("update_response", {"response": new_content})
```
The qa-chain is defined as below:
```python
qa_chain = ConversationalRetrievalChain.from_llm(
    llm=chat,
    retriever=MyVectorStoreRetriever(
        vectorstore=vectordb,
        search_type="similarity_score_threshold",
        search_kwargs={"score_threshold": SIMILARITY_THRESHOLD, "k": 1},
    ),
    return_source_documents=True,
    rephrase_question=False,
)

response = qa_chain(
    {
        "question": user_input,
        "chat_history": chat_history,
    },
    callbacks=[stream_handler],
)
```
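For reference, a workaround that is often suggested (sketch only — it assumes ConversationalRetrievalChain.from_llm accepts a condense_question_llm argument in the installed version) is to give the question-condensing step its own LLM without the streaming callback, so only the answer-generating LLM emits tokens to the handler:
```python
from langchain.chat_models import ChatOpenAI

streaming_llm = ChatOpenAI(streaming=True, callbacks=[stream_handler])  # produces the final answer
condense_llm = ChatOpenAI()  # rephrases the question; no streaming callback attached

qa_chain = ConversationalRetrievalChain.from_llm(
    llm=streaming_llm,
    condense_question_llm=condense_llm,
    retriever=retriever,  # same retriever as above
    return_source_documents=True,
    rephrase_question=False,
)
```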
### Suggestion:
_No response_ | Issue: Rephrased question is included in the on_llm_new_token method while streaming the response from the LLM | https://api.github.com/repos/langchain-ai/langchain/issues/14703/comments | 7 | 2023-12-14T08:47:45Z | 2024-06-30T16:03:41Z | https://github.com/langchain-ai/langchain/issues/14703 | 2,041,224,728 | 14,703 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
My code:
```python
only_recall_inputs = RunnableParallel({
    "question": itemgetter('question'),
    "history": ????????,
    "docs": itemgetter('question') | retriever,
})
```
It's just a simple chain. I want the "history" part to be [] or ''.
How can I do this?
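One possibility (a minimal sketch; it assumes you simply want a constant empty value for that key and that `retriever` is the same object as in the snippet above) is to use a lambda that ignores its input:
```python
from operator import itemgetter
from langchain_core.runnables import RunnableLambda, RunnableParallel

only_recall_inputs = RunnableParallel({
    "question": itemgetter("question"),
    "history": RunnableLambda(lambda _: []),  # always an empty history
    "docs": itemgetter("question") | retriever,
})
```
A plain `lambda _: []` in place of the RunnableLambda wrapper should also be coerced automatically when it is used as a value in the mapping.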
### Suggestion:
_No response_ | Issue: How to set the Chain with valid/empty input | https://api.github.com/repos/langchain-ai/langchain/issues/14702/comments | 1 | 2023-12-14T08:45:49Z | 2024-03-21T16:06:32Z | https://github.com/langchain-ai/langchain/issues/14702 | 2,041,221,683 | 14,702 |
[
"hwchase17",
"langchain"
] | ### System Info
python 3.10
langchain version:0.0.350
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When using ElasticsearchStore.add_documents, the call fails with: "The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()".
langchain 0.0.317 works fine; after upgrading to langchain 0.0.350 the error appears.
### Expected behavior
fix it | The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() | https://api.github.com/repos/langchain-ai/langchain/issues/14701/comments | 2 | 2023-12-14T08:24:32Z | 2024-03-21T16:06:27Z | https://github.com/langchain-ai/langchain/issues/14701 | 2,041,187,377 | 14,701 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain: 0.0.348
langchain-google-genai: 0.0.3
python: 3.11
os: macOS11.6
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
langchain_google_genai.chat_models.ChatGoogleGenerativeAIError: Message of 'system' type not supported by Gemini. Please only provide it with Human or AI (user/assistant) messages.
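A workaround that is often mentioned (sketch only; it assumes the installed langchain-google-genai version exposes the convert_system_message_to_human flag) is to let the wrapper fold the system message into the first human message instead of sending it as a separate role:
```python
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro", convert_system_message_to_human=True)
```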
### Expected behavior
no error | Gemini not support SystemMessage and raise an error | https://api.github.com/repos/langchain-ai/langchain/issues/14700/comments | 8 | 2023-12-14T07:07:13Z | 2024-03-26T16:07:11Z | https://github.com/langchain-ai/langchain/issues/14700 | 2,041,050,685 | 14,700 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
What is this issue, and how can I resolve it?
```python
os.environ["AZURE_OPENAI_API_KEY"] = AZURE_OPENAI_API_KEY
os.environ["AZURE_OPENAI_ENDPOINT"] = AZURE_OPENAI_ENDPOINT
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
embedding = OpenAIEmbeddings()
COLLECTION_NAME = "network_team_documents"
CONNECTION_STRING = PGVector.connection_string_from_db_params(
driver=os.environ.get(DB_DRIVER, DB_DRIVER),
host=os.environ.get(DB_HOST, DB_HOST),
port=int(os.environ.get(DB_PORT, DB_PORT)),
database=os.environ.get(DB_DB, DB_DB),
user=os.environ.get(DB_USER, DB_USER),
password=os.environ.get(DB_PASS, DB_PASS),
)
store = PGVector(
collection_name=COLLECTION_NAME,
connection_string=CONNECTION_STRING,
embedding_function=embedding,
extend_existing=True,
)
gpt4 = AzureChatOpenAI(
azure_deployment="GPT4",
openai_api_version="2023-05-15",
)
retriever = store.as_retriever(search_type="similarity", search_kwargs={"k": 10})
qa_chain = RetrievalQA.from_chain_type(llm=gpt4,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
return qa_chain
```
```python
Traceback (most recent call last):
File "/opt/network_tool/chatbot/views.py", line 21, in chat
chat_object = create_session()
File "/opt/network_tool/chatbot/chatbot_functions.py", line 95, in create_session
store = PGVector(
File "/opt/klevernet_venv/lib/python3.10/site-packages/langchain_community/vectorstores/pgvector.py", line 199, in __init__
self.__post_init__()
File "/opt/klevernet_venv/lib/python3.10/site-packages/langchain_community/vectorstores/pgvector.py", line 207, in __post_init__
EmbeddingStore, CollectionStore = _get_embedding_collection_store()
File "/opt/klevernet_venv/lib/python3.10/site-packages/langchain_community/vectorstores/pgvector.py", line 66, in _get_embedding_collection_store
class CollectionStore(BaseModel):
File "/opt/klevernet_venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_api.py", line 195, in __init__
_as_declarative(reg, cls, dict_)
File "/opt/klevernet_venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_base.py", line 247, in _as_declarative
return _MapperConfig.setup_mapping(registry, cls, dict_, None, {})
File "/opt/klevernet_venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_base.py", line 328, in setup_mapping
return _ClassScanMapperConfig(
File "/opt/klevernet_venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_base.py", line 578, in __init__
self._setup_table(table)
File "/opt/klevernet_venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_base.py", line 1729, in _setup_table
table_cls(
File "", line 2, in __new__
File "/opt/klevernet_venv/lib/python3.10/site-packages/sqlalchemy/util/deprecations.py", line 281, in warned
return fn(*args, **kwargs) # type: ignore[no-any-return]
File "/opt/klevernet_venv/lib/python3.10/site-packages/sqlalchemy/sql/schema.py", line 436, in __new__
return cls._new(*args, **kw)
File "/opt/klevernet_venv/lib/python3.10/site-packages/sqlalchemy/sql/schema.py", line 468, in _new
raise exc.InvalidRequestError(
sqlalchemy.exc.InvalidRequestError: Table 'langchain_pg_collection' is already defined for this MetaData instance. Specify 'extend_existing=True' to redefine options and columns on an existing Table object.
```
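A note that may help while debugging (an observation, not a confirmed diagnosis): this error tends to show up when PGVector is constructed more than once in the same Python process, because each construction re-declares the langchain_pg_collection table on the shared SQLAlchemy metadata. A common mitigation is to build the store once and reuse it, for example (sketch; names are illustrative):
```python
_store = None

def get_store():
    global _store
    if _store is None:
        _store = PGVector(
            collection_name=COLLECTION_NAME,
            connection_string=CONNECTION_STRING,
            embedding_function=embedding,
        )
    return _store
```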
### Suggestion:
_No response_ | sqlalchemy.exc.InvalidRequestError: Table 'langchain_pg_collection' is already defined for this MetaData instance. | https://api.github.com/repos/langchain-ai/langchain/issues/14699/comments | 15 | 2023-12-14T06:51:40Z | 2023-12-27T19:12:07Z | https://github.com/langchain-ai/langchain/issues/14699 | 2,041,031,199 | 14,699 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain version = 0.0.344
Python version = 3.11.5
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is my code for connecting to a Snowflake database and running queries through the LangChain SQL agent. However, I always get AttributeError: items.
```python
from snowflake.sqlalchemy import URL

llm = AzureChatOpenAI(temperature=0.0, deployment_name="gpt-4-32k")

snowflake_url = URL(
    account='xxxxx',
    user='xxxxxx',
    password='xxxxxx',
    database='xxxxx',
    schema='xxxxxx',
    warehouse='xxxxxxx'
)

db = SQLDatabase.from_uri(snowflake_url, sample_rows_in_table_info=1, include_tables=['gc'])

# Create the SQLDatabaseToolkit
toolkit = SQLDatabaseToolkit(db=db, llm=llm)

agent_executor = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    verbose=True,
    prefix=MSSQL_AGENT_PREFIX,
    format_instructions=MSSQL_AGENT_FORMAT_INSTRUCTIONS,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    top_k=30,
    early_stopping_method="generate",
    handle_parsing_errors=True,
)

question = "List top 10 records from gc"
response = agent_executor.run(question)
```
```
Entering new AgentExecutor chain...
Action: sql_db_list_tables
Action Input: ""
Observation: gc
Thought: The 'gc' table is available in the database. I should now check the schema of the 'gc' table to understand its structure and the data it contains.
Action: sql_db_schema
Action Input: "gc"
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
File c:\Anaconda_3\Lib\site-packages\sqlalchemy\sql\base.py:1150, in ColumnCollection.__getattr__(self, key)
   1149 try:
-> 1150     return self._index[key]
   1151 except KeyError as err:

KeyError: 'items'

The above exception was the direct cause of the following exception:

AttributeError                            Traceback (most recent call last)
Cell In[66], line 33
     31 from langchain.globals import set_debug
     32 set_debug(False)
---> 33 response = agent_executor.run(question)

File ~\AppData\Roaming\Python\Python311\site-packages\langchain\chains\base.py:507, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    505 if len(args) != 1:
    506     raise ValueError("`run` supports only one positional argument.")
--> 507 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    508     _output_key
    509 ]
    511 if kwargs and not args:
    512     return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
...
    201 # https://cosmicpercolator.com/2016/01/13/exception-leaks-in-python-2-and-3/
    202 # as the __traceback__ object creates a cycle
    203 del exception, replace_context, from_, with_traceback

AttributeError: items
```
### Expected behavior
Should have executed the question. | SQLagent always giving me AttributeError: items for Snowflake tables | https://api.github.com/repos/langchain-ai/langchain/issues/14697/comments | 8 | 2023-12-14T06:29:48Z | 2024-03-21T16:06:17Z | https://github.com/langchain-ai/langchain/issues/14697 | 2,041,006,966 | 14,697 |
[
"hwchase17",
"langchain"
] | ### System Info
**Environment Details**
**Langchain version 0.0.336
Python 3.9.2rc1**
**Error encountered while executing the sample code mentioned in the "Semi_structured_multi_modal_RAG_LLaMA2.ipynb" notebook from the cookbook.**
```
File c:\Users\PLNAYAK\Documents\Local_LLM_Inference\llms\lib\site-packages\unstructured\file_utils\filetype.py:551, in add_metadata_with_filetype.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
    549 @functools.wraps(func)
...
--> 482 model = get_model(model_name, **kwargs)   # in unstructured_inference/inference/layout.py
    483 if isinstance(model, UnstructuredObjectDetectionModel):
    484     detection_model = model
```
**TypeError: get_model() got an unexpected keyword argument 'ocr_languages'**
I would appreciate any assistance in resolving this issue. Thank you.
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from typing import Any
from pydantic import BaseModel
from unstructured.partition.pdf import partition_pdf

# Get elements
raw_pdf_elements = partition_pdf(
    filename=path + "Employee-Stock-Option-Plans-ESOP-Best-Practices-2.pdf",
    # Unstructured first finds embedded image blocks
    infer_table_structure=True,
    # Post processing to aggregate text once we have the title
    max_characters=4000,
    new_after_n_chars=3800,
    combine_text_under_n_chars=2000,
    image_output_dir_path=path,
    languages=['eng'],
)
```
### Expected behavior
The notebook should run without any issues and produce the expected output as documented in the cookbook | TypeError: get_model() got an unexpected keyword argument 'ocr_languages' | https://api.github.com/repos/langchain-ai/langchain/issues/14696/comments | 16 | 2023-12-14T06:18:44Z | 2024-05-21T16:08:06Z | https://github.com/langchain-ai/langchain/issues/14696 | 2,040,991,652 | 14,696 |
[
"hwchase17",
"langchain"
] | ### System Info
### **SYSTEM INFO**
LangChain version : 0.0.345
Python version : 3.9.6
### **ISSUE**
I created this custom function, which throws an error if the vectorstore cannot retrieve any relevant documents:
```
def check_threshold(inp, vecs):
    query = inp['question']
    threshold = inp['threshold']
    d = [doc for doc, score in vecs.similarity_search_with_relevance_scores(query) if score >= threshold]
    if len(d) < 1:
        raise Exception("Not found!")
    return "\n\n".join([x.page_content for x in d])
```
I want to use another chain if the main chain fails, by calling `with_fallbacks` on the main chain:
```
main_chain = (
    {
        "context": lambda x: check_threshold(x, vecs),
        "question": lambda x: x['question'],
    }
    | prompt | llm | StrOutputParser()
).with_fallbacks([fallback_chain])
```
In the above code, the fallback_chain never gets triggered.
**PS : The above code is just an example, the original code uses more complicated calculations with many exceptions raise in several custom functions. Therefore, It is not feasible to use built-in Python try-except error handler**
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```
def check_threshold(inp, vecs):
    query = inp['question']
    threshold = inp['threshold']
    d = [doc for doc, score in vecs.similarity_search_with_relevance_scores(query) if score >= threshold]
    if len(d) < 1:
        raise Exception("Not found!")
    return "\n\n".join([x.page_content for x in d])

main_chain = (
    {
        "context": lambda x: check_threshold(x, vecs),
        "question": lambda x: x['question'],
    }
    | prompt | llm | StrOutputParser()
).with_fallbacks([fallback_chain])

main_chain.invoke({"question": "Hello, good morning"})
```
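For what it's worth, a pattern that is sometimes suggested (sketch only; whether it fits depends on where the exception should be caught) is to wrap the failing step in a RunnableLambda and attach the fallback directly to that step instead of (or in addition to) the whole chain:
```python
from langchain_core.runnables import RunnableLambda

context_step = RunnableLambda(lambda x: check_threshold(x, vecs)).with_fallbacks(
    [RunnableLambda(lambda x: "")]  # value to use for "context" when retrieval fails
)

main_chain = (
    {"context": context_step, "question": lambda x: x["question"]}
    | prompt | llm | StrOutputParser()
)
```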
### Expected behavior
The fallback_chain should get triggered whenever the main_chain raises an exception. | PYTHON ISSUE : Fallback does not catch exception in custom function using LCEL | https://api.github.com/repos/langchain-ai/langchain/issues/14695/comments | 1 | 2023-12-14T04:54:06Z | 2024-03-21T16:06:12Z | https://github.com/langchain-ai/langchain/issues/14695 | 2,040,901,181 | 14,695 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Recently, LangChain released an independent package for Google Gemini to expose the Gemini LLM capabilities.
This is also the first independent package released by LangChain, which is a very big step forward.
But I noticed that the name of this package is langchain-google-genai, which may not be very systematic.
Perhaps we can use Python's namespace-package feature to manage all langchain-related packages.
Documentation about namespace package:
- https://packaging.python.org/guides/packaging-namespace-packages/
- https://www.python.org/dev/peps/pep-0420/
### Suggestion:
Use the namespace capability to manage and publish all independent packages of langchain. The specific directory structure is as follows:
docs: https://packaging.python.org/en/latest/guides/packaging-namespace-packages/
```
pyproject.toml  # AND/OR setup.py, setup.cfg
src/
    langchain/  # namespace package
        # No __init__.py here.
        google_genai/
            # Regular import packages have an __init__.py.
            __init__.py
            module.py
```
and then you can use it like:
```python
import langchain.google_genai
from langchain import google_genai
# ... code
``` | Issue: Use python namespace capabilities to manage standalone packages | https://api.github.com/repos/langchain-ai/langchain/issues/14694/comments | 1 | 2023-12-14T03:49:26Z | 2024-03-21T16:06:07Z | https://github.com/langchain-ai/langchain/issues/14694 | 2,040,852,615 | 14,694 |
[
"hwchase17",
"langchain"
] | ### Feature request
With Gemini Pro going GA today (Dec. 13th), when can users of LangChain expect an update that supports the new LLM?
### Motivation
This will allow users of LangChain to use the latest LLM that Google is providing along with their safety settings.
### Your contribution
I can try and help. Happy to contribute where needed | Google Gemini | https://api.github.com/repos/langchain-ai/langchain/issues/14671/comments | 9 | 2023-12-13T19:02:58Z | 2024-02-07T23:45:17Z | https://github.com/langchain-ai/langchain/issues/14671 | 2,040,302,557 | 14,671 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain: 0.0.349
Langchain-community: v0.0.1
Langchain-core: 0.0.13
Python: 3.12
Platform: Mac OS
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have the following BaseModel and BaseTool classes created.
```python
class TaskPost(BaseModel):
    """
    TaskPost
    """  # noqa: E501
    due_date: Optional[datetime] = Field(default=None, description="ISO 8601 Due date on the task. REQUIRED for scheduled tasks", alias="dueDate")
    duration: Optional[TaskDuration] = None
    status: Optional[StrictStr] = Field(default=None, description="Defaults to workspace default status.")
    auto_scheduled: Optional[AutoScheduledInfo] = Field(default=None, alias="autoScheduled")
    name: Annotated[str, Field(min_length=1, strict=True)] = Field(description="Name / title of the task")
    project_id: Optional[StrictStr] = Field(default=None, alias="projectId")
    workspace_id: StrictStr = Field(alias="workspaceId")
    description: Optional[StrictStr] = Field(default=None, description="Input as GitHub Flavored Markdown")
    priority: Optional[StrictStr] = 'MEDIUM'
    labels: Optional[List[StrictStr]] = None
    assignee_id: Optional[StrictStr] = Field(default=None, description="The user id the task should be assigned to", alias="assigneeId")
    __properties: ClassVar[List[str]] = ["dueDate", "duration", "status", "autoScheduled", "name", "projectId", "workspaceId", "description", "priority", "labels", "assigneeId"]

    model_config = {
        "populate_by_name": True,
        "validate_assignment": True
    }


class CreateTaskTool(BaseTool):
    name = "create_task"
    description = (
        """Use this to create a new task from all available args that you have. Always make sure date and time inputs are in ISO format""")
    args_schema: Type[BaseModel] = openapi_client.TaskPost
    verbose = True
```
The agent will use the alias names instead of the field names, i.e. (workspaceId, dueDate) instead of (workspace_id, due_date):
```linux
[tool/start] [1:chain:AgentExecutor > 7:tool:create_task] Entering Tool run with input:
"{'name': 'Update code', 'workspaceId': 'xxxxxyyyyyzzzzz', 'dueDate': '2023-12-14T00:00:00'}"
```
When the agent calls the `_parse_input` function from [langchain_core/tools.py and reaches line 247](https://github.com/langchain-ai/langchain/blob/14bfc5f9f477fcffff3f9aa564a864c5d5cd5300/libs/core/langchain_core/tools.py#L247), the parsed values are filtered out, because the parsed result uses the field names while the tool_input uses the alias names, so they do not match.
```
CreateTaskTool -> _parse_input -> parse_obj -> result: due_date=datetime.datetime(2023, 12, 14, 0, 0) duration=None status=None auto_scheduled=None name='Update code' project_id=None workspace_id='xxxxxyyyyyzzzzz' description=None priority='MEDIUM' labels=None assignee_id=None
CreateTaskTool -> _parse_input -> parse_obj -> result keys: dict_keys(['due_date', 'duration', 'status', 'auto_scheduled', 'name', 'project_id', 'workspace_id', 'description', 'priority', 'labels', 'assignee_id'])
CreateTaskTool -> _parse_input -> parse_obj -> tool_input keys: dict_keys(['name', 'workspaceId', 'dueDate'])
CreateTaskTool -> _parse_input -> parse_obj -> finalResults: {'name': 'Update code'}
```
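A simplified view of the filtering step described above (my paraphrase of the library code, not a verbatim copy):
```python
# result carries the declared field names (workspace_id, due_date, ...),
# while tool_input carries the alias names (workspaceId, dueDate, ...),
# so the comprehension drops every field that was supplied via an alias:
result = self.args_schema.parse_obj(tool_input)
filtered = {k: v for k, v in result.dict().items() if k in tool_input}
```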
### Expected behavior
```
CreateTaskTool -> _parse_input -> parse_obj -> result keys: dict_keys(['due_date', 'duration', 'status', 'auto_scheduled', 'name', 'project_id', 'workspace_id', 'description', 'priority', 'labels', 'assignee_id'])
CreateTaskTool -> _parse_input -> parse_obj -> tool_input keys: dict_keys(['name', 'workspaceId', 'dueDate'])
CreateTaskTool -> _parse_input -> finalResults: {'name': 'Update code','workspaceId': 'xxxxxyyyyyzzzzz', 'dueDate': '2023-12-14T00:00:00'}
or
CreateTaskTool -> _parse_input -> finalResults: {'name': 'Update code','workspace_Id': 'xxxxxyyyyyzzzzz', 'due_date': '2023-12-14T00:00:00'}
``` | LangChain Agent bug when parsing tool inputs that use alias field names | https://api.github.com/repos/langchain-ai/langchain/issues/14663/comments | 1 | 2023-12-13T16:51:16Z | 2024-03-20T16:06:48Z | https://github.com/langchain-ai/langchain/issues/14663 | 2,040,113,293 | 14,663 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain version 0.0.348
Python version 3.10
Operating System MacOS Monterey version 12.6
SQLAlchemy version 2.0.23
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# Connect to db
# Create an SQLAlchemy engine
engine = create_engine("mysql+mysqlconnector://user:pass@host/database")

# Test the database connection
try:
    # Connect and execute a simple query
    with engine.connect() as connection:
        query = text("SELECT 1")
        result = connection.execute(query)
        for row in result:
            print("Connection successful, got row:", row)
except Exception as e:
    print("Error connecting to database:", e)

# Create an instance of SQLDatabase
db = SQLDatabase(engine)

# using Llama2
llm = LlamaCpp(
    model_path="/path_to/llama-2-7b.Q4_K_M.gguf",
    verbose=True,
    n_ctx=2048)

# using Default and Suffix prompt template
PROMPT = PromptTemplate(
    input_variables=["input", "table_info", "dialect", "top_k"],
    template=_DEFAULT_TEMPLATE + PROMPT_SUFFIX,
)

db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, return_sql=True,
                                     prompt=PROMPT)

langchain.debug = True
response = db_chain.run(formatted_prompt)
print(response)
```
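Not a definitive fix, but the traceback below suggests the rendered prompt (question plus table_info) exceeds the 2048-token context the model was loaded with. A sketch of what could be tried first (the values are just examples):
```python
llm = LlamaCpp(
    model_path="/path_to/llama-2-7b.Q4_K_M.gguf",
    n_ctx=4096,      # give the prompt more room
    max_tokens=512,
    verbose=True,
)

# and/or keep the schema text that is stuffed into the prompt small
db = SQLDatabase(engine, sample_rows_in_table_info=1)
```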
### Expected behavior
Expected Behavior: Expected to generate SQL queries without errors.
Actual Behavior: Received TypeError: wrong type and context window exceedance errors.
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_cpp/llama.py", line 1325, in _create_completion
f"Requested tokens ({len(prompt_tokens)}) exceed context window of {llama_cpp.llama_n_ctx(self._ctx)}"
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_cpp/llama_cpp.py", line 612, in llama_n_ctx
return _lib.llama_n_ctx(ctx)
ctypes.ArgumentError: argument 1: <class 'TypeError'>: wrong type``` | SQLDatabaseChain with LlamaCpp Llama2 "Chain Run Errored With Error: ArgumentError: <class 'TypeError'>: wrong type" | https://api.github.com/repos/langchain-ai/langchain/issues/14660/comments | 2 | 2023-12-13T16:38:34Z | 2024-03-20T16:06:44Z | https://github.com/langchain-ai/langchain/issues/14660 | 2,040,091,551 | 14,660 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am attempting to call an instance of ConversationalRetrievalChain with a list of dictionary objects that I've pre-processed with a similarity search and a Cohere reranker. I've created an extension of BaseRetriever in order to pass my list of dictionary objects to the "retriever=" parameter. However, when my extended class is instantiated, I get an error saying the seai_retriever object has no field "documents". My code is below. What am I doing wrong?
```
from langchain.schema.retriever import BaseRetriever
from langchain.schema.document import Document
from langchain.callbacks.manager import CallbackManagerForRetrieverRun
from typing import List

class seai_retriever(BaseRetriever):
    def __init__(self, documents):
        self.documents = documents

    def retrieve(self, query, top_n=10):
        retrieved_docs = [doc for doc in self.documents if query.lower() in doc['content'].lower()]
        retrieved_docs = sorted(retrieved_docs, key=lambda x: x['content'].find(query), reverse=True)[:top_n]
        return retrieved_docs

    def _get_relevant_documents(self, query: str, *, run_manager: CallbackManagerForRetrieverRun) -> List[Document]:
        retrieved_docs = [doc for doc in self.documents if query.lower() in doc['content'].lower()]
        retrieved_docs = sorted(retrieved_docs, key=lambda x: x['content'].find(query), reverse=True)[:top_n]
        return retrieved_docs
```
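In case it is useful: BaseRetriever is a pydantic model, so attributes generally have to be declared as fields rather than assigned inside a custom __init__. A possible rewrite (sketch only; the field type and the Document construction are assumptions based on the snippet above):
```python
from typing import List

class SeaiRetriever(BaseRetriever):
    documents: List[dict] = []  # declared as a pydantic field instead of set in __init__

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        hits = [d for d in self.documents if query.lower() in d["content"].lower()]
        hits = sorted(hits, key=lambda x: x["content"].find(query), reverse=True)[:10]
        return [Document(page_content=d["content"], metadata=d) for d in hits]

retriever = SeaiRetriever(documents=my_documents)  # my_documents is your pre-processed list
```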
### Suggestion:
_No response_ | Getting "object has no field "documents" error with extended BaseRetriever classs | https://api.github.com/repos/langchain-ai/langchain/issues/14659/comments | 5 | 2023-12-13T16:29:16Z | 2024-06-01T00:07:37Z | https://github.com/langchain-ai/langchain/issues/14659 | 2,040,074,533 | 14,659 |
[
"hwchase17",
"langchain"
] | ### System Info
Mac Studio
M1 Max 32GB
macOS 14.1.2
Using rye
Python 3.11.6
langchain==0.0.350
langchain-community==0.0.2
langchain-core==0.1.0
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**Error**
```
File "/venv/lib/python3.11/site-packages/langchain_community/document_loaders/xml.py", line 41, in _get_elements
from unstructured.partition.xml import partition_xml
ImportError: cannot import name 'partition_xml' from partially initialized module 'unstructured.partition.xml' (most likely due to a circular import) (/venv/lib/python3.11/site-packages/unstructured/partition/xml.py)
```
I tried to load an XML document following this link (from the LangChain documentation):
https://python.langchain.com/docs/integrations/document_loaders/xml
```
from langchain.document_loaders import UnstructuredXMLLoader
loader = UnstructuredXMLLoader(
"aaa.xml",
)
docs = loader.load()
docs[0]
```
### Expected behavior
no circular import | UnstructuredXMLLoader import error (circular import) | https://api.github.com/repos/langchain-ai/langchain/issues/14658/comments | 1 | 2023-12-13T16:02:17Z | 2024-03-20T16:06:38Z | https://github.com/langchain-ai/langchain/issues/14658 | 2,040,021,283 | 14,658 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi, I'm trying to use StreamingStdOutCallbackHandler with a conversation chain, but it prints out the memory after the response. Is there a way to avoid printing the memory without using an agent?
### Suggestion:
_No response_ | Issue: streaming issues | https://api.github.com/repos/langchain-ai/langchain/issues/14656/comments | 2 | 2023-12-13T15:47:29Z | 2024-01-10T03:38:15Z | https://github.com/langchain-ai/langchain/issues/14656 | 2,039,990,933 | 14,656 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm currently working on a project where I need to fetch all the sub-URLs from a website using Langchain.
How can we achieve this? Below is my code:
```python
loader = UnstructuredURLLoader(urls=urls)
urlDocument = loader.load()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=50)
texts = text_splitter.split_documents(documents=urlDocument)
```
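UnstructuredURLLoader only fetches the URLs it is explicitly given. For discovering sub-URLs, one option (a sketch; the parameters are illustrative and depend on the site being crawled) is RecursiveUrlLoader, which follows links under a root URL up to a maximum depth:
```python
from bs4 import BeautifulSoup
from langchain.document_loaders import RecursiveUrlLoader

loader = RecursiveUrlLoader(
    url="https://example.com/docs/",
    max_depth=2,
    extractor=lambda html: BeautifulSoup(html, "html.parser").get_text(),
)
docs = loader.load()
sub_urls = [d.metadata["source"] for d in docs]  # the crawled sub-URLs
```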
### Suggestion:
_No response_ | Issue: I'm currently working on a project where I need to fetch all the sub-URLs from a website using Langchain. | https://api.github.com/repos/langchain-ai/langchain/issues/14651/comments | 3 | 2023-12-13T13:27:30Z | 2024-03-27T16:08:12Z | https://github.com/langchain-ai/langchain/issues/14651 | 2,039,714,207 | 14,651 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain Version = 0.0.311
Python Version = 3.9
Tried it on my local system as well on Company's hosted Jupyter Hub as well
### Who can help?
@eyurtsev @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.text_splitter import TokenTextSplitter
token_splitter_model_name = "gpt-3.5-turbo"
SPLIT_CHUNK_SIZE = 1024
CHUNK_OVERLAP = 256
text_splitter = TokenTextSplitter.from_tiktoken_encoder(model_name=token_splitter_model_name, chunk_size=SPLIT_CHUNK_SIZE , chunk_overlap = CHUNK_OVERLAP)
blog_content= ' your text here'
blog_splits=text_splitter.split_text(blog_content)
```
### Expected behavior
The way this token text splitter works isn't how it is intended to work.
For example: with chunk_size = 1024, overlap = 256 and an input text of 991 tokens, it made two chunks:
first = token[0 : 991]
second = token[768 : 991]
But logically it should work this way:
If the input text has more than 1024 tokens, there should be two (or more) chunks:
first = token[0 : 1024]
second = token[768 : ]
and so on.
If the input text has at most 1024 tokens, there should be only one chunk:
first = token[0 : ]
More details on this issue raised previously :- https://github.com/langchain-ai/langchain/issues/5897 | Bug in Text splitting while using langchain.text_splitter.split_text_on_tokens¶ | https://api.github.com/repos/langchain-ai/langchain/issues/14649/comments | 3 | 2023-12-13T10:31:27Z | 2024-03-25T16:07:21Z | https://github.com/langchain-ai/langchain/issues/14649 | 2,039,411,662 | 14,649 |
[
"hwchase17",
"langchain"
] | ### System Info
```python
from langchain.document_transformers import DoctranTextTranslator
from langchain.schema import Document

documents = [Document(page_content=sample_text)]
qa_translator = DoctranTextTranslator(language="spanish")
translated_document = await qa_translator.atransform_documents(documents)
```
```
TypeError                                 Traceback (most recent call last)
<ipython-input-18-c526f9c55393> in <cell line: 8>()
      6     openai_api_model="gpt-3.5-turbo", language="chinese")
      7
----> 8 translated_document = await qa_translator.atransform_documents(documents)
      9
     10

/usr/local/lib/python3.10/dist-packages/langchain_community/document_transformers/doctran_text_translate.py in atransform_documents(self, documents, **kwargs)
     61         ]
     62         for i, doc in enumerate(doctran_docs):
---> 63             doctran_docs[i] = await doc.translate(language=self.language).execute()
     64         return [
     65             Document(page_content=doc.transformed_content, metadata=doc.metadata)

TypeError: object Document can't be used in 'await' expression
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
test
### Expected behavior
test | TypeError: object Document can't be used in 'await' expression | https://api.github.com/repos/langchain-ai/langchain/issues/14645/comments | 1 | 2023-12-13T07:52:41Z | 2024-03-20T16:06:28Z | https://github.com/langchain-ai/langchain/issues/14645 | 2,039,150,807 | 14,645 |
[
"hwchase17",
"langchain"
] | I have two scripts.
The 1st script loads PDF documents, uses ParentDocumentRetriever.add_documents(), and saves the vectorstore to local disk.
The 2nd script loads the vectorstore and calls ParentDocumentRetriever.get_relevant_documents().
The problem is that ParentDocumentRetriever.get_relevant_documents() returns an empty result.
Any idea?
What is the InMemoryStore for?
Here are the two scripts:
### the 1st code: LoadDocumentsAndSaveVectorstore.py
```
loader = DirectoryLoader('data/',
glob='**/*.pdf',
loader_cls=PyPDFLoader)
docs = loader.load()
store = InMemoryStore()
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=1000)
child_splitter = RecursiveCharacterTextSplitter(chunk_size=128, chunk_overlap=64)
texts = [""]
vectorstore = FAISS.from_texts(texts, embedding_function)
retriever = ParentDocumentRetriever(
vectorstore=vectorstore,
docstore=store,
child_splitter=child_splitter,
parent_splitter=parent_splitter,
)
retriever.add_documents(docs, ids=None)
vectorstore.save_local("ppstore")
print("save data to ppstore")
```
The 1st script works fine.
The 2nd script loads the vectorstore and queries it.
The strange thing is that **retriever1.get_relevant_documents(query)** returns an empty result.
### the 2nd code: load vectorstore and use retriever to get relevant_documents
```
print("load ppstore")
store = InMemoryStore()
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=1000)
child_splitter = RecursiveCharacterTextSplitter(chunk_size=128, chunk_overlap=64)
db = FAISS.load_local("ppstore", embedding_function)
retriever1 = ParentDocumentRetriever(
vectorstore=db,
docstore=store,
child_splitter=child_splitter,
parent_splitter=parent_splitter,
)
query = "Please describe Sardina dataset"
print("query:", query)
sub_docs = db.similarity_search(query)
print("=== check small chunk ===")
print(sub_docs[0].page_content)
print(len(sub_docs[0].page_content)) ## this respond OK, the len is a little bit smaller than 128
retrieved_docs = retriever1.get_relevant_documents(query) ## I got empty result
print("=== check larger chunk ===")
print(retrieved_docs[0].page_content)
print(len(retrieved_docs[0].page_content))
```
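A possible explanation (not a confirmed diagnosis): ParentDocumentRetriever puts the small child chunks into the vectorstore but keeps the full parent documents in the docstore. FAISS.save_local only persists the vectorstore, so the fresh InMemoryStore created in the 2nd script is empty and the retriever cannot map the matching child chunks back to their parents, which yields an empty result. One way around this (sketch; it assumes LocalFileStore and create_kv_docstore are available in your version) is to use a docstore that persists to disk in both scripts:
```python
from langchain.storage import LocalFileStore
from langchain.storage._lc_store import create_kv_docstore

fs = LocalFileStore("./parent_docstore")
store = create_kv_docstore(fs)  # parent documents persist on disk between runs

retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
)
```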
| ParentDocumentRetriever.get_relevant_documents() got empty result | https://api.github.com/repos/langchain-ai/langchain/issues/14643/comments | 5 | 2023-12-13T07:31:59Z | 2024-01-11T20:34:16Z | https://github.com/langchain-ai/langchain/issues/14643 | 2,039,122,156 | 14,643 |
[
"hwchase17",
"langchain"
] | ### System Info
# Dependency Versions
langchain==0.0.349
langchain-community==0.0.1
langchain-core==0.0.13
openai==1.3.8
# Python Version
Python 3.11.4
# Redis Stack Version
redis-stack-server 6.2.0
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.globals import set_llm_cache
from langchain.cache import RedisSemanticCache
from langchain.embeddings import OpenAIEmbeddings
from langchain_community.chat_models import ChatOpenAI
from redis import Redis
import time

llm = ChatOpenAI(openai_api_key='<OPENAI_API_KEY>', model_name='gpt-3.5-turbo')

cache = RedisSemanticCache(redis_url="redis://localhost:6379",
                           embedding=OpenAIEmbeddings(openai_api_key='<OPENAI_API_KEY>'),
                           score_threshold=0.95)
set_llm_cache(cache)

start = time.time()
response = llm.predict("""Tell me about USA in only two sentences""")
print(time.time() - start)
print(response)

start = time.time()
response = llm.predict("""Tell me about INDIA in only two sentences""")
print(time.time() - start)
print(response)

start = time.time()
response = llm.predict("""What is LLMs in the context of GEN AI ?""")
print(time.time() - start)
print(response)
```
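One thing that may be worth checking (I am not certain the semantics are identical in every version): in RedisSemanticCache the score_threshold acts as a vector-distance cut-off rather than a similarity, so a large value such as 0.95 is very permissive and almost every prompt becomes a cache hit. A stricter configuration would look something like this (the value is illustrative):
```python
cache = RedisSemanticCache(
    redis_url="redis://localhost:6379",
    embedding=OpenAIEmbeddings(openai_api_key="<OPENAI_API_KEY>"),
    score_threshold=0.05,  # smaller = closer match required before the cache is used
)
```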
### Expected behavior
As the score_threshold is set to 0.95, I expect the three prompts to give three different responses, but we are getting one response for all three prompts.
Output from running the script:
```
4.252941131591797
The United States of America is a federal republic consisting of 50 states, a federal district, five major self-governing territories, and various possessions. It is a diverse and influential country known for its cultural and economic power on the global stage.
0.3903520107269287
The United States of America is a federal republic consisting of 50 states, a federal district, five major self-governing territories, and various possessions. It is a diverse and influential country known for its cultural and economic power on the global stage.
0.6625611782073975
The United States of America is a federal republic consisting of 50 states, a federal district, five major self-governing territories, and various possessions. It is a diverse and influential country known for its cultural and economic power on the global stage.
```
| (RedisSemanticCache + ChatOpenAI + OpenAIEmbeddings) - Not working as expected - Wanted to understand, if I am doing something wrong here. | https://api.github.com/repos/langchain-ai/langchain/issues/14640/comments | 2 | 2023-12-13T06:34:44Z | 2024-04-25T16:12:16Z | https://github.com/langchain-ai/langchain/issues/14640 | 2,039,051,024 | 14,640 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Issue 1:
I am working on the summarization using the stuff and map-reduce of the Langchain. I have integrated it with the AWS Bedrock's anthropic llm which has a token limit of 100000. It is working fine but when the pdf with 40000 tokens is passed, the bedrock is throwing an error:
i) with VPN connected:
An error occurred: Error raised by bedrock service: Could not connect to the endpoint URL: "https://bedrock-runtime.us-east-1.amazonaws.com/model/anthropic.claude-v2/invoke".
ii) without VPN connected:
An error occurred: Error raised by bedrock service: An error occurred (ExpiredTokenException) when calling the InvokeModel operation: The security token included in the request is expired
Any reason why this is happening? Thanks in advance!
Issue 2:
Map-reduce takes a long time to produce the summary for 40,000+ token documents when the Anthropic threshold is reduced to 40,000. Sometimes it runs for a long time and then errors out. Any help is appreciated. Thanks in advance!
### Suggestion:
_No response_ | Issue: bedrock is throwing an error for the langchain stuff method using the anthropic model for the summarization. | https://api.github.com/repos/langchain-ai/langchain/issues/14639/comments | 1 | 2023-12-13T05:48:17Z | 2023-12-15T07:47:12Z | https://github.com/langchain-ai/langchain/issues/14639 | 2,039,002,274 | 14,639 |
[
"hwchase17",
"langchain"
] | ### System Info
Hello, after forking and cloning the repo on my machine, I tried to open it using docker and specifically in VS Code with the option to "Reopen in Container". While building, the final command of [dev.Dockerfile](https://github.com/langchain-ai/langchain/blob/ca7da8f7ef9bc7a613ff07279c4603cad5fd175a/libs/langchain/dev.Dockerfile#L44) resulted in the following error:
```logs
#0 1.241 Directory ../core does not exist
```
After investigating, I found out that the issue lies in [pyproject.toml](https://github.com/langchain-ai/langchain/blob/ca7da8f7ef9bc7a613ff07279c4603cad5fd175a/libs/langchain/pyproject.toml) which is using relative paths like `../core` and `../community` in some occasions. Additionally, even after replacing `../` with `libs/` (which I am not sure if it breaks something else), the actual `core` and `community` directories are never copied over in [dev.Dockerfile](https://github.com/langchain-ai/langchain/blob/ca7da8f7ef9bc7a613ff07279c4603cad5fd175a/libs/langchain/dev.Dockerfile). These should also be copied in the created docker container, similarly to [line 41](https://github.com/langchain-ai/langchain/blob/ca7da8f7ef9bc7a613ff07279c4603cad5fd175a/libs/langchain/dev.Dockerfile#L41).
After making these two changes, the container was successfully built. I'll check out whether the change of paths in pyproject.toml is affecting any other files, and if not I will create a PR for this.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. Fork and clone the repo on your machine
2. Open it with VS Code (with Dev Containers extension installed)
3. Run the VS Code command: "Dev Containers: Rebuild Container"
### Expected behavior
Build the development docker container without errors | Dockerfile issues when trying to build the repo using .devcontainer | https://api.github.com/repos/langchain-ai/langchain/issues/14631/comments | 3 | 2023-12-12T23:17:45Z | 2023-12-28T16:25:05Z | https://github.com/langchain-ai/langchain/issues/14631 | 2,038,690,605 | 14,631 |
[
"hwchase17",
"langchain"
] | ### Feature request
Azure OpenAI now previews the DALLE-3 model. Today, DALLEAPIWrapper only supports the openai API.
### Motivation
My customers are using Azure OpenAI and would like to use DALL-E-3 in their solutions.
### Your contribution
PR may not be possible but I'd like to help anyway I can. | DALLEAPIWrapper to support Azure OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/14625/comments | 2 | 2023-12-12T22:22:43Z | 2024-03-20T16:06:23Z | https://github.com/langchain-ai/langchain/issues/14625 | 2,038,636,476 | 14,625 |
[
"hwchase17",
"langchain"
] | ### Feature request
Add first-class support for Vertex AI Endpoints in Langchain. This would involve providing a similar interface to the existing SageMakerEndpoint class, allowing users to easily connect to and interact with Vertex AI Endpoints.
### Motivation
Although VertexAIModelGarden already exist, there may be instances where users require custom models with unique input and output formats. To address this need, a more versatile class could be developed, upon which VertexAIModelGarden could be built. This would allow for seamless integration of custom models without compromising the functionality of the existing Model Garden class.
### Your contribution
The implementation taking inspiration from SageMakerEndpoint if pertinent. | Add support for Vertex AI Endpoint | https://api.github.com/repos/langchain-ai/langchain/issues/14622/comments | 1 | 2023-12-12T20:35:10Z | 2024-03-19T16:06:32Z | https://github.com/langchain-ai/langchain/issues/14622 | 2,038,512,133 | 14,622 |
[
"hwchase17",
"langchain"
] | ### Feature request
Hi, it seems the only DocStore available are InMemory or Google. I'd like to submit a feature request for an S3DocStore.
### Motivation
Many people have raised issues related to limited DocStore options. | S3DocStore | https://api.github.com/repos/langchain-ai/langchain/issues/14616/comments | 3 | 2023-12-12T18:55:18Z | 2024-04-10T16:11:48Z | https://github.com/langchain-ai/langchain/issues/14616 | 2,038,377,269 | 14,616 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain Version: 0.0.348
Python: 3.9.16
The docs suggest using a proxy entry as follows, but it does not work:
```python
from slacktoolkit import SlackToolkit

# Proxy settings
proxies = {
    'http': 'http://proxy.example.com:8080',
    'https': 'https://proxy.example.com:8080'
}

# Initialize SlackToolkit with proxy
slack_toolkit = SlackToolkit(proxies=proxies)
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [x] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# Proxy settings
proxies = {
    'http': 'your_proxy',
    'https': 'your_proxy'
}

# Initialize SlackToolkit with proxy
toolkit = SlackToolkit(proxies=proxies)
tools = toolkit.get_tools()

llm = OpenAI(temperature=0)
agent = initialize_agent(
    tools=toolkit.get_tools(),
    llm=llm,
    verbose=True,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
)

agent.run("Send a greeting to my coworkers in the #slack1 channel. Your name is chatbot. Set the sender name as chatbot.")
```
### Expected behavior
slack message sent in #slack1 channel | SlackToolkit() does not support proxy configuration | https://api.github.com/repos/langchain-ai/langchain/issues/14608/comments | 2 | 2023-12-12T17:16:53Z | 2023-12-15T04:00:14Z | https://github.com/langchain-ai/langchain/issues/14608 | 2,038,224,081 | 14,608 |
[
"hwchase17",
"langchain"
] | ### System Info
Not relevant
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use the QA With Sources chain with the default prompt. If the chain type is `stuff` or `map_reduce`, the default prompts used are [this](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/qa_with_sources/stuff_prompt.py) and [this](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/qa_with_sources/map_reduce_prompt.py) respectively. These files are massive and easily add 1,000+ tokens to every request. With models like PaLM 2, there are barely any tokens left for the actual question.
### Expected behavior
These prompts should be much shorter, even if only as a sample input. | QaWithSources default prompt is massive | https://api.github.com/repos/langchain-ai/langchain/issues/14596/comments | 4 | 2023-12-12T13:40:37Z | 2024-03-20T16:06:19Z | https://github.com/langchain-ai/langchain/issues/14596 | 2,037,780,695 | 14,596 |
[
"hwchase17",
"langchain"
] | ### Feature request
As of the current implementation of the QA with sources chain, if `return_source_documents` is set to `True`, [all the sources](https://github.com/langchain-ai/langchain/blob/76905aa043e3e604b5b34faf5e91d0aedb5ed6dd/libs/langchain/langchain/chains/qa_with_sources/base.py#L165) that are retrieved from the vector DB are returned. The `sources` field returns a list of file names that were used by the LLM. I propose we could do something like this
1. Assign each `Document` a unique UUID as the `source` before passing it to the LLM.
2. Once the LLM returns the relevant sources, we can backmap this to the actual `Document`s that were used by the LLM as opposed to getting just the filename.
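A rough sketch of steps 1–2 above (purely illustrative; the variable names and the exact output keys depend on the chain being used and are assumptions here):
```python
import uuid

id_to_doc = {}
for doc in retrieved_docs:
    doc_id = str(uuid.uuid4())
    doc.metadata["source"] = doc_id  # the LLM now cites this id instead of a filename
    id_to_doc[doc_id] = doc

result = qa_chain({"input_documents": retrieved_docs, "question": question})

# Back-map the cited ids to the actual Document objects that were used
cited_docs = [id_to_doc[s] for s in result["sources"].replace(",", " ").split() if s in id_to_doc]
```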
### Motivation
This information seems vastly more useful than the entire response from the vector DB. For our current use cases, we've ended up overriding these functions to add this functionality.
### Your contribution
I can raise a PR adding this if this is something that you'd find useful as well.
An additional issue to report, the `_split_sources` [function](https://github.com/langchain-ai/langchain/blob/76905aa043e3e604b5b34faf5e91d0aedb5ed6dd/libs/langchain/langchain/chains/qa_with_sources/base.py#L124) splits at the first instance of `SOURCE` which seems to be a bit problematic. I can fix this to split at the last occurence. | Sources returned in QaWithSources can be optimised | https://api.github.com/repos/langchain-ai/langchain/issues/14595/comments | 1 | 2023-12-12T13:30:30Z | 2024-03-19T16:06:24Z | https://github.com/langchain-ai/langchain/issues/14595 | 2,037,760,278 | 14,595 |
[
"hwchase17",
"langchain"
] | ### Feature request
How can I use a custom tracing tool like OpenTelemetry or Tempo?
### Motivation
If I don't want to use LangSmith
### Your contribution
N/A | How to use custom tracing tool like opentelemetry or tempo | https://api.github.com/repos/langchain-ai/langchain/issues/14594/comments | 1 | 2023-12-12T12:34:07Z | 2024-03-19T16:06:17Z | https://github.com/langchain-ai/langchain/issues/14594 | 2,037,660,794 | 14,594 |
[
"hwchase17",
"langchain"
] | ### System Info
OS: Apple M1 Max
______________________
Name: langchain
Version: 0.0.349
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Requires: aiohttp, async-timeout, dataclasses-json, jsonpatch, langchain-community, langchain-core, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by:
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
I have followed the instructions provided here : https://python.langchain.com/docs/integrations/llms/llamacpp.
However, I am not able to run inference on it correctly.
Model path : https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF
```
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import LLMChain, QAGenerationChain
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
template = """Question: {question}
Answer: Let's work this out in a step by step way to be sure we have the right answer."""
prompt = PromptTemplate(template=template, input_variables=["question"])
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
n_gpu_layers = 1 # Change this value based on your model and your GPU VRAM pool.
n_batch = 512 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
llm = LlamaCpp(
model_path="../models/deepcoder-gguf/deepseek-coder-6.7b-instruct.Q2_K.gguf",
n_gpu_layers=n_gpu_layers,
max_tokens=2000,
top_p=1,
n_batch=n_batch,
callback_manager=callback_manager,
f16_kv=True,
verbose=True, # Verbose is required to pass to the callback manager
)
llm(
"Question: Write python program to add two numbers ? Answer:"
)
```
Result: ` < """"""""""""""""""""""/"`
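(One thing that might be worth ruling out, offered as a guess rather than a diagnosis: instruction-tuned GGUF models usually expect their own instruction format, and the completion-style call above sends the raw question. A sketch of wrapping the prompt — the exact template is an assumption, so please check the model card:)
```python
template = (
    "You are an AI programming assistant.\n"
    "### Instruction:\n{question}\n"
    "### Response:\n"
)
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run(question="Write a Python program to add two numbers."))
```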
Requesting you to look into it.
Please let me know in case you need more information.
Thank you.
I have tried the same model file with **[llama-cpp-python](https://github.com/abetlen/llama-cpp-python)** package and it works as expected.
Please find below the code that I have tried:
```
import json
import time
from llama_cpp import Llama
n_gpu_layers = 1 # Change this value based on your model and your GPU VRAM pool.
n_batch = 512
llm = Llama(model_path="../models/deepcoder-gguf/deepseek-coder-6.7b-instruct.Q5_K_M.gguf" , chat_format="llama-2", n_gpu_layers=n_gpu_layers,n_batch=n_batch)
start_time = time.time()
pp = llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are an python language assistant."},
{
"role": "user",
"content": "Write quick sort ."
}
])
end_time = time.time()
print("execution time:", {end_time - start_time})
print(pp["choices"][0]["message"]["content"])
```
Output :
```
## Quick Sort Algorithm in Python
Here is a simple implementation of the quicksort algorithm in Python:
```python
def partition(arr, low, high):
i = (low-1) # index of smaller element
pivot = arr[high] # pivot
for j in range(low , high):
if arr[j] <= pivot:
i += 1
arr[i],arr[j] = arr[j],arr[i]
arr[i+1],arr[high] = arr[high],arr[i+1]
return (i+1)
def quickSort(arr, low, high):
if low < high:
pi = partition(arr,low,high)
quickSort(arr, low, pi-1)
quickSort(arr, pi+1, high)
# Test the code
n = int(input("Enter number of elements in array: "))
print("Enter elements: ")
arr = [int(input()) for _ in range(n)]
quickSort(arr,0,n-1)
print ("Sorted array is:")
for i in range(n):
print("%d" %arr[i]),
This code first defines a helper function `partition()` that takes an array and two indices. It then rearranges the elements of the array so that all numbers less than or equal to the pivot are on its left, while all numbers greater than the pivot are on its right. The `quickSort()` function is then defined which recursively applies this partitioning process until the entire array is sorted.
The user can input their own list of integers and the program will output a sorted version of that list.
[/code]
Conclusion
In conclusion, Python provides several built-in functions for sorting lists such as `sort()` or `sorted()` but it's also possible to implement quick sort algorithm from scratch using custom function. This can be useful in situations where you need more control over the sorting process or when dealing with complex data structures.
````
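For comparison, one notable difference between the two snippets above is the prompt format: the llama-cpp-python call uses `chat_format="llama-2"`, while the LangChain call sends a bare string. Below is a minimal hedged sketch of wrapping the same LangChain `llm` in an instruction-style prompt; the exact template deepseek-coder-instruct expects is an assumption and should be checked against the model card.

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# Hypothetical instruction-style template; adjust to the model's documented format.
instruct_prompt = PromptTemplate.from_template(
    "You are a helpful coding assistant.\n"
    "### Instruction:\n{question}\n"
    "### Response:\n"
)
chain = LLMChain(llm=llm, prompt=instruct_prompt)
print(chain.run(question="Write a Python program to add two numbers."))
```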
### Expected behavior
It should run inference with the model just like the native [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) package does. | Not able to inference deepseek-coder-6.7b-instruct.Q5_K_M.gguf | https://api.github.com/repos/langchain-ai/langchain/issues/14593/comments | 6 | 2023-12-12T11:20:20Z | 2024-05-25T13:36:24Z | https://github.com/langchain-ai/langchain/issues/14593 | 2,037,539,816 | 14,593
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I have created a chatbot to chat with a SQL database using OpenAI and LangChain, but how do I store or export its output to Excel using LangChain?
I got some ideas from ChatGPT which I have integrated into my code, but there is an error while importing the modules it suggested.
Below is my code:
```python
import pandas as pd
import sqlalchemy as sal
import os, sys, openai
import constants
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain_experimental.sql import SQLDatabaseChain
from sqlalchemy import create_engine
# import ChatOpenAI
from langchain.chat_models import ChatOpenAI
from typing import List, Optional
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)
from langchain.sql_database import SQLDatabase
from langchain.text_splitter import TokenTextSplitter
from langchain.tools import BaseTool
from langchain.tools.sql_database.tool import QuerySQLDataBaseTool
from secret_key import openapi_key
from langchain import PromptTemplate
from langchain.models import ChatGPTClient  # this import fails
from langchain.utils import save_conversation  # this import fails

os.environ['OPENAI_API_KEY'] = openapi_key


def chat(question):
    from urllib.parse import quote_plus

    server_name = constants.server_name
    database_name = constants.database_name
    username = constants.username
    password = constants.password
    encoded_password = quote_plus(password)
    connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"

    # custom_suffix = """""
    # If the SQLResult is empty, the Answer should be "No results found". DO NOT hallucinate an answer if there is no result."""

    engine = create_engine(connection_uri)
    model_name = "gpt-3.5-turbo-16k"
    db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
    # db = SQLDatabase(engine, view_support=True, include_tables=['egv_emp_acadamics_ChatGPT'])
    llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
    db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)

    from langchain.prompts import PromptTemplate

    PROMPT = """
Given an input question, first create a syntactically correct mssql query to run,
then look at the results of the query and return the answer.
The question: {db_chain.run}
"""

    prompt_template = """ Use the following pieces of context to answer the question at the end.
If you don't know the answer, please think rationally and answer from your own knowledge base.
Don't consider the table which are not mentioned, if no result is matching with the keyword Please return the answer as invalid question
{context}
Question: {questions}
"""

    PROMPT = PromptTemplate(
        template=prompt_template, input_variables=["context", "questions"]
    )

    def split_text(text, chunk_size, chunk_overlap=0):
        text_splitter = TokenTextSplitter(
            chunk_size=chunk_size, chunk_overlap=chunk_overlap
        )
        yield from text_splitter.split_text(text)

    class QuerySQLDatabaseTool2(QuerySQLDataBaseTool):
        def _run(
            self,
            query: str,
            run_manager: Optional[CallbackManagerForToolRun] = None,
        ) -> str:
            result = self.db.run_no_throw(query)
            return next(split_text(result, chunk_size=14_000))

    class SQLDatabaseToolkit2(SQLDatabaseToolkit):
        def get_tools(self) -> List[BaseTool]:
            tools = super().get_tools()
            original_query_tool_description = tools[0].description
            new_query_tool = QuerySQLDatabaseTool2(
                db=self.db, description=original_query_tool_description
            )
            tools[0] = new_query_tool
            return tools

    return db_chain.run(question)


# answer = chat("give the names of employees who have completed PG")
answer = chat("give the list of employees joined in january and february of 2023 with Employee ID, Name, Department,Date of join")
print(answer)

# NOTE: `chatgpt` is not defined anywhere above
conversation_data = chatgpt.chat(prompt="convert into .csv and .xlsx only if the multiple values are asked in the question, if one a single thing is asked, just give the answer in chatbot no need to save the answer")
# conversation_data = chat("convert into .csv and .xlsx only if the multiple values are asked in the question, if one a single thing is asked, just give the answer in chatbot no need to save the answer")
save_conversation(conversation_data, "chat_data.csv")

df = pd.read_csv("chat_data.csv")
path = r"C:\Users\rndbcpsoft\OneDrive\Desktop\test\chat_data.xlsx"
df.to_excel(path, index=False)
print(f"Conversation data has been saved to '{path}' in Excel format.")
```
### Suggestion:
_No response_ | Issue: <How to store/export the output of a chatbot to excel ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/14592/comments | 5 | 2023-12-12T11:08:08Z | 2024-03-21T16:05:52Z | https://github.com/langchain-ai/langchain/issues/14592 | 2,037,519,560 | 14,592 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.9.12, LangChain 0.0.346
### Who can help?
@agola11 @3coins
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When you use caching with LangChain, it does not distinguish between different LLM models. For example, a response generated by Llama 2 was returned for a prompt sent to Claude 2.
```python
import datetime

from langchain.llms import Bedrock

# Assumption: an LLM cache (e.g. InMemoryCache) is enabled globally before this runs.


def ask_name(model_id):
    question = 'what is your name?'
    bedrock = Bedrock(model_id=model_id, model_kwargs={'temperature': 0.1})
    print('me: ' + question)
    t0 = datetime.datetime.now()
    print(f'{bedrock.model_id}: ' + bedrock(question).strip())
    print('({:.2f} sec)'.format((datetime.datetime.now() - t0).total_seconds()))
    print()


model_ids = ['meta.llama2-70b-chat-v1', 'anthropic.claude-v2']
for model_id in model_ids:
    ask_name(model_id)
    ask_name(model_id)
```
==>
```
me: what is your name?
meta.llama2-70b-chat-v1: Answer: My name is LLaMA, I'm a large language model trained by a team of researcher at Meta AI.
(2.24 sec)
me: what is your name?
meta.llama2-70b-chat-v1: Answer: My name is LLaMA, I'm a large language model trained by a team of researcher at Meta AI.
(0.00 sec)
me: what is your name?
anthropic.claude-v2: Answer: My name is LLaMA, I'm a large language model trained by a team of researcher at Meta AI.
(0.00 sec)
```
This is because of https://github.com/langchain-ai/langchain/blob/db6bf8b022c17353b46f97ab3b9f44ff9e88a488/libs/langchain/langchain/llms/bedrock.py#L235
```python
    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        _model_kwargs = self.model_kwargs or {}
        return {
            **{"model_kwargs": _model_kwargs},
        }
```
My current workaround is subclassing the `Bedrock` class:
```python
class MyBedrock(Bedrock):
    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {
            'model_id': self.model_id,
            **BedrockBase._identifying_params.__get__(self)
        }
```
This seems to work.
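For completeness, a hedged usage sketch of the workaround, assuming the global in-memory LLM cache is enabled the usual way for this LangChain version:

```python
import langchain
from langchain.cache import InMemoryCache

langchain.llm_cache = InMemoryCache()  # assumption: caching is enabled globally like this

llm = MyBedrock(model_id="anthropic.claude-v2", model_kwargs={"temperature": 0.1})
print(llm("what is your name?"))  # cached separately per model_id thanks to the override
```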
### Expected behavior
Llama 2 should reply with "My name is LLaMA..." while Claude 2 should reply with "My name is Claude." | Caching with Bedrock does not distinguish models or params | https://api.github.com/repos/langchain-ai/langchain/issues/14590/comments | 1 | 2023-12-12T10:17:59Z | 2024-03-19T16:06:07Z | https://github.com/langchain-ai/langchain/issues/14590 | 2,037,430,016 | 14,590
[
"hwchase17",
"langchain"
] | ### System Info
python: 3.11
langchain: 0.0.347-0.0.349
langchain_core: 0.0.12, 0.0.13
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import langchain_core.load


class Custom(langchain_core.load.Serializable):
    @classmethod
    def is_lc_serializable(cls) -> bool:
        return True


out = langchain_core.load.dumps(Custom())
langchain_core.load.loads(
    out,
    valid_namespaces=["langchain", "__main__"],
)
```
### Expected behavior
I'm expecting it to be possible to make a custom class serializable, but since langchain_core 0.0.13 the `valid_namespaces` argument is effectively ignored, because loading now relies on a whitelist (`SERIALIZABLE_MAPPING`) of what can be deserialized.
So I get the error:
> ValueError: Trying to deserialize something that cannot be deserialized in current version of langchain-core: ('__main__', 'Custom')
Triggered in [load.py#L68](https://github.com/langchain-ai/langchain/blob/v0.0.349/libs/core/langchain_core/load/load.py#L68)
---
I'm not sure if serialization was ever intended to be part of the public API, but I've found it convenient to be able to make my custom parts of chains also abide by the serialization protocol and still be able to dump/load my chains | Unable to dump/load custom classes since langchain_core 0.0.13 | https://api.github.com/repos/langchain-ai/langchain/issues/14589/comments | 3 | 2023-12-12T10:16:56Z | 2023-12-13T10:44:16Z | https://github.com/langchain-ai/langchain/issues/14589 | 2,037,428,125 | 14,589 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.9.13, LangChain 0.0.347, Windows 10
### Who can help?
@hwchase17 @agola
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Running this snippet:
```python
# Imports are an assumption for a runnable repro (paths as of langchain 0.0.347).
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chains import LLMChain
from langchain.llms.base import LLM
from langchain.output_parsers import OutputFixingParser
from langchain.prompts import PromptTemplate
from langchain.schema import BaseOutputParser, LLMResult, OutputParserException

answers = (answer for answer in ["answer1", "answer2"])


class CustomLLM(LLM):
    def _llm_type(self) -> str:
        return "custom"

    def _call(self, prompt: str, **_) -> str:
        return next(answers)


class CustomOutputParser(BaseOutputParser[str]):
    reject_output: bool = True

    def parse(self, text: str) -> str:
        if self.reject_output:
            self.reject_output = False
            raise OutputParserException(f"Parsing failed")
        return text

    def get_format_instructions(self) -> str:
        return "format instructions"


class CustomCallbackHandler(BaseCallbackHandler):
    def on_llm_end(self, response: LLMResult, **_):
        print(f"received LLM response: {response}")


llm = CustomLLM()
chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Testing prompt"),
    output_key="chain_output",
    verbose=True,
    output_parser=OutputFixingParser.from_llm(llm, CustomOutputParser()),
)
result = chain({}, callbacks=[CustomCallbackHandler()])
print(f"Chain result is {result}")
```
produces the following output:
```
> Entering new LLMChain chain...
Prompt after formatting:
Testing prompt
received LLM response: generations=[[Generation(text='answer1')]] llm_output=None run=None
> Finished chain.
Chain result is {'chain_output': 'answer2'}
```
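One hedged workaround sketch, reusing the classes from the snippet above: attaching the handler to the LLM itself (rather than only to the outer chain call) also captures the retry generation, although it does not address the chain-level callback propagation described below.

```python
# Assumption: constructor-level callbacks fire for every generate call,
# including the one triggered inside OutputFixingParser's retry chain.
handler = CustomCallbackHandler()
llm = CustomLLM(callbacks=[handler])
chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Testing prompt"),
    output_key="chain_output",
    output_parser=OutputFixingParser.from_llm(llm, CustomOutputParser()),
)
result = chain({})
```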
### Expected behavior
The output line `received LLM response: generations=[[Generation(text='answer1')]] llm_output=None run=None` should also appear for `answer2`, because that one is also generated while running the chain that was configured to use the `CustomCallbackHandler`.
The callbacks should also be used in the chain that is created inside the OutputFixingParser. In my opinion, the chain doing the retry should have the overall chain as a parent in the callback methods (`on_chain_start()` and so on). | OutputFixingParser should use callbacks | https://api.github.com/repos/langchain-ai/langchain/issues/14588/comments | 1 | 2023-12-12T09:35:53Z | 2024-03-19T16:06:02Z | https://github.com/langchain-ai/langchain/issues/14588 | 2,037,349,229 | 14,588
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Below is my code:
```python
confluence_url = config.get("confluence_url", None)
username = config.get("username", None)
api_key = config.get("api_key", None)
space_key = config.get("space_key", None)
documents = []
embedding = OpenAIEmbeddings()

loader = ConfluenceLoader(
    url=confluence_url,
    username=username,
    api_key=api_key
)

for space_key in space_key:
    documents.extend(loader.load(space_key=space_key, include_attachments=True, limit=100))
```
The error I am getting:
```
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://api.media.atlassian.com/file/UNKNOWN_MEDIA_ID/binary?token=sometoken&name=Invalid%20file%20id%20-%20UNKNOWN_MEDIA_ID&max-age=2940
```
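A hedged workaround sketch, reusing `loader` and `documents` from the snippet above: since the 404 points at an attachment media URL, one option is to fall back to loading a space without attachments when the attachment download fails (whether every failure is attachment-related is an assumption).

```python
space_keys = config.get("space_key", [])  # assumed to be a list of space keys

for key in space_keys:
    try:
        documents.extend(loader.load(space_key=key, include_attachments=True, limit=100))
    except Exception:
        # Retry the same space without attachments so one broken media id
        # does not abort the whole import.
        documents.extend(loader.load(space_key=key, include_attachments=False, limit=100))
```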
### Suggestion:
_No response_ | Issue: Getting error while integrating Confluence Spaces including attachments | https://api.github.com/repos/langchain-ai/langchain/issues/14586/comments | 3 | 2023-12-12T09:00:13Z | 2024-02-21T11:48:20Z | https://github.com/langchain-ai/langchain/issues/14586 | 2,037,287,446 | 14,586 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain Version: 0.0.335
Platform: Win11
Python Version: 3.11.5
Hi experts,
I'm trying to execute the RAG Search Example on the Langchain Doc: https://python.langchain.com/docs/expression_language/get_started
**Here is the code:**
```python
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import ChatPromptTemplate
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.schema import StrOutputParser
from langchain.schema.runnable import RunnableParallel, RunnablePassthrough

vectorstore = DocArrayInMemorySearch.from_texts(
    ["harrison worked at kensho", "bears like to eat honey"],
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()

template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI()
output_parser = StrOutputParser()

setup_and_retrieval = RunnableParallel(
    {"context": retriever, "question": RunnablePassthrough()}
)
chain = setup_and_retrieval | prompt | model | output_parser
chain.invoke("where did harrison work?")
```
**but the example fails with the ValidationError: 2 validation errors for DocArrayDoc.**
**Here is the error details:**
```
C:\Project\pythonProjectAI\.venv\Lib\site-packages\pydantic\_migration.py:283: UserWarning: `pydantic.error_wrappers:ValidationError` has been moved to `pydantic:ValidationError`.
warnings.warn(f'`{import_path}` has been moved to `{new_location}`.')
Traceback (most recent call last):
File "C:\Project\pythonProjectAI\AI.py", line 28, in <module>
chain.invoke("where did harrison work?")
File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\schema\runnable\base.py", line 1427, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\schema\runnable\base.py", line 1938, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\schema\runnable\base.py", line 1938, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^
File "C:\Program Files\Python\Lib\concurrent\futures\_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Program Files\Python\Lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\schema\retriever.py", line 112, in invoke
return self.get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\schema\retriever.py", line 211, in get_relevant_documents
raise e
File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\schema\retriever.py", line 204, in get_relevant_documents
result = self._get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\schema\vectorstore.py", line 657, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\vectorstores\docarray\base.py", line 127, in similarity_search
results = self.similarity_search_with_score(query, k=k, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\langchain\vectorstores\docarray\base.py", line 106, in similarity_search_with_score
query_doc = self.doc_cls(embedding=query_embedding) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Project\pythonProjectAI\.venv\Lib\site-packages\pydantic\main.py", line 164, in __init__
__pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
pydantic_core._pydantic_core.ValidationError: 2 validation errors for DocArrayDoc
text
Field required [type=missing, input_value={'embedding': [-0.0192381..., 0.010137099064823456]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.5/v/missing
metadata
Field required [type=missing, input_value={'embedding': [-0.0192381..., 0.010137099064823456]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.5/v/missing
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import ChatPromptTemplate
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.schema import StrOutputParser
from langchain.schema.runnable import RunnableParallel, RunnablePassthrough

vectorstore = DocArrayInMemorySearch.from_texts(
    ["harrison worked at kensho", "bears like to eat honey"],
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()

template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI()
output_parser = StrOutputParser()

setup_and_retrieval = RunnableParallel(
    {"context": retriever, "question": RunnablePassthrough()}
)
chain = setup_and_retrieval | prompt | model | output_parser
chain.invoke("where did harrison work?")
```
### Expected behavior
The example runs well. | ValidationError: 2 validation errors for DocArrayDoc returned when try to execute the RAG Search Example | https://api.github.com/repos/langchain-ai/langchain/issues/14585/comments | 19 | 2023-12-12T08:57:21Z | 2024-06-08T16:07:56Z | https://github.com/langchain-ai/langchain/issues/14585 | 2,037,282,618 | 14,585 |
[
"hwchase17",
"langchain"
] | https://github.com/langchain-ai/langchain/blob/76905aa043e3e604b5b34faf5e91d0aedb5ed6dd/libs/langchain/langchain/chains/graph_qa/cypher.py#L266C2-L269C25
Hi, I am using GraphCypherQAChain heavily, and sometimes cypher_query_corrector returns the generated Cypher as valid even though it has syntax problems or is not a Cypher query at all; during execution against Neo4j, this raises errors. Where do you think these errors should be handled, and how can the chain continue execution without interruption and still return a plausible response to the user? @tomasonjo
Example:
<img width="1104" alt="image" src="https://github.com/langchain-ai/langchain/assets/9192832/0cac12a9-0a4c-4446-af30-2f54ac3290c8">
Thanks! | GraphCypherQAChain Unhandled Exception while running Erroneous Cypher Queries | https://api.github.com/repos/langchain-ai/langchain/issues/14584/comments | 1 | 2023-12-12T08:18:57Z | 2024-03-19T16:05:58Z | https://github.com/langchain-ai/langchain/issues/14584 | 2,037,219,925 | 14,584 |
[
"hwchase17",
"langchain"
] | ### Feature request
Currently, SemanticSimilarityExampleSelector only passes `k` as a parameter to the vectorstore [see here](https://github.com/langchain-ai/langchain/blob/76905aa043e3e604b5b34faf5e91d0aedb5ed6dd/libs/core/langchain_core/example_selectors/semantic_similarity.py#L55C10-L55C10). The vectorstore, depending on the implementation, can take multiple other arguments; most notably, `filters` can be passed down [see here](https://github.com/langchain-ai/langchain/blob/76905aa043e3e604b5b34faf5e91d0aedb5ed6dd/libs/community/langchain_community/vectorstores/faiss.py#L505C13-L507C38).
### Motivation
Having the ability to filter down examples (on top of similarity search) would be very helpful in controlling the examples that are added to the prompt. This feature provides significantly more control over example selection.
### Your contribution
It is very easy to update this: add a new attribute `vectorstore_kwargs` to the class:
```python
class SemanticSimilarityExampleSelector(BaseExampleSelector, BaseModel):
    """Example selector that selects examples based on SemanticSimilarity."""

    vectorstore: VectorStore
    """VectorStore than contains information about examples."""
    k: int = 4
    """Number of examples to select."""
    example_keys: Optional[List[str]] = None
    """Optional keys to filter examples to."""
    input_keys: Optional[List[str]] = None
    """Optional keys to filter input to. If provided, the search is based on
    the input variables instead of all variables."""
    vectorstore_kwargs: Optional[Dict[str, Any]] = None
    """additional arguments passed to vectorstore for similarity search."""
```
and then update the `select_examples` function with
`example_docs = self.vectorstore.similarity_search(query, k=self.k, **self.vectorstore_kwargs)` | passing down vectorstore additional argument in SemanticSimilarityExampleSelector | https://api.github.com/repos/langchain-ai/langchain/issues/14583/comments | 1 | 2023-12-12T07:18:55Z | 2024-03-19T16:05:52Z | https://github.com/langchain-ai/langchain/issues/14583 | 2,037,136,239 | 14,583 |
[
"hwchase17",
"langchain"
] | ### System Info
python = "^3.10"
openai = "^1.3.8"
langchain = "^0.0.349"
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import chromadb
from langchain.embeddings import AzureOpenAIEmbeddings
from langchain.vectorstores.chroma import Chroma

client = chromadb.HttpClient(
    host=CHROMA_SERVER_HOST,
    port=CHROMA_SERVER_HTTP_PORT,
)

embeddings = AzureOpenAIEmbeddings(
    openai_api_type=AZURE_OPENAI_API_TYPE,
    azure_endpoint=AZURE_OPENAI_API_BASE,
    api_key=AZURE_OPENAI_API_KEY,
    openai_api_version=AZURE_OPENAI_API_VERSION,
    azure_deployment=AZURE_EMBEDDING_DEPLOYMENT_NAME,
)

vectordb = Chroma(
    client=client,
    collection_name=CHROMA_COLLECTION_NAME_FBIG_1000,
    embedding_function=embeddings,
)
```
### Expected behavior
TypeError: cannot pickle '_thread.RLock' object
When I use openai = "0.28.1" it doesn't have the above error | RetrievalQA and AzureOpenAIEmbeddings lead to TypeError: cannot pickle '_thread.lock' object | https://api.github.com/repos/langchain-ai/langchain/issues/14581/comments | 15 | 2023-12-12T06:39:49Z | 2024-07-27T16:03:39Z | https://github.com/langchain-ai/langchain/issues/14581 | 2,037,087,675 | 14,581 |
[
"hwchase17",
"langchain"
] | ### System Info
Ubuntu 20.04
CUDA 12.1
NVIDIA RTX 4070
### Who can help?
@hwchase17
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate

template = """Question: {question} Answer: Your Answer"""
prompt = PromptTemplate(template=template, input_variables=["question"])
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

n_gpu_layers = 80  # Change this value based on your model and your GPU VRAM pool.
n_batch = 512  # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.

llm = LlamaCpp(
    model_path="/home/rtx-4070/Downloads/openorca-platypus2-13b.Q4_K_M.gguf",
    n_gpu_layers=n_gpu_layers,
    n_batch=n_batch,
    callback_manager=callback_manager,
    verbose=True,
    n_ctx=2048,
)

from langchain_experimental.agents.agent_toolkits import create_csv_agent
from langchain.agents.agent_types import AgentType

agent = create_csv_agent(
    llm,
    "/home/rtx-4070/git_brainyneurals/langchain_local/docs/SPY.csv",
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
)

agent.run("How many rows are there?")
```
I am running the OpenOrca model, which I downloaded from Hugging Face, but I am facing this error. Could you please look into related solutions, or suggest other approaches or models? I would be grateful.
Thanks in advance.
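As a hedged debugging sketch, it can help to exercise the same `llm` outside the agent loop first, to confirm the base model produces sensible text before tuning the agent's output parsing (the prompt wording below is only an illustration):

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

check_prompt = PromptTemplate.from_template("You are a data assistant. {question}")
check_chain = LLMChain(llm=llm, prompt=check_prompt)
print(check_chain.run(question="In one sentence, what does a CSV file contain?"))
```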
### Expected behavior
I want to get a simple CSV agent working that runs on my RTX 4070 desktop GPU with any open-source model. | An output parsing error occurred. Could not parse LLM output create_csv_agent | https://api.github.com/repos/langchain-ai/langchain/issues/14580/comments | 2 | 2023-12-12T05:58:28Z | 2024-03-19T16:05:47Z | https://github.com/langchain-ai/langchain/issues/14580 | 2,037,040,612 | 14,580
[
"hwchase17",
"langchain"
] | ### System Info
LangChain: 0.0.348
Python 3.12.0
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the error:
1. Run the [notebook](https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/llms/predibase.ipynb) with the latest version of LangChain. The error occurs specifically in cell 13: `review = overall_chain.run("Tragedy at sunset on the beach")`.
### Expected behavior
The expected behavior is simply to return the output from the LLM. | Predibase TypeError: LlmMixin.prompt() got an unexpected keyword argument 'model_name' | https://api.github.com/repos/langchain-ai/langchain/issues/14564/comments | 1 | 2023-12-11T23:00:10Z | 2024-03-19T16:05:42Z | https://github.com/langchain-ai/langchain/issues/14564 | 2,036,678,638 | 14,564 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I have this issue when working with more than one table using the Llama 2 model. Say table 1 has columns A, B, C and table 2 has columns X, Y, Z. When I run the query, the chain often produces a SQL statement that assumes table 1 has columns X, Y, Z, mixing them up. Any suggestions on how to avoid this? I'm trying to solve it with prompts, but the model seems to ignore instructions in these cases.
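One mitigation sketch, under the assumption that constraining the chain to only the relevant tables (and letting `SQLDatabase` include sample rows in `table_info`) helps the model see each table's real columns; the model path, connection string, and table names below are hypothetical:

```python
from langchain.llms import LlamaCpp
from langchain.utilities import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain

llm = LlamaCpp(model_path="./llama-2-13b-chat.Q4_K_M.gguf", n_ctx=4096)  # hypothetical path
db = SQLDatabase.from_uri(
    "sqlite:///example.db",               # hypothetical connection string
    include_tables=["table1", "table2"],  # restrict to the tables the question needs
    sample_rows_in_table_info=2,          # expose real column names/values to the model
)
chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, use_query_checker=True)
print(chain.run("Which rows of table1 match table2 on the shared key?"))
```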
### Suggestion:
_No response_ | Issue: Problem when using model llama2 13b chat to create SQL queries in 2 or more tables, it mixes the columns of the tables | https://api.github.com/repos/langchain-ai/langchain/issues/14553/comments | 1 | 2023-12-11T19:58:47Z | 2024-03-18T16:08:58Z | https://github.com/langchain-ai/langchain/issues/14553 | 2,036,427,335 | 14,553 |
[
"hwchase17",
"langchain"
] | ### System Info
Name: langchain
Version: 0.0.348
Name: PyGithub
Version: 2.1.1
### Who can help?
@hwchase17
@agola11
```
Traceback (most recent call last):
File "/Users/mac/Dev Projects/Chainlit_qa/test.py", line 12, in <module>
github = GitHubAPIWrapper(
^^^^^^^^^^^^^^^^^
File "/Users/mac/Dev Projects/Chainlit_qa/myenv/lib/python3.11/site-packages/pydantic/v1/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mac/Dev Projects/Chainlit_qa/myenv/lib/python3.11/site-packages/pydantic/v1/main.py", line 1102, in validate_model
values = validator(cls_, values)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mac/Dev Projects/Chainlit_qa/myenv/lib/python3.11/site-packages/langchain/utilities/github.py", line 69, in validate_environment
installation = gi.get_installations()[0]
~~~~~~~~~~~~~~~~~~~~~~^^^
File "/Users/mac/Dev Projects/Chainlit_qa/myenv/lib/python3.11/site-packages/github/PaginatedList.py", line 62, in __getitem__
return self.__elements[index]
~~~~~~~~~~~~~~~^^^^^^^
IndexError: list index out of range
```
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run a basic Python file:
```python
from langchain.agents.agent_toolkits.github.toolkit import GitHubToolkit
from langchain.utilities.github import GitHubAPIWrapper
github = GitHubAPIWrapper()
toolkit = GitHubToolkit.from_github_api_wrapper(github)
tools = toolkit.get_tools()
```
### Expected behavior
Fetching github repository information | GithubAPIWrapper throws list index out of range error | https://api.github.com/repos/langchain-ai/langchain/issues/14550/comments | 8 | 2023-12-11T18:30:31Z | 2023-12-22T09:32:06Z | https://github.com/langchain-ai/langchain/issues/14550 | 2,036,284,698 | 14,550 |
[
"hwchase17",
"langchain"
] | ### Feature request
As of `langchain==0.0.348` in [`ChatVertexAI` here](https://github.com/langchain-ai/langchain/blob/v0.0.348/libs/langchain/langchain/chat_models/vertexai.py#L187-L191):
1. `vertexai.language_models.ChatSession.send_message` returns a `vertexai.language_models.MultiCandidateTextGenerationResponse` response
2. `ChatVertexAI` throws out the response's `grounding_metadata`, only preserving the `text`
The request is to not discard this useful metadata
### Motivation
I am trying to use the `response.grounding_metadata.citations` in my own code
### Your contribution
I will open a PR to make this:
```python
generations = [
    ChatGeneration(
        message=AIMessage(content=r.text),
        generation_info=r.raw_prediction_response,
    )
    for r in response.candidates
]
```
So the raw response remains accessible | Request: `ChatVertexAI` preserving `grounding_metadata` | https://api.github.com/repos/langchain-ai/langchain/issues/14548/comments | 1 | 2023-12-11T18:06:33Z | 2024-01-25T04:37:45Z | https://github.com/langchain-ai/langchain/issues/14548 | 2,036,244,262 | 14,548 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain 0.0.331, macOS Monterey, Python 3.10.9
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import UnstructuredHTMLLoader

loader = UnstructuredHTMLLoader("https://www.sec.gov/ix?doc=/Archives/edgar/data/40987/000004098720000010/gpc-12312019x10k.htm")
documents = loader.load()
```
```
FileNotFoundError: [Errno 2] No such file or directory: 'https://www.sec.gov/ix?doc=/Archives/edgar/data/40987/000004098720000010/gpc-12312019x10k.htm
```
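For context, `UnstructuredHTMLLoader` expects a local file path, which is why a URL raises `FileNotFoundError`. A hedged sketch of loading the page from the URL instead, assuming `WebBaseLoader` fits the intent (SEC.gov also expects a descriptive User-Agent, so the header below is an illustrative assumption):

```python
from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader(
    "https://www.sec.gov/Archives/edgar/data/40987/000004098720000010/gpc-12312019x10k.htm",
    header_template={"User-Agent": "research-script example@example.com"},  # hypothetical contact
)
documents = loader.load()
```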
### Expected behavior
Success loading .htm file | Does HTML Doc Loader accept .htm sites? | https://api.github.com/repos/langchain-ai/langchain/issues/14545/comments | 2 | 2023-12-11T16:25:09Z | 2024-04-10T16:15:24Z | https://github.com/langchain-ai/langchain/issues/14545 | 2,036,050,915 | 14,545 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain version = 0.0.344
Python version = 3.11.5
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is my code; I am unable to connect to the Power BI dataset.
```python
from langchain.agents.agent_toolkits import PowerBIToolkit
from langchain.utilities.powerbi import PowerBIDataset
from azure.identity import ClientSecretCredential
from azure.core.credentials import TokenCredential

# Create an instance of the language model (llm)
toolkit = PowerBIToolkit(
    powerbi=PowerBIDataset(
        dataset_id="",
        table_names=['WinShareTable'],
        credential=ClientSecretCredential(
            client_id="",
            client_secret="",
            tenant_id=""
        )
    ),
    llm=llm
)
```
I either get a `field "credential" not yet prepared so type is still a ForwardRef, you might need to call PowerBIDataset.update_forward_refs()` error, or a TokenCredential error.
### Expected behavior
Should allow to connect to PowerBi datasets | impossible to connect to PowerBI Datasets even after providing all the information | https://api.github.com/repos/langchain-ai/langchain/issues/14538/comments | 1 | 2023-12-11T14:07:35Z | 2024-03-18T16:08:54Z | https://github.com/langchain-ai/langchain/issues/14538 | 2,035,763,317 | 14,538 |
[
"hwchase17",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/13245
<div type='discussions-op-text'>
<sup>Originally posted by **yallapragada** November 12, 2023</sup>
I am testing a simple RAG implementation with Azure Cognitive Search.
I am seeing a "cannot import name 'Vector' from azure.search.documents.models" error when I invoke my chain. Origin of my error is line 434 in lanchain/vectorstores/azuresearch.py (from azure.search.documents.models import Vector)
This is the relevant code snippet; I get the import error when I execute `rag_chain.invoke(question)`:
```python
from langchain.schema.runnable import RunnablePassthrough
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models.azure_openai import AzureChatOpenAI

question = "my question.."

# vector_store is initialized using AzureSearch(); not including that snippet here
retriever = vector_store.as_retriever()

template = '''
Answer the question based on the following context:
{context}
Question: {question}
'''
prompt = ChatPromptTemplate.from_template(template=template)

llm = AzureChatOpenAI(
    deployment_name='MY_DEPLOYMENT_NAME',
    model_name='MY_MODEL',
    openai_api_base=MY_AZURE_OPENAI_ENDPOINT,
    openai_api_key=MY_AZURE_OPENAI_KEY,
    openai_api_version='2023-05-15',
    openai_api_type='azure'
)

rag_chain = {'context' : retriever, 'question' : RunnablePassthrough} | prompt | llm
rag_chain.invoke(question)
```
--------------
my package versions
langchain==0.0.331
azure-search-documents==11.4.0b11
azure-core==1.29.5
openai==0.28.1</div> | Stable release 11.4.0 of azure.search.documents.models not compatible with latest langchain version -> class Vector gone | https://api.github.com/repos/langchain-ai/langchain/issues/14534/comments | 1 | 2023-12-11T12:44:45Z | 2024-03-18T16:08:49Z | https://github.com/langchain-ai/langchain/issues/14534 | 2,035,602,125 | 14,534 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Page: https://python.langchain.com/docs/modules/memory/
In dark mode, there is very little contrast between inputs and outputs. Especially for pages imported from Jupyter notebooks, it can be really confusing figuring out which code blocks are safe to test in a `.py` function and which code blocks are intended for use in Jupyter.
Adding in explicit contrast or labeling between input and output blocks would be helpful.

### Idea or request for content:
_No response_ | DOC: Please add stronger contrast or labeling between notebook input and output blocks | https://api.github.com/repos/langchain-ai/langchain/issues/14532/comments | 2 | 2023-12-11T12:20:45Z | 2024-04-01T16:05:54Z | https://github.com/langchain-ai/langchain/issues/14532 | 2,035,556,970 | 14,532 |
[
"hwchase17",
"langchain"
] | ### Feature request
Weaviate has released a beta version of their [python client v4](https://weaviate.io/developers/weaviate/client-libraries/python) which seems to be more robust compared to v3. It follows a different structure but allows for more versatility and better error handling when it comes to queries. I think it would be great to add support in langchain for the new client.
### Motivation
I was using the v3 client (without langchain) to batch import data into Weaviate. It worked, but it was slower than I expected and also resulted in errors that I was not able to manage easily. The new v4 client supports their new [gRPC API](https://weaviate.io/developers/weaviate/api/grpc) which outperforms the traditional REST API that v3 is using.
### Your contribution
I will start creating some custom functions using Weaviate's new client to test its reliability. If I don't encounter any serious errors, I'll try to find the time and create a PR to add support for it in langchain. I think that support for both v3 and v4 should exist, at least until v4 becomes stable. | Support Weaviate client v4 | https://api.github.com/repos/langchain-ai/langchain/issues/14531/comments | 1 | 2023-12-11T12:15:23Z | 2024-03-12T13:14:58Z | https://github.com/langchain-ai/langchain/issues/14531 | 2,035,547,273 | 14,531 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.348
wsl
python 3.10.2
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm looking to run a simple RAG setup using Qdrant pre-instantiated with data (something like the [link from the docs](https://colab.research.google.com/drive/19RxxkZdnq_YqBH5kBV10Rt0Rax-kminD?usp=sharing#scrollTo=fvCHMT73SmKi)), but it's giving me the error in the title.
I'm posting it here since the error seems to be on the LangChain side, not the Qdrant side.
I've tried various ways of using LangChain to connect to Qdrant, but it always ends up with this error.
This happens even if I use the deprecated VectorDBQA or RetrievalQA.
```
from qdrant_client import QdrantClient
from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Qdrant
QDRANT_ENDPOINT = "localhost"
MODEL_TO_USE = 'BAAI/bge-base-en-v1.5'
collection_name = "collection"
client = QdrantClient(QDRANT_ENDPOINT, port=6333, timeout=120)
embeddings = HuggingFaceEmbeddings(model_name=MODEL_TO_USE)
qdrant = Qdrant(client, collection_name, embeddings)
qdrant.similarity_search("middle east")
```
gives
```
>>> qdrant.similarity_search("middle east")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/.local/lib/python3.10/site-packages/langchain/vectorstores/qdrant.py", line 274, in similarity_search
results = self.similarity_search_with_score(
File "/home/user/.local/lib/python3.10/site-packages/langchain/vectorstores/qdrant.py", line 350, in similarity_search_with_score
return self.similarity_search_with_score_by_vector(
File "/home/user/.local/lib/python3.10/site-packages/langchain/vectorstores/qdrant.py", line 595, in similarity_search_with_score_by_vector
results = self.client.search(
File "/home/user/.local/lib/python3.10/site-packages/qdrant_client/qdrant_client.py", line 340, in search
return self._client.search(
File "/home/user/.local/lib/python3.10/site-packages/qdrant_client/qdrant_remote.py", line 472, in search
search_result = self.http.points_api.search_points(
File "/home/user/.local/lib/python3.10/site-packages/qdrant_client/http/api/points_api.py", line 1388, in search_points
return self._build_for_search_points(
File "/home/user/.local/lib/python3.10/site-packages/qdrant_client/http/api/points_api.py", line 636, in _build_for_search_points
return self.api_client.request(
File "/home/user/.local/lib/python3.10/site-packages/qdrant_client/http/api_client.py", line 74, in request
return self.send(request, type_)
File "/home/user/.local/lib/python3.10/site-packages/qdrant_client/http/api_client.py", line 97, in send
raise UnexpectedResponse.for_response(response)
qdrant_client.http.exceptions.UnexpectedResponse: Unexpected Response: 400 (Bad Request)
Raw response content:
b'{"status":{"error":"Wrong input: Default vector params are not specified in config"},"time":0.000118885}'
```
Using this as a retriever also fails, which is what I need it for.
Testing with Qdrant alone works.
```
client.query(collection_name,"middle east") # using qdrant itself works
```
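For reference, a hedged sketch of pointing the wrapper at a named vector: collections created outside LangChain often use named vectors, which is one known trigger of the "Default vector params are not specified in config" error. The vector name below is an assumption; check the collection config for the real one.

```python
qdrant = Qdrant(client, collection_name, embeddings, vector_name="my_vector")
qdrant.similarity_search("middle east")
```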
### Expected behavior
For the queries to be fetched and not throw this error | b'{"status":{"error":"Wrong input: Default vector params are not specified in config"},"time":0.00036507}' | https://api.github.com/repos/langchain-ai/langchain/issues/14526/comments | 1 | 2023-12-11T10:30:41Z | 2024-03-18T16:08:44Z | https://github.com/langchain-ai/langchain/issues/14526 | 2,035,343,197 | 14,526 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain=0.0.348
openai=0.28.1
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
model= "chatglm3"
llm = ChatOpenAI(model_name=model,openai_api_key=api_key,openai_api_base=api_url)
db = SQLDatabase.from_uri("mysql+pymysql://.....")
toolkit = SQLDatabaseToolkit(db=db, llm=llm,use_query_checker=True)
agent_executor = create_sql_agent( llm=llm,toolkit=toolkit, verbose=True,agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,handle_parsing_errors=False
)
content=agent_executor.run("Look at the structure of the table ads_pm_empl_count first, then give the first three rows of data")
print(content)
<img width="875" alt="联想截图_20231211165355" src="https://github.com/langchain-ai/langchain/assets/73893296/8fcee38e-0235-44c5-8454-6fd368351488">
### Expected behavior
How do I fix this output format problem | When the self-deployed chatglm3 model is invoked based on the openai API specification and create_sql_agent is used to query the first three rows of the data table, the output format is reported to be incorrect. But there are no formatting errors with qwen. How do I fix chatglm3 | https://api.github.com/repos/langchain-ai/langchain/issues/14523/comments | 1 | 2023-12-11T08:58:58Z | 2024-03-18T16:08:39Z | https://github.com/langchain-ai/langchain/issues/14523 | 2,035,156,450 | 14,523 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain=0.0.348
python=3.9
openai=0.28.1
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
model= "chatglm3"
#llm = OpenAI(model_name=model,openai_api_key=api_key,openai_api_base=api_url)
llm = ChatOpenAI(model_name=model,openai_api_key=api_key,openai_api_base=api_url)
db = SQLDatabase.from_uri("mysql+pymysql://。。。。。")
toolkit = SQLDatabaseToolkit(db=db, llm=llm,use_query_checker=True)
#memory = ConversationBufferMemory(memory_key ="chat_history ")
agent_executor = create_sql_agent(
llm=llm,
toolkit=toolkit,
verbose=True,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
handle_parsing_errors=False
)
# content=agent_executor.output_schema.schema()
content=agent_executor.run("Look at the structure of the table ads_pm_empl_count first, then give the first three rows of data")
print(content)
### Expected behavior
How do I fix this output format problem | 1 | https://api.github.com/repos/langchain-ai/langchain/issues/14522/comments | 1 | 2023-12-11T08:50:01Z | 2023-12-11T09:00:17Z | https://github.com/langchain-ai/langchain/issues/14522 | 2,035,138,288 | 14,522 |
[
"hwchase17",
"langchain"
] | ### System Info
OS == Windows 11
Python == 3.10.11
Langchain == 0.0.348
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```bash
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from dotenv import load_dotenv
>>> load_dotenv()
True
>>> from langchain.agents import AgentType, initialize_agent
>>> from langchain.tools import StructuredTool
>>> from langchain.chat_models import ChatOpenAI
>>> llm = ChatOpenAI(model='gpt-3.5-turbo')
>>> def func0(a: int, b: int) -> int:
... return a+b
...
>>> def func1(a: int, b: int, d: dict[str, int]={}) -> int:
... print(d)
... return a+b
...
>>> def func2(a: int, b: int, d: dict[str, int]={'c': 0}) -> int:
... print(d)
... return a+b
...
>>> tool0 = StructuredTool.from_function(func0, name=func0.__name__, description='a+b')
>>> tool1 = StructuredTool.from_function(func1, name=func1.__name__, description='a+b')
>>> tool2 = StructuredTool.from_function(func2, name=func2.__name__, description='a+b')
>>> agent0 = initialize_agent(agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, tools=[tool0], llm=llm)
>>> agent1 = initialize_agent(agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, tools=[tool1], llm=llm)
>>> agent2 = initialize_agent(agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, tools=[tool2], llm=llm)
>>> agent0.run('hello')
'Hi there! How can I assist you today?'
>>> agent0.invoke(dict(input='hello'))
{'input': 'hello', 'output': 'Hi there! How can I assist you today?'}
>>> agent1.run('hello')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 507, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 288, in __call__
inputs = self.prep_inputs(inputs)
File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 435, in prep_inputs
raise ValueError(
ValueError: A single string input was passed in, but this chain expects multiple inputs ({'', 'input'}). When a chain expects multiple inputs, please call it by passing in a dictionary, eg `chain({'foo': 1, 'bar': 2})`
>>> agent1.invoke(dict(input='hello'))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 89, in invoke
return self(
File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 288, in __call__
inputs = self.prep_inputs(inputs)
File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 445, in prep_inputs
self._validate_inputs(inputs)
File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 197, in _validate_inputs
raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {''}
>>> agent2.run('hello')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 507, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 288, in __call__
inputs = self.prep_inputs(inputs)
File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 435, in prep_inputs
raise ValueError(
ValueError: A single string input was passed in, but this chain expects multiple inputs ({"'c'", 'input'}). When a chain expects multiple inputs, please call it by passing in a dictionary, eg `chain({'foo': 1, 'bar': 2})`
>>> agent2.invoke(dict(input='hello'))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 89, in invoke
return self(
File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 288, in __call__
inputs = self.prep_inputs(inputs)
File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 445, in prep_inputs
self._validate_inputs(inputs)
File "D:\workspace\agi-baby\venv\lib\site-packages\langchain\chains\base.py", line 197, in _validate_inputs
raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {"'c'"}
```
### Expected behavior
Given a `StructuredTool` which has an argument which defaults to a `dict` value, `StructuredChatAgent` with the tool should work.
In the above reproduction code, `agent1` and `agent2` should work just as `agent0` works.
| [Maybe Bug] `StructuredChatAgent` raises `ValueError` with a `StructuredTool` which has an argument which defaults to a `dict` default value | https://api.github.com/repos/langchain-ai/langchain/issues/14521/comments | 3 | 2023-12-11T08:41:13Z | 2024-03-18T16:08:34Z | https://github.com/langchain-ai/langchain/issues/14521 | 2,035,123,971 | 14,521 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain: 0.0.311
Python: 3.11
OS: macOS 11.6
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Thought: The user is asking for help implementing CRUD operations for a specific table in a MySQL database using the Go language. This is a programming task so I will use the programmer_agent tool to help with this.
Action:
```
{
"action": "programmer_agent",
"action_input": {
"task": {
"title": "Implement CRUD operations for limited_relationships_config table in MySQL using Go",
"description": "Write functions for adding, updating, deleting, and retrieving limited relationships configurations in a MySQL database using Go. The operations will be performed on the `limited_relationships_config` table.",
"type": "code"
}
}
}
```
### Expected behavior
Thought: The user is asking for help implementing CRUD operations for a specific table in a MySQL database using the Go language. This is a programming task so I will use the programmer_agent tool to help with this.
Action:
```
{
"action": "programmer_agent",
"action_input": {
"task": "Implement CRUD operations for limited_relationships_config table in MySQL using Go"
}
}
``` | StructuredChatAgent would go wrong when the input contain some code such as protobuf. | https://api.github.com/repos/langchain-ai/langchain/issues/14520/comments | 3 | 2023-12-11T07:44:54Z | 2024-03-18T16:08:29Z | https://github.com/langchain-ai/langchain/issues/14520 | 2,035,027,147 | 14,520 |
[
"hwchase17",
"langchain"
] | ### Feature request
I want to use GPT-2 for text generation and want to control the LLM.
### Motivation
GPT-2 is a smaller model and is well suited for next-word prediction.
### Your contribution
I want to use the code below for loading the GPT-2 model:
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = 'gpt2'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
prompt = 'want to do more '
config1 = {
'num_beams': 3, # Number of beams for beam search
'do_sample': True, # Whether to do sampling or not
'temperature': 0.6 # The value used to module the next token probabilities
}
config2 = {
'penalty_alpha': 0.6, # The penalty alpha for contrastive search
'top_k': 6 # The number of highest probability vocabulary tokens to keep for top-k-filtering
}
inputs = tokenizer(prompt, return_tensors='pt')
output = model.generate(**inputs, **config1) # Here, you should choose config1 or config2
result = tokenizer.decode(output[0], skip_special_tokens=True)
print("------->>>>.",result)
``` | how to use gpt2 with custom promt? | https://api.github.com/repos/langchain-ai/langchain/issues/14519/comments | 1 | 2023-12-11T07:38:08Z | 2024-03-18T16:08:23Z | https://github.com/langchain-ai/langchain/issues/14519 | 2,035,017,496 | 14,519 |
[
"hwchase17",
"langchain"
] | ### System Info
```langchain = "^0.0.345"```
I want to embed and store multiple documents in PGVector and RAG query the DB.
When saving documents, I am specifying a collection_name for each document.
(For example, if I have 3 documents, I have 3 collection_names)
Is it possible to separate collections like this?
Also, is the collection_name required when connecting to PGVector?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
db1 = PGVector.from_documents(
    embedding=embed,
    documents=documents,
    collection_name="my doc 1",
    connection_string=CONNECTION_STRING,
)

db2 = PGVector.from_documents(
    embedding=embed,
    documents=documents,
    collection_name="my doc 2",
    connection_string=CONNECTION_STRING,
)

# afterwards, to query the DB where the documents are stored:
db_connection = PGVector(
    embedding=embed,
    documents=documents,
    collection_name="my doc 2",  # I don't want to specify a collection_name.
    connection_string=CONNECTION_STRING,
)
```
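For reference, a minimal sketch of connecting back to one of the existing collections afterwards (the constructor takes no `documents`; if `collection_name` is omitted it falls back to a default collection name):

```python
db_connection = PGVector(
    connection_string=CONNECTION_STRING,
    embedding_function=embed,
    collection_name="my doc 2",  # whichever collection you want to query
)
docs = db_connection.similarity_search("my question", k=4)
```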
### Expected behavior
I want to connect to the PGVector only the first time, and then use that session to query the collection_name (1 document in my case). | Is 'collection_name' required when initializing 'PGVector'? | https://api.github.com/repos/langchain-ai/langchain/issues/14518/comments | 2 | 2023-12-11T05:36:15Z | 2024-03-28T14:13:27Z | https://github.com/langchain-ai/langchain/issues/14518 | 2,034,867,869 | 14,518 |
[
"hwchase17",
"langchain"
] | https://github.com/langchain-ai/langchain/blob/c0f4b95aa9961724ab4569049b4c3bc12ebbacfc/libs/langchain/langchain/vectorstores/chroma.py#L742
This classmethod breaks the naming pattern for the embeddings argument: the constructor calls it `embedding_function`, while `from_documents` just calls it `embedding`.
It's a small issue, but it definitely makes the API a bit confusing to navigate without diving into the code/docs.
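For illustration, a short sketch of the two current entry points:

```python
from langchain.schema import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

emb = OpenAIEmbeddings()
docs = [Document(page_content="hello world")]
db1 = Chroma(embedding_function=emb)               # constructor calls it `embedding_function`
db2 = Chroma.from_documents(docs, embedding=emb)   # classmethod calls it `embedding`
```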
Happy to PR it in if worthwhile | Breaking of pattern in `from_document` function | https://api.github.com/repos/langchain-ai/langchain/issues/14517/comments | 1 | 2023-12-11T05:30:25Z | 2024-03-18T16:08:13Z | https://github.com/langchain-ai/langchain/issues/14517 | 2,034,861,990 | 14,517 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.10.2
langchain version 0.0.339
WSL
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am using Qdrant on Docker with information preloaded. However, I'm unable to get it to search:
```
MODEL_TO_USE = 'all-mpnet-base-v2'
client = QdrantClient(QDRANT_ENDPOINT, port=6333, timeout=120)
embeddings = HuggingFaceEmbeddings(model_name=MODEL_TO_USE)
qdrant = Qdrant(client, collection_name, embeddings)
conversation_chain = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=qdrant.as_retriever(search_kwargs={'k': 5}),
get_chat_history=lambda h : h,
memory=memory
)
# im using a streamlit frontend
import streamlit as st
st.session_state.conversation = conversation_chain
# during chat
result = st.session_state.conversation({
"question": user_query,
"chat_history": st.session_state['chat_history']})
```
Result would then lead to the error.
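For context: this error typically appears when the points in the collection were not written with the wrapper's default payload layout, so `page_content` comes back as `None`. A hedged sketch of pointing the wrapper at the actual payload keys (the key names below are assumptions about the collection):

```python
qdrant = Qdrant(
    client,
    collection_name,
    embeddings,
    content_payload_key="text",      # whatever key the points store their text under
    metadata_payload_key="metadata",
)
```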
### Expected behavior
Chat is supposed to work; instead I get:
```
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Document
page_content
none is not an allowed value (type=type_error.none.not_allowed)
``` | Qdrant retriever with existing data leads to pydantic.error_wrappers.ValidationError: 1 validation error for Document page_content none is not an allowed value (type=type_error.none.not_allowed) | https://api.github.com/repos/langchain-ai/langchain/issues/14515/comments | 16 | 2023-12-11T02:50:24Z | 2024-06-25T10:46:41Z | https://github.com/langchain-ai/langchain/issues/14515 | 2,034,721,756 | 14,515 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
_No response_
### Idea or request for content:
_No response_ | DOC: Could load GGUF models from https | https://api.github.com/repos/langchain-ai/langchain/issues/14514/comments | 21 | 2023-12-11T02:45:28Z | 2024-03-19T16:05:38Z | https://github.com/langchain-ai/langchain/issues/14514 | 2,034,717,889 | 14,514 |
[
"hwchase17",
"langchain"
] | ### Feature request
Customize how messages are formatted in MessagesPlaceholder
Currently, history messages are always formatted as:
```
Human: ...
AI: ...
```
Popular chat fine-tunings use all sorts of different formats.
Example:
```
<|im_start|>user
...<|im_end|>
<|im_start|>assistant
...<|im_end|>
```
There's currently no way to change how history messages are prompted.
### Motivation
I wanted to use langchain to make chatbots. Currently that use case is not well supported.
### Your contribution
Here's how I got it to work. I have to manually parse the history in a previous step.
<img width="683" alt="image" src="https://github.com/langchain-ai/langchain/assets/2053475/15fe31e2-cc23-47f2-9d0c-d5670f22768a">
| Customize how messages are formatted in MessagesPlaceholder | https://api.github.com/repos/langchain-ai/langchain/issues/14513/comments | 1 | 2023-12-10T23:33:17Z | 2024-03-17T16:10:46Z | https://github.com/langchain-ai/langchain/issues/14513 | 2,034,583,189 | 14,513 |
[
"hwchase17",
"langchain"
] | ### System Info
0.0.348, linux , python 3.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
def customer_support_invoke():
    ...  # build `chain` here
    chain_with_history = RunnableWithMessageHistory(
        chain,
        lambda session_id: RedisChatMessageHistory(session_id, url=REDIS_URL, ttl=600),
        input_messages_key="inputs",
        history_messages_key="history",
    )
    return chain_with_history

response = customer_support_invoke().stream(
{"inputs": userChats.inputs, "profile": userChats.profile},
config={"configurable": {"session_id": "123"}},
)
for s in response:
print(s.content, end="", flush=True)
```
If I stream twice, the second call raises the error "ValueError: Got unexpected message type: AIMessageChunk".
### Expected behavior
streaming correctly | RunnableWithMessageHistory streaming bug | https://api.github.com/repos/langchain-ai/langchain/issues/14511/comments | 11 | 2023-12-10T20:43:15Z | 2024-04-05T16:07:05Z | https://github.com/langchain-ai/langchain/issues/14511 | 2,034,526,238 | 14,511 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
```python
import requests
from typing import Optional
from langchain.tools import StructuredTool

def post_message(url: str, body: dict, parameters: Optional[dict]=None) -> str:
"""Sends a POST request to the given url with the given body and parameters."""
result = requests.post(url, json=body, params=parameters)
return result.text
custom_tool = StructuredTool.from_function(post_message)
from langchain.agents import initialize_agent, AgentType
from langchain.chat_models import ChatOpenAI
tools = [custom_tool] # Add any tools here
llm = ChatOpenAI(temperature=0) # or any other LLM
agent_chain = initialize_agent(tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
```
How do I run this and get results back if I have an API endpoint?
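A minimal way to invoke it would be something like the following (the URL and payload are placeholders):

```python
result = agent_chain.run(
    "Send a POST request to https://example.com/api/items "
    "with the body {'name': 'test'} and tell me what the response was."
)
print(result)
```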
### Idea or request for content:
_No response_ | Help me run this | https://api.github.com/repos/langchain-ai/langchain/issues/14508/comments | 8 | 2023-12-10T16:37:15Z | 2023-12-11T04:49:51Z | https://github.com/langchain-ai/langchain/issues/14508 | 2,034,443,411 | 14,508 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
The in-memory cache section of the LLM Caching documentation shows uncached and subsequent cached responses that differ. The later examples show cached responses that are the same as the uncached response, which makes more sense.
### Idea or request for content:
The cached response under LLM Caching: In-memory should be "\n\nWhy couldn't the bicycle stand up by itself? It was...two tired!" | DOC: LLM Caching example using in-memory cache is unclear | https://api.github.com/repos/langchain-ai/langchain/issues/14505/comments | 1 | 2023-12-10T15:22:40Z | 2024-03-17T16:10:41Z | https://github.com/langchain-ai/langchain/issues/14505 | 2,034,414,953 | 14,505 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain version: 0.0.348
Python version: 3.10.6
Platform: Linux
The issue is that when using a ConversationalRetrievalChain with the Kendra retriever and memory, any follow-up question gives this error.
My guess is that QueryText somehow receives more than just the {question} value that was asked.

It works for the first query, but any follow-up question (i.e. while memory is not empty) gives this error.
### Who can help?
@hwchase17
@ago
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce error:
Create the retriever object:
```python
from langchain.retrievers import AmazonKendraRetriever

retriever = AmazonKendraRetriever(
    index_id=kendra_index_id,
    top_k=5,
)
```
Then use it in a ConversationalRetrievalChain:
```python
from langchain.chains import ConversationalRetrievalChain

chain = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    memory=memory,
    verbose=True,
    condense_question_prompt=chat_prompt,
    combine_docs_chain_kwargs=dict(prompt=rag_prompt),
)
chain.run({"question": question})
```
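The snippet assumes `llm`, `chat_prompt`, and `rag_prompt` are already defined, together with a memory roughly along these lines (a sketch; the exact memory used in my setup is equivalent):

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
```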
Run at least twice, to get the error.
### Expected behavior
There shouldn't be any such error.
Question or QueryText is always less than 1000 characters. | AmazonKendraRetriever: Error The provided QueryText has a character count of 1020, which exceeds the limit. | https://api.github.com/repos/langchain-ai/langchain/issues/14494/comments | 5 | 2023-12-09T16:34:11Z | 2024-03-17T16:10:37Z | https://github.com/langchain-ai/langchain/issues/14494 | 2,033,925,929 | 14,494 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I tried to work on SQL cutsom prompt, but it didn't work and is still giving the wrong sql queries . Here is the code : def process_user_input(user_input):
create_db()
input_db = SQLDatabase.from_uri('sqlite:///sample_db_2.sqlite')
llm_1 = OpenAI(temperature=0)
db_agent = SQLDatabaseChain.from_llm(llm = llm_1,
db = input_db,
verbose=True,)
chain = create_sql_query_chain(ChatOpenAI(temperature=0), input_db)
response = chain.invoke({"question": user_input})
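Note that the code above never actually passes a custom prompt. A sketch of wiring one in (the wording is illustrative, and the template should keep the `{input}`, `{table_info}`, and `{top_k}` placeholders):

```python
from langchain.prompts import PromptTemplate

CUSTOM_PROMPT = PromptTemplate.from_template(
    "You are a SQLite expert. Given an input question, write a syntactically correct SQLite query.\n"
    "Unless told otherwise, return at most {top_k} rows.\n"
    "Only use the following tables:\n{table_info}\n\n"
    "Question: {input}"
)

chain = create_sql_query_chain(ChatOpenAI(temperature=0), input_db, prompt=CUSTOM_PROMPT)
response = chain.invoke({"question": user_input})
```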
### Suggestion:
_No response_ | why the sql langchain's custom prompt is not working? | https://api.github.com/repos/langchain-ai/langchain/issues/14487/comments | 2 | 2023-12-09T12:19:40Z | 2024-03-17T16:10:32Z | https://github.com/langchain-ai/langchain/issues/14487 | 2,033,828,081 | 14,487 |
[
"hwchase17",
"langchain"
] | ### System Info
I have a retriever implementation like this:
```
def get_vector_store(options: StoreOptions) -> VectorStore:
"""Gets the vector store for the given options."""
vector_store: VectorStore
embedding = get_embeddings()
store_type = os.environ.get("STORE")
if store_type == StoreType.QDRANT.value:
client = qdrant_client.QdrantClient(
url=os.environ["QDRANT_URL"],
prefer_grpc=True,
api_key=os.getenv("QDRANT_API_KEY", None),
)
vector_store = Qdrant(
client, collection_name=options.namespace, embeddings=embedding
)
# vector_store = Qdrant.from_documents([], embedding, url='http://localhost:6333', collection=options.namespace)
else:
raise ValueError("Invalid STORE environment variable value")
return vector_store
```
get-embeddings.py
```
return OllamaEmbeddings(base_url=f"host.docker.internal:11434", model="mistral")
```
```
knowledgebase: VectorStore = get_vector_store(StoreOptions("knowledgebase"))
async def get_relevant_docs(text: str, bot_id: str) -> Optional[str]:
try:
kb_retriever = knowledgebase.as_retriever(
search_kwargs={
"k": 3,
"score_threshold": vs_thresholds.get("kb_score_threshold"),
"filter": {"bot_id": bot_id},
},
)
result = kb_retriever.get_relevant_documents(text)
if result and len(result) > 0:
# Assuming result is a list of objects and each object has a page_content attribute
all_page_content = "\n\n".join([item.page_content for item in result])
return all_page_content
return None
except Exception as e:
logger.error(
"Error occurred while getting relevant docs",
incident="get_relevant_docs",
payload=text,
error=str(e),
)
return None
```
As long as I use the ChatGPT embeddings and chat models, I get the correct outputs. Once I switch to Ollama, none of my retrievers work.
I can see the documents being ingested into Qdrant, which means the embeddings are being produced, but the retrievers fail to retrieve any documents.
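One thing worth checking: relevance scores are not comparable across embedding models, so a `score_threshold` tuned for OpenAI embeddings can easily filter out every hit when switching to `OllamaEmbeddings`. A quick diagnostic sketch:

```python
docs_and_scores = knowledgebase.similarity_search_with_score("some phrase you know is in the data", k=3)
for doc, score in docs_and_scores:
    print(score, doc.page_content[:80])
```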
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
ss
### Expected behavior
retrievers should be able to fetch the documents from qdrant irrespective of embedding models being used | Retrievers don't seem to work properly with ollama | https://api.github.com/repos/langchain-ai/langchain/issues/14485/comments | 4 | 2023-12-09T08:06:14Z | 2023-12-10T04:17:57Z | https://github.com/langchain-ai/langchain/issues/14485 | 2,033,718,595 | 14,485 |
[
"hwchase17",
"langchain"
] | ### Feature request
It would be nice to write code for any generic LLM to use in a given chain, being able to specify its provider and parameters in a config file outside the source code.
Is there a built-in way to do so? What major roadblocks would you see in doing that?
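For illustration, a minimal factory of the kind being asked for (the provider names and config keys are made up for this sketch):

```python
from langchain.chat_models import ChatAnthropic, ChatOpenAI

def llm_from_config(cfg: dict):
    provider = cfg["provider"]          # e.g. read from a YAML/JSON config file
    params = cfg.get("params", {})
    if provider == "openai":
        return ChatOpenAI(**params)
    if provider == "anthropic":
        return ChatAnthropic(**params)
    raise ValueError(f"Unknown provider: {provider}")
```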
### Motivation
Most examples in the docs and around the Internet hard-code in the source whether the LLM used in a given chain comes from OpenAI, Anthropic, LLaMA, etc., which results in different codebases. It would be great to write one unified codebase and compare the performance of different open-source or API LLMs by changing a single line of config.
This would be especially relevant when the specific LLM provider is not set or known from the get-go and multiple ones should be compared to find the most suitable one in terms of perfomance, efficiency and cost for any given LLM-driven application.
### Your contribution
I don't have time for a PR now, but since I have been doing a similar thing for our private codebase at the laboratory, combining openAI, anthropic and LLaMA under a single unified spell without using the langchain framework, I may be interested in supporting that in the future as the framework already has lots of interesting features in terms of tooling, memory and storage. | generic codebase? | https://api.github.com/repos/langchain-ai/langchain/issues/14483/comments | 7 | 2023-12-09T07:06:17Z | 2024-04-09T16:14:17Z | https://github.com/langchain-ai/langchain/issues/14483 | 2,033,679,268 | 14,483 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.335
### Who can help?
@hwchase17 @agol
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a new object of the `langchain.llms.vertexai.VertexAI` class.
2. Add the attribute `request_timeout` with some value.
3. Run the model. The specified timeout is not respected.
### Expected behavior
VertexAI request should timeout as per the `request_timeout` specified and then retried as per retry configuration. This works flawlessly for `AzureChatOpenAI`. | VertexAI classs doesn't support request_timeout | https://api.github.com/repos/langchain-ai/langchain/issues/14478/comments | 2 | 2023-12-09T03:28:39Z | 2024-03-18T16:08:03Z | https://github.com/langchain-ai/langchain/issues/14478 | 2,033,581,284 | 14,478 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Hi,
I'm trying to understand _**Pinecone.from_documents**_ as shown [here ](https://python.langchain.com/docs/integrations/vectorstores/pinecone).
I saw it used the same way in another tutorial as well and it seems like it should be calling OpenAIEmbeddings.embed_documents at some point and then upserting the texts and vectors to the pinecone index, but I can't find the actual python file for the method or any documentation anywhere.
I'm trying to adapt it and need to understand how **_from_documents_** works. Specifically I am trying to do the following and am curious if the method could be used in this way
My goals:
- Use Pinecone.from_documents(documents, embeddings, index_name=index_name, namespace=namespace) in a for loop where the namespace changes on each iteration and the "chunked" documents are different. Each namespace represents a different chunking strategy for an experiment.
Note that I defined embeddings=OpenAIEmbeddingsWrapper(model=embedding_model_name) before the for loop, where OpenAIEmbeddingsWrapper is a wrapper class around the OpenAIEmbeddings object, and embedding_model_name="text-embedding-ada-002".
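A sketch of the loop being described (`chunked_documents_by_strategy` is a placeholder for however the per-namespace chunks are produced):

```python
for namespace, docs in chunked_documents_by_strategy.items():
    Pinecone.from_documents(
        documents=docs,
        embedding=embeddings,
        index_name=index_name,
        namespace=namespace,
    )
```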
Why I'm asking:
- It seems like the number of texts and vectors that I can extract from OpenAIEmbeddingsWrapper (using the OpenAIEmbeddings.embed_documents method) for each namespace doesn't match what's in Pinecone (610 texts/vectors from the method vs 1251 in Pinecone).
### Idea or request for content:
Can you share some details about Pinecone.from_documents and if it can be used multiple times to upsert chunk documents onto a pinecone index? | DOC: Need clarity on Pinecone.from_documents and OpenAIEmbeddings. | https://api.github.com/repos/langchain-ai/langchain/issues/14472/comments | 7 | 2023-12-08T22:48:39Z | 2024-03-18T16:07:59Z | https://github.com/langchain-ai/langchain/issues/14472 | 2,033,402,079 | 14,472 |
[
"hwchase17",
"langchain"
] | ### System Info
OS: Using docker image amd64/python:3.10-slim
Python Version: 3.10.13
Langchain Version: 0.0.336
OpenAI Version: 0.27.7
Tenacity Version: 4.65.0
### Who can help?
@agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When I try to use an llm with a custom openai_api_base argument within an agent, the agent appears to be attempting to access the **OpenAI** API endpoint instead of the custom one I have specified.
Running:
```python
llm = ChatOpenAI(
    default_headers={"api-key": "**REDACTED**"},
    openai_api_base="**REDACTED**",
    openai_api_key="none",
).bind(stop=["\nObservation"])
tools = []
tools.append(Tool.from_function(func=self.get_scores,
name="get_scores",
description="function to get scores"))
prompt = PromptTemplate.from_template("""Answer the following questions as best you can. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
Thought:{agent_scratchpad}""")
prompt = prompt.partial(tools=render_text_description(tools),
tool_names=", ".join([t.name for t in tools]),
)
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
}
| prompt
| llm
| ReActSingleInputOutputParser()
)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
output = agent_executor.invoke({"input": "foo"})
```
yields:
`File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 87, in invoke
return self(
File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.10/site-packages/langchain/agents/agent.py", line 1245, in _call
next_step_output = self._take_next_step(
File "/usr/local/lib/python3.10/site-packages/langchain/agents/agent.py", line 1032, in _take_next_step
output = self.agent.plan(
File "/usr/local/lib/python3.10/site-packages/langchain/agents/agent.py", line 385, in plan
output = self.runnable.invoke(inputs, config={"callbacks": callbacks})
File "/usr/local/lib/python3.10/site-packages/langchain/schema/runnable/base.py", line 1427, in invoke
input = step.invoke(
File "/usr/local/lib/python3.10/site-packages/langchain/schema/runnable/base.py", line 2787, in invoke
return self.bound.invoke(
File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 142, in invoke
self.generate_prompt(
File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 459, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 349, in generate
raise e
File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 339, in generate
self._generate_with_cache(
File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 492, in _generate_with_cache
return self._generate(
File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 422, in _generate
response = self.completion_with_retry(
File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 352, in completion_with_retry
return _completion_with_retry(**kwargs)
File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 325, in iter
raise retry_exc.reraise()
File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 158, in reraise
raise self.last_attempt.result()
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 350, in _completion_with_retry
return self.client.create(**kwargs)
File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 230, in request
resp, got_stream = self._interpret_response(result, stream)
File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 624, in _interpret_response
self._interpret_response_line(
File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
raise self.handle_error_response(
File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 337, in handle_error_response
raise error.APIError(
openai.error.APIError: Invalid response object from API: '{\n "detail": "No authorization token provided",\n "status": 401,\n "title": "Unauthorized",\n "type": "about:blank"\n}\n' (HTTP response code was 401)`
When I change the openai_api_base to something nonsensical, the same error is returned, making me think that it is using OpenAI's API base and not the custom one specified.
### Expected behavior
I would expect the agent to work as shown here: https://python.langchain.com/docs/modules/agents/agent_types/react | agent executor not using custom openai_api_base | https://api.github.com/repos/langchain-ai/langchain/issues/14470/comments | 10 | 2023-12-08T21:41:20Z | 2024-03-18T16:07:54Z | https://github.com/langchain-ai/langchain/issues/14470 | 2,033,344,391 | 14,470 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.346
### Who can help?
@hwch
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://github.com/langchain-ai/langchain/pull/14266 added a deprecation for the `input_variables` argument of `PromptTemplate.from_file`; it was released in 0.0.346.
However, https://github.com/langchain-ai/langchain/blob/v0.0.346/libs/langchain/langchain/chains/llm_summarization_checker/base.py#L20-L31 still uses `input_variables` at the module-level. So now this `DeprecationWarning` is emitted simply for importing from LangChain.
Can we fix this, so LangChain isn't emitting `DeprecationWarning`s?
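A minimal reproduction sketch (run in a fresh interpreter so the module is not already imported; it only captures the warning rather than asserting on its exact text):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    from langchain.chains import LLMSummarizationCheckerChain  # noqa: F401

print([str(w.message) for w in caught])  # includes the input_variables deprecation message
```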
### Expected behavior
I expect LangChain to not automatically emit `DeprecationWarning`s when importing from it | DeprecationWarning: `input_variables' is deprecated and ignored | https://api.github.com/repos/langchain-ai/langchain/issues/14467/comments | 1 | 2023-12-08T21:10:46Z | 2023-12-13T01:43:28Z | https://github.com/langchain-ai/langchain/issues/14467 | 2,033,316,778 | 14,467 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi, I'm using the llama2 model for SQL, and I modified it to work directly using LLM. I also added more tables to test the model, and I'm modifying the prompt. When I do this, it generates longer queries based on the question I send, but when the query is very long, it doesn't complete generating everything. I checked the response character count, and on average, it returns around 400+- maximum response characters (I tried increasing and removing the character limit, but it didn't solve the problem).
```python
llm = Replicate(
    model=llama2_13b,
    input={"temperature": 0.1,
           "max_length": 2000,
```
Incorrect return type.
Retrieve the performance of Miles Norris in his most recent game.
```sql
SELECT pg_player_game_stats.Points, pg_player_game_stats.Rebounds, pg_player_game_stats.Assists
FROM nba_roster_temp
JOIN player_game_stats ON nba_roster_temp."PlayerID" = player_game_stats."PlayerID"
WHERE nba_roster
```
As you can see, the SQL query it returns is incomplete; here I'm also counting the number of characters.
Character count: 401
Any help you can provide would be appreciated.
### Suggestion:
_No response_ | Issue: llama2-sql for long queries doesn't return the complete query. | https://api.github.com/repos/langchain-ai/langchain/issues/14465/comments | 1 | 2023-12-08T19:31:22Z | 2024-03-16T16:14:16Z | https://github.com/langchain-ai/langchain/issues/14465 | 2,033,203,436 | 14,465 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi, I'm trying to implement the [web scraping tutorial ](https://python.langchain.com/docs/use_cases/web_scraping#llm-with-function-calling) using ChatOllama instead of ChatOpenAI.
This is what I'm trying to do:
```
import pprint
from langchain.chains import create_extraction_chain
from langchain.document_loaders import AsyncChromiumLoader
from langchain.document_transformers import BeautifulSoupTransformer
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chat_models import ChatOllama
def extract(content: str, schema: dict, llm):
return create_extraction_chain(schema=schema, llm=llm).run(content)
def scrape_with_playwright(urls, schema, llm):
loader = AsyncChromiumLoader(urls)
docs = loader.load()
bs_transformer = BeautifulSoupTransformer()
docs_transformed = bs_transformer.transform_documents(
docs, tags_to_extract=["span"]
)
print("Extracting content with LLM")
# Grab the first 1000 tokens of the site
splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
chunk_size=1000, chunk_overlap=0
)
splits = splitter.split_documents(docs_transformed)
# Process the first split
extracted_content = extract(schema=schema, content=splits[0].page_content, llm=llm)
return extracted_content
if __name__ == '__main__':
llm = ChatOllama(base_url="https://localhost:11434", model="llama2")
schema = {
"properties": {
"news_article_title": {"type": "string"},
"news_article_summary": {"type": "string"},
},
"required": ["news_article_title", "news_article_summary"],
}
urls = ["https://www.wsj.com"]
extracted_content = scrape_with_playwright(urls, schema=schema, llm=llm)
pprint.pprint(extracted_content)
```
Instead of the results shown I get this error: `requests.exceptions.SSLError: HTTPSConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate/ (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1006)')))` when the `extract` function is called.
Could anyone please help me understand what I'm doing wrong? Thanks!
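For what it's worth, `WRONG_VERSION_NUMBER` usually means an `https://` URL is being pointed at a plain-HTTP endpoint, and Ollama serves plain HTTP on port 11434 by default, so a sketch of the likely fix is simply:

```python
llm = ChatOllama(base_url="http://localhost:11434", model="llama2")
```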
### Suggestion:
_No response_ | Web Scraping with ChatOllama gives SSL: WRONG_VERSION_NUMBER | https://api.github.com/repos/langchain-ai/langchain/issues/14450/comments | 6 | 2023-12-08T15:19:32Z | 2024-07-03T07:29:16Z | https://github.com/langchain-ai/langchain/issues/14450 | 2,032,847,275 | 14,450 |
[
"hwchase17",
"langchain"
During the recent initiative to secure API keys with `SecretStr` (https://github.com/langchain-ai/langchain/issues/12165), some implementations and their corresponding tests were flawed. More specifically, they do not really mask the API key.
For instance, in `libs/langchain/langchain/chat_models/javelin_ai_gateway.py` we have:
```
@property
def _default_params(self) -> Dict[str, Any]:
params: Dict[str, Any] = {
"gateway_uri": self.gateway_uri,
"javelin_api_key": cast(SecretStr, self.javelin_api_key).get_secret_value(),
"route": self.route,
**(self.params.dict() if self.params else {}),
}
return params
```
In the snippet above, `self.javelin_api_key` is cast to `SecretStr` and then `.get_secret_value()` is immediately invoked, which simply retrieves the original string. Note that the Javelin chat model lacks unit tests. The cast could be a way to handle the case where the API key is `None`, but then it looks as if there is no masking at all, and it would be preferable to address the `None` case directly.
It's worth noting that this pattern is repeated in tests, such as in `libs/langchain/tests/integration_tests/chat_models/test_baiduqianfan.py`:
```
def test_uses_actual_secret_value_from_secret_str() -> None:
"""Test that actual secret is retrieved using `.get_secret_value()`."""
chat = QianfanChatEndpoint(
qianfan_ak="test-api-key",
qianfan_sk="test-secret-key",
)
assert cast(SecretStr, chat.qianfan_ak).get_secret_value() == "test-api-key"
assert cast(SecretStr, chat.qianfan_sk).get_secret_value() == "test-secret-key"
```
The point of the test would be to assert that the API key is indeed a secret, and not just cast it back and forth.
To be fair, the test suite for the Qianfan chat model does also check that the API key is masked with a `SecretStr` by capturing stdout.
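For comparison, a sketch of a test that actually exercises the masking (using pytest's `capsys`; it only asserts that the raw secrets never reach stdout):

```python
def test_api_keys_are_masked(capsys) -> None:
    chat = QianfanChatEndpoint(
        qianfan_ak="test-api-key",
        qianfan_sk="test-secret-key",
    )
    print(chat.qianfan_ak, chat.qianfan_sk)
    captured = capsys.readouterr()
    assert "test-api-key" not in captured.out
    assert "test-secret-key" not in captured.out
```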
@eyurtsev @hwchase17
### Suggestion:
PR to fix the issues | Issue: Flawed implementations of SecretStr for API keys | https://api.github.com/repos/langchain-ai/langchain/issues/14445/comments | 1 | 2023-12-08T13:45:01Z | 2024-02-02T14:32:30Z | https://github.com/langchain-ai/langchain/issues/14445 | 2,032,683,865 | 14,445 |
[
"hwchase17",
"langchain"
] | ### System Info
npm version: "^0.0.203"
MacOS
Bun version: 1.0.15+b3bdf22eb
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The following code will cause this error:
```
import { Pinecone } from '@pinecone-database/pinecone';
import { VectorDBQAChain } from 'langchain/chains';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { OpenAI } from 'langchain/llms/openai';
import { PineconeStore } from 'langchain/vectorstores/pinecone';
const pinecone = new Pinecone();
const indexKey = process.env.PINECONE_INDEX_KEY;
if (!indexKey) {
throw new Error('PINECONE_INDEX_KEY is not set.');
}
const pineconeIndex = pinecone.Index(indexKey);
export async function queryDocuments(query: string, returnSourceDocuments = false) {
const vectorStore = await PineconeStore.fromExistingIndex(
new OpenAIEmbeddings({
modelName: 'text-embedding-ada-002',
}),
{
pineconeIndex,
},
);
const model = new OpenAI({
modelName: 'gpt-4-1106-preview',
});
const chain = VectorDBQAChain.fromLLM(model, vectorStore, {
k: 5,
returnSourceDocuments,
});
return await chain.call({ query });
}
```
The embeddings have been created and confirmed to exist in the Pinecone console, e.g.:
<img width="1240" alt="Screenshot 2023-12-08 at 13 46 24" src="https://github.com/langchain-ai/langchain/assets/1304307/66c23c7e-916a-461d-b8f6-28a7fa460300">
### Expected behavior
I would expect it to query the vector DB and correctly prompt GPT-4 with the results. But instead, I get the following error:
```
? Enter your query what is the third wave of dam
Creating query for "what is the third wave of dam"...
499 | var _a;
500 | return __generator(this, function (_b) {
501 | switch (_b.label) {
502 | case 0:
503 | _a = this.transformer;
504 | return [4 /*yield*/, this.raw.json()];
^
SyntaxError: Unexpected end of JSON input
at /Users/andy/dev/runestone/node_modules/@pinecone-database/pinecone/dist/pinecone-generated-ts-fetch/runtime.js:504:46
at step (/Users/andy/dev/runestone/node_modules/@pinecone-database/pinecone/dist/pinecone-generated-ts-fetch/runtime.js:72:18)
at /Users/andy/dev/runestone/node_modules/@pinecone-database/pinecone/dist/pinecone-generated-ts-fetch/runtime.js:53:53
at /Users/andy/dev/runestone/node_modules/@pinecone-database/pinecone/dist/pinecone-generated-ts-fetch/runtime.js:47:9
at new Promise (:1:21)
at /Users/andy/dev/runestone/node_modules/@pinecone-database/pinecone/dist/pinecone-generated-ts-fetch/runtime.js:43:12
at /Users/andy/dev/runestone/node_modules/@pinecone-database/pinecone/dist/pinecone-generated-ts-fetch/runtime.js:498:16
at /Users/andy/dev/runestone/node_modules/@pinecone-database/pinecone/dist/pinecone-generated-ts-fetch/apis/VectorOperationsApi.js:405:46
at step (/Users/andy/dev/runestone/node_modules/@pinecone-database/pinecone/dist/pinecone-generated-ts-fetch/apis/VectorOperationsApi.js:84:18)
``` | Unexpected end of JSON | https://api.github.com/repos/langchain-ai/langchain/issues/14443/comments | 1 | 2023-12-08T12:47:37Z | 2024-03-18T16:07:49Z | https://github.com/langchain-ai/langchain/issues/14443 | 2,032,599,462 | 14,443 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hello Team,
We are using the OpenSearch vector DB to store embeddings, and we use these embeddings to retrieve similar documents during conversational retrieval. While checking the settings of the index created by `vector_db.add_text(page_content, metadatas)`, I saw `number_of_replicas` set to 5 and `number_of_shards` set to 1 (this is the default behaviour of `vector_db.add_text(page_content, metadatas)`).
**I want to pass `number_of_replicas` as 1** and shards as 1 whenever the index is created on OpenSearch. Can you please help me with this: how can I pass this replicas value myself? I am also adding code below for better understanding.
In the code below, **`vector_db.add_text()` at line 109** creates the index with these default settings. I just want to set this parameter manually, say to 1. Please let me know if you need any more information to understand my issue.

### Suggestion:
_No response_ | Issue: Opensearch manually assigned shards and replicas while using vector_db.add_text(page_contents,metatas) | https://api.github.com/repos/langchain-ai/langchain/issues/14442/comments | 1 | 2023-12-08T11:02:46Z | 2024-03-16T16:14:06Z | https://github.com/langchain-ai/langchain/issues/14442 | 2,032,443,629 | 14,442 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
How can I use a `PromptTemplate` in the code below?
```python
def chat_langchain(new_project_qa, query, not_uuid):
check = query.lower()
user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
# query="If the context is related to hi/hello with or without incorrect spelling then reply Hi! How can I assist you today?,else Search the entire context and provide formal and accurate answer for this query - {}. Explain the relevant information with important points, if necessary.If you don't find answer then give relavant answer else reply with text 'Sorry, I can't find the related information' ".format(check)
if check not in ['hi','hello','hey','hui','hiiii','hii','hiii','heyyy'] and not user_experience_inst.custom_prompt:
query = "Search the entire context and provide formal and accurate answer for this query - {}. Explain the relevant information with important points, if necessary.If you don't find answer then reply with text 'Sorry, I can't find the related information' otherwise give relavant answer".format(check)
elif check not in ['hi','hello','hey','hui','hiiii','hii','hiii','heyyy'] and user_experience_inst.custom_prompt:
query = f"{user_experience_inst.custom_prompt} {check}.If you don't find answer then reply with text 'Sorry, I can't find the related information'"
else:
query="Search the entire context and provide formal and accurate answer for this query - {}. Explain the relevant information with important points, if necessary.".format(check)
result = new_project_qa(query)
relevant_document = result['source_documents']
if relevant_document:
source = relevant_document[0].metadata.get('source', '')
# Check if the file extension is ".pdf"
file_extension = os.path.splitext(source)[1]
if file_extension.lower() == ".pdf":
source = os.path.basename(source)
# Retrieve the UserExperience instance using the provided not_uuid
user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
bot_ending = user_experience_inst.bot_ending_msg if user_experience_inst.bot_ending_msg is not None else ""
# Create the list_json dictionary
if bot_ending != '':
list_json = {
'bot_message': result['result'] + '\n\n' + str(bot_ending),
"citation": source
}
else:
list_json = {
'bot_message': result['result'] + str(bot_ending),
"citation": source
}
else:
# Handle the case when relevant_document is empty
list_json = {
'bot_message': result['result'],
'citation': ''
}
# Return the list_json dictionary
    return list_json
```
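One way to move those hard-coded query strings into a `PromptTemplate` (a sketch that reuses the wording above):

```python
from langchain.prompts import PromptTemplate

QUERY_PROMPT = PromptTemplate.from_template(
    "Search the entire context and provide a formal and accurate answer for this query - {question}. "
    "Explain the relevant information with important points, if necessary. "
    "If you don't find an answer then reply with the text 'Sorry, I can't find the related information', "
    "otherwise give a relevant answer."
)

query = QUERY_PROMPT.format(question=check)
result = new_project_qa(query)
```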
### Suggestion:
_No response_ | Issue: How Can I use Prompt Template? | https://api.github.com/repos/langchain-ai/langchain/issues/14441/comments | 1 | 2023-12-08T10:24:07Z | 2024-03-16T16:14:01Z | https://github.com/langchain-ai/langchain/issues/14441 | 2,032,387,502 | 14,441 |
[
"hwchase17",
"langchain"
] | ### Feature request
When using `custom_table_info` on the SQLDatabase instance together with `create_sql_agent`, the agent appears to disregard the reflected `sql_db_schema` output for those tables: it uses either the `custom_table_info` or the reflected schema, but not both. This is a problem when crucial information, such as which ID column references another table, cannot be expressed in the custom table info alone. There should be an option to use both the custom table information and the `sql_db_schema` output.
```python
table_info = {
    "invoice": "the customer_id in invoice table is referenced to customers table's company_id",
}
db = SQLDatabase(engine=dbengine, include_tables=["invoice", "customer"], custom_table_info=table_info)
```
Invoking: `sql_db_schema` with `customer, invoice`
```
CREATE TABLE customer (
    id SERIAL NOT NULL,
    key VARCHAR NOT NULL,
    company_id VARCHAR NOT NULL,
    company_name VARCHAR NOT NULL,
)

/*
1 rows from customer table:
id   key         company_id   company_name
670  CUST-0ab15  17           Aim Inc
*/

the customer_id in invoice table is referenced to customers table's company_id,
```
The agent output does not include the schema of the invoice table, so it generates wrong SQL queries.
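A workaround sketch, since `custom_table_info` replaces rather than augments the reflected schema for the listed tables: put the full DDL together with the note into the custom entry (the invoice columns below are placeholders):

```python
table_info = {
    "invoice": """
CREATE TABLE invoice (
    id SERIAL NOT NULL,
    customer_id VARCHAR NOT NULL,
    -- ... remaining columns ...
)
/* invoice.customer_id references customer.company_id */
""",
}
db = SQLDatabase(engine=dbengine, include_tables=["invoice", "customer"], custom_table_info=table_info)
```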
### Motivation
I need both `custom_table_info` and `sql_db_schema` to work together, because some metadata specific to my use case has to be supplied alongside the reflected schema.
### Your contribution
NO | custom_table_info along with sql_db_schema while using create_sql_agent | https://api.github.com/repos/langchain-ai/langchain/issues/14440/comments | 1 | 2023-12-08T10:10:16Z | 2024-03-16T16:13:56Z | https://github.com/langchain-ai/langchain/issues/14440 | 2,032,367,290 | 14,440 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
The Supabase vector store does not support setting `score_threshold` in `as_retriever`, despite it being showcased as an option in the `VectorStore` superclass docstring example.
https://github.com/langchain-ai/langchain/blob/a05230a4ba4dee591d3810440ce65e16860956ae/libs/langchain/langchain/vectorstores/supabase.py#L218
https://github.com/langchain-ai/langchain/blob/a05230a4ba4dee591d3810440ce65e16860956ae/libs/core/langchain_core/vectorstores.py#L596
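A possible interim workaround, assuming the store exposes `similarity_search_with_relevance_scores`, is to apply the threshold manually:

```python
docs_and_scores = vector_store.similarity_search_with_relevance_scores(query, k=8)
docs = [doc for doc, score in docs_and_scores if score >= 0.75]
```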
### Idea or request for content:
The VectoreStore superclass of SupabaseVectoreStore contains logic in `similarity_search_by_vector_with_relevance_scores` that could be used in the SupabaseVectorStore subclass to support the `score_threshold` parameter. | DOC: `SupabaseVectorStore` support for similarity `score_threshold` filtering in `as_retriever` | https://api.github.com/repos/langchain-ai/langchain/issues/14438/comments | 2 | 2023-12-08T09:48:57Z | 2024-03-17T16:10:06Z | https://github.com/langchain-ai/langchain/issues/14438 | 2,032,332,864 | 14,438 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.11
Langchain 0.0.348
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying LangChain's DoctranTextTranslator. However, I get an error message after running the code below.
Error:
```
doctran_docs[i] = await doc.translate(language=self.language).execute()
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: object Document can't be used in 'await' expression
```
```
from langchain.document_transformers import DoctranTextTranslator
from langchain.schema import Document
from dotenv import load_dotenv
import asyncio
load_dotenv()
sample_text = """[Generated with ChatGPT]
Confidential Document - For Internal Use Only
Date: July 1, 2023
Subject: Updates and Discussions on Various Topics
Dear Team,
I hope this email finds you well. In this document, I would like to provide you with some important updates and discuss various topics that require our attention. Please treat the information contained herein as highly confidential.
Security and Privacy Measures
As part of our ongoing commitment to ensure the security and privacy of our customers' data, we have implemented robust measures across all our systems. We would like to commend John Doe (email: [email protected]) from the IT department for his diligent work in enhancing our network security. Moving forward, we kindly remind everyone to strictly adhere to our data protection policies and guidelines. Additionally, if you come across any potential security risks or incidents, please report them immediately to our dedicated team at [email protected].
HR Updates and Employee Benefits
Recently, we welcomed several new team members who have made significant contributions to their respective departments. I would like to recognize Jane Smith (SSN: 049-45-5928) for her outstanding performance in customer service. Jane has consistently received positive feedback from our clients. Furthermore, please remember that the open enrollment period for our employee benefits program is fast approaching. Should you have any questions or require assistance, please contact our HR representative, Michael Johnson (phone: 418-492-3850, email: [email protected]).
Marketing Initiatives and Campaigns
Our marketing team has been actively working on developing new strategies to increase brand awareness and drive customer engagement. We would like to thank Sarah Thompson (phone: 415-555-1234) for her exceptional efforts in managing our social media platforms. Sarah has successfully increased our follower base by 20% in the past month alone. Moreover, please mark your calendars for the upcoming product launch event on July 15th. We encourage all team members to attend and support this exciting milestone for our company.
Research and Development Projects
In our pursuit of innovation, our research and development department has been working tirelessly on various projects. I would like to acknowledge the exceptional work of David Rodriguez (email: [email protected]) in his role as project lead. David's contributions to the development of our cutting-edge technology have been instrumental. Furthermore, we would like to remind everyone to share their ideas and suggestions for potential new projects during our monthly R&D brainstorming session, scheduled for July 10th.
Please treat the information in this document with utmost confidentiality and ensure that it is not shared with unauthorized individuals. If you have any questions or concerns regarding the topics discussed, please do not hesitate to reach out to me directly.
Thank you for your attention, and let's continue to work together to achieve our goals.
Best regards,
Jason Fan
Cofounder & CEO
Psychic
[email protected]
"""
documents = [Document(page_content=sample_text)]
qa_translator = DoctranTextTranslator(language="spanish", openai_api_model="gpt-3.5-turbo")
async def atransform_documents(docs):
return await qa_translator.atransform_documents(docs)
translated_document = asyncio.run(atransform_documents(documents))
print(translated_document[0].page_content)
```
### Expected behavior
It should return the translated text. | DoctranTextTranslator Is Not Working | https://api.github.com/repos/langchain-ai/langchain/issues/14437/comments | 1 | 2023-12-08T09:10:36Z | 2024-03-16T16:13:46Z | https://github.com/langchain-ai/langchain/issues/14437 | 2,032,270,431 | 14,437 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain version: 0.0.348
In the type hint for the `es_connection` variable of the `ElasticsearchChatMessageHistory` class, a module is used as a type.
@hwchase17 @eyurtsev
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import json
import logging
from time import time
from typing import TYPE_CHECKING, Any, Dict, List, Optional
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import (
BaseMessage,
message_to_dict,
messages_from_dict,
)
if TYPE_CHECKING:
from elasticsearch import Elasticsearch
logger = logging.getLogger(__name__)
class ElasticsearchChatMessageHistory(BaseChatMessageHistory):
"""Chat message history that stores history in Elasticsearch.
Args:
es_url: URL of the Elasticsearch instance to connect to.
es_cloud_id: Cloud ID of the Elasticsearch instance to connect to.
es_user: Username to use when connecting to Elasticsearch.
es_password: Password to use when connecting to Elasticsearch.
es_api_key: API key to use when connecting to Elasticsearch.
es_connection: Optional pre-existing Elasticsearch connection.
index: Name of the index to use.
session_id:Arbitrary key that is used to store the messages
of a single chat session.
"""
def __init__(
self,
index: str,
session_id: str,
*,
es_connection: Optional["**Elasticsearch**"] = None,
es_url: Optional[str] = None,
es_cloud_id: Optional[str] = None,
es_user: Optional[str] = None,
es_api_key: Optional[str] = None,
es_password: Optional[str] = None,
):
self.index: str = index
        self.session_id: str = session_id
```
### Expected behavior
import of module elasticsearch is not done properly | import error in elasticsearch memory module: Module cannot be used as a type | https://api.github.com/repos/langchain-ai/langchain/issues/14436/comments | 1 | 2023-12-08T09:09:41Z | 2024-03-16T16:13:41Z | https://github.com/langchain-ai/langchain/issues/14436 | 2,032,269,106 | 14,436 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain version:0.0.311
os:macOS 11.6
python: 3.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is a code sample of using the agent:
```
sys_p = self.create_sys_prompt()
tools = [
ProgrammerAgent(self.env),
DefaultAgent(),
]
agent_obj = StructuredChatAgent.from_llm_and_tools(
llm=self.env.llm,
tools=tools,
prefix=sys_p + PREFIX,
verbose=True,
)
agent = AgentExecutor.from_agent_and_tools(
agent=agent_obj,
tools=tools,
verbose=True,
)
task = f"""
Here is the requirement:
```Markdown
{requirement}
```
Please Implement the requirement.
"""
return agent.run(task)
```
This is the code of the ProgrammerAgent tool:
```
from typing import Any, Type
from langchain.tools import BaseTool
from pydantic import Field, BaseModel
from enviroment import Environment
class ProgramRequirementSchema(BaseModel):
task: str = Field(description="Coding task")
task_context: str = Field(description="Contextual background information for the task.")
project_path: str = Field(description="Project path")
class ProgrammerAgent(BaseTool):
name: str = "programmer_agent"
description: str = """
programmer agent is a agent that write code for a given coding task.
"""
args_schema: Type[ProgramRequirementSchema] = ProgramRequirementSchema
env: Environment = Field(default=None)
def __init__(self, env: Environment):
super().__init__()
self.env = env
def _run(self, task: str, task_context: str, project_path: str) -> Any:
result = "success"
return result
```
And this is the wrong action:
```
Action:
```json
{
"action": "programmer_agent",
"action_input": {
"task": {
"title": "Implement the requirement",
"description": "1. Update the `grpc.proto` file. 2. Design the database and write the create SQL. 3. Implement the database operation interface. 4. Implement the grpc interface."
},
"task_context": {
"title": "Context",
"description": "The project is built with golang, and the database used is relational database. The grpc interface is defined in `grpc.proto`."
},
"project_path": "golang.52tt.com/services/tt-rev/offering-room"
}
}
```
### Expected behavior
the value of action_input should be :
```json
{
"action": "programmer_agent",
"action_input": {
"task": "1. Update the `grpc.proto` file. 2. Design the database and write the create SQL. 3. Implement the database operation interface. 4. Implement the grpc interface.",
"task_context": "The project is built with golang, and the database used is relational database. The grpc interface is defined in `grpc.proto`.",
"project_path": "golang.52tt.com/services/tt-rev/offering-room"
}
}
``` | StructuredChatAgent did not provide the correct action input. | https://api.github.com/repos/langchain-ai/langchain/issues/14434/comments | 3 | 2023-12-08T08:15:05Z | 2023-12-08T10:27:18Z | https://github.com/langchain-ai/langchain/issues/14434 | 2,032,157,152 | 14,434 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain Version: 0.0.346
Python: 3.11.4
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.vectorstores import ElasticsearchStore
```
Results in
```
File "C:\Users\XXX\Desktop\Projects\XXX\api\controllers\Vector.py", line 5, in <module>
from langchain.vectorstores import ElasticsearchStore
ImportError: cannot import name 'ElasticsearchStore' from 'langchain.vectorstores'
```
or
```python
from langchain.vectorstores.elasticsearch import ElasticsearchStore
```
Results in
```
File "C:\Users\XXX\Desktop\Projects\XXX\api\controllers\Vector.py", line 5, in <module>
from langchain.vectorstores.elasticsearch import ElasticsearchStore
ModuleNotFoundError: No module named 'langchain.vectorstores.elasticsearch'
```
### Expected behavior
I am upgrading from `langchain==0.0.279` to `langchain==0.0.346`, and this is the issue that arose.
Expected behavior would be import successfully, new langchain version does not seems to be backward compatible for `ElasticsearchStore` | Bug: ImportError for ElasticsearchStore | https://api.github.com/repos/langchain-ai/langchain/issues/14431/comments | 3 | 2023-12-08T07:14:10Z | 2023-12-08T15:16:20Z | https://github.com/langchain-ai/langchain/issues/14431 | 2,032,065,241 | 14,431 |
[
"hwchase17",
"langchain"
] | ### System Info
Python ==3.11.3
pymilvus== 2.3.1
langchain==0.0.327
openai==0.28.1
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Description:
The issue revolves around the retrieval process being unable to filter data based on metadata, specifically the 'file_name' key in the metadata. The context is using Milvus as the database for the vector store.
Details:
Metadata Structure: The metadata comprises a list of dictionaries, where each dictionary holds key-value pairs containing metadata information. The 'file_name' attribute is utilized for filtration purposes.
Example Metadata List:
[{'page_number': '4', 'file_name': 'Apple_history.pdf', 'source_path': '.../Apple_history.pdf'}, ...]
Applied Options and Observations:
I attempted an approach that passes a filter with the 'file_name' directly through `search_kwargs`:
```python
filter_query = {"filter": {"file_name": "samsung.pdf"}, "k": self.top}
retriever = vectorstore.as_retriever(search_kwargs=filter_query)  # retrieval of top k results
result = retriever.get_relevant_documents(agent_query)
```
Similar observations were made where the retrieval fetched data unrelated to the specified 'samsung.pdf' filename.
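For what it's worth, the Milvus wrapper filters with a boolean `expr` string rather than a `filter` dict, so a sketch of the same intent would look like this (assuming `file_name` was ingested as a scalar field on the collection):

```python
retriever = vectorstore.as_retriever(
    search_kwargs={"k": self.top, "expr": 'file_name == "samsung.pdf"'}
)
result = retriever.get_relevant_documents(agent_query)
```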
### Expected behavior
The anticipated functionality was to filter the retrieval process based on the 'file_name' metadata attribute. In scenarios where there are no chunks associated with the specified 'file_name', the retrieval should ideally return no data. | Retrieval Inability to Filter Based on Metadata in Milvius Database | https://api.github.com/repos/langchain-ai/langchain/issues/14429/comments | 2 | 2023-12-08T04:37:35Z | 2024-03-17T16:10:02Z | https://github.com/langchain-ai/langchain/issues/14429 | 2,031,914,421 | 14,429 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain Version: 0.0.348
Output from `poetry env info`
**Virtualenv**
Python: 3.9.12
Implementation: CPython
Path: /Users/peternf/Desktop/langchain/libs/langchain/.venv
Executable: /Users/peternf/Desktop/langchain/libs/langchain/.venv/bin/python
Valid: True
**System**
Platform: darwin
OS: posix
Python: 3.9.12
Path: /Users/peternf/opt/miniconda3
Executable: /Users/peternf/opt/miniconda3/bin/python3.9
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
### Reason to Install litellm
@xavier-xia-99 and I are collaborating on finishing the tasks of #12165 for _libs/langchain/langchain/chat_models/litellm.py_. We have finished the implementation and were adding unit tests to verify our logic when we discovered that we need to include litellm for the unit tests, or `make test` will just skip the newly added tests. Yet when we try to add the optional dependency litellm according to the instructions provided in langchain/.github/CONTRIBUTING.md, it shows the following dependency conflict. Hence, we would appreciate any help in addressing this conflict or running the CI. Thank you!
### Steps to Reproduce the Behavior
1. Run `poetry add --optional litellm`
2. Error Message: <img width="720" alt="Screenshot 2023-12-07 at 5 50 43 PM" src="https://github.com/langchain-ai/langchain/assets/98713019/3e0d5730-2134-4e11-b901-fbda927bc796">
### Expected behavior
Output of litellm downloads in progress and successful installation of relevant packages. | Dependency Conflict between litellm and tiktoken | https://api.github.com/repos/langchain-ai/langchain/issues/14419/comments | 1 | 2023-12-07T23:10:21Z | 2024-03-16T16:13:31Z | https://github.com/langchain-ai/langchain/issues/14419 | 2,031,666,536 | 14,419 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm trying to add metadata so that each user's vectors are separated by a user id field. I searched a lot and couldn't find anything. I'm using the Zilliz vector store. I tried using the user id as the memory key, but that's not possible. Any ideas? The docs are automatically saved to and retrieved from the saved collection on Zilliz, so I can't edit the docs either; I'm just trying to attach metadata fields when the user chats and when memory is retrieved for context.
```
vectordb = Milvus.from_documents(
{} ,
embeddings,
connection_args={
"uri": ZILLIZ_CLOUD_URI,
"token": ZILLIZ_CLOUD_API_KEY, # API key, for serverless clusters which can be used as replacements for user and password
"secure": True,
},
)
# vectordb.
retriever = Milvus.as_retriever(vectordb,search_kwargs= {"k":15 , 'user_id': "a3"} ) # here we use userid with "a" for retreiving memory
# print(retriever)
memory= VectorStoreRetrieverMemory(retriever=retriever , memory_key="history" , metadata={"user_id": "a3"} )
chain = ConversationChain(llm=self.llm, memory=memory, verbose=True,prompt = PROMPT , metadata={"user_id": "a3"})
res = chain.predict(input=input_text)
# with chain_recorder as recording:
# llm_response = chain(input_text)
return res
```
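A sketch of one possible pattern (the metadata field name and the `expr` syntax are assumptions to verify against the Zilliz collection schema): attach the user id as document metadata when texts are added, then filter retrieval with a Milvus `expr` so each user's memory stays separate.
```python
# Sketch: per-user metadata at write time, per-user expr filter at read time.
vectordb.add_texts(
    ["user said hello"],                 # illustrative text
    metadatas=[{"user_id": "a3"}],
)
retriever = vectordb.as_retriever(
    search_kwargs={"k": 15, "expr": 'user_id == "a3"'}
)
memory = VectorStoreRetrieverMemory(retriever=retriever, memory_key="history")
```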
### Suggestion:
_No response_ | Issue: <Zilliz and Milvus metadata field and memory seperation> | https://api.github.com/repos/langchain-ai/langchain/issues/14412/comments | 1 | 2023-12-07T20:08:44Z | 2024-03-16T16:13:26Z | https://github.com/langchain-ai/langchain/issues/14412 | 2,031,444,278 | 14,412 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain version : Latest
```python
def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> Optional[bool]:
```
[This check](https://github.com/langchain-ai/langchain/blob/54040b00a4a05e81964a1a7f7edbf0b830d4395c/libs/langchain/langchain/vectorstores/faiss.py#L798) causes the issue.
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
db = FAISS.from_documents(text_pages, embeddings)
db.delete()
```
Error
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[51], line 1
----> 1 db.delete()
File /opt/conda/lib/python3.10/site-packages/langchain/vectorstores/faiss.py:799, in FAISS.delete(self, ids, **kwargs)
789 """Delete by ID. These are the IDs in the vectorstore.
790
791 Args:
(...)
796 False otherwise, None if not implemented.
797 """
798 if ids is None:
--> 799 raise ValueError("No ids provided to delete.")
800 missing_ids = set(ids).difference(self.index_to_docstore_id.values())
801 if missing_ids:
ValueError: No ids provided to delete.
```
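A stopgap that appears to work with the current API (a sketch, not a guaranteed behaviour): FAISS keeps its docstore ids in `index_to_docstore_id`, so passing all of them amounts to a "delete everything" call.
```python
# Sketch: delete every vector by handing over all known docstore ids.
all_ids = list(db.index_to_docstore_id.values())
db.delete(ids=all_ids)
```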
### Expected behavior
The index should be deleted without needing to pass an index `id`. | FAISS db.delete() says `ids` is required even when it is Optional | https://api.github.com/repos/langchain-ai/langchain/issues/14409/comments | 1 | 2023-12-07T19:22:56Z | 2024-03-17T16:09:57Z | https://github.com/langchain-ai/langchain/issues/14409 | 2,031,382,949 | 14,409 |
[
"hwchase17",
"langchain"
] | ### System Info
from langchain.llms import GooglePalm
from sqlalchemy import create_engine
from langchain.utilities import SQLDatabase
from langchain.llms import GooglePalm
from langchain_experimental.sql import SQLDatabaseChain
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
st.write(db_chain_sql_out.return_sql) is returning the boolean value "True" instead of the actual SQL statement generated by the model.
I am using Google Palm - is this the normal output?
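As far as I can tell, `return_sql` is a boolean configuration flag on SQLDatabaseChain rather than an attribute holding the generated query, so reading it just echoes the flag's value. A sketch of how the flag is typically used (assuming `llm` and `db` objects built from the imports above):
```python
# Sketch: ask the chain to return the generated SQL instead of executing it.
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, return_sql=True)
generated_sql = db_chain.run("How many employees are there?")  # illustrative question
st.write(generated_sql)
```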
### Expected behavior
Expecting model generated SQL | db_chain_sql_out.return_sql | https://api.github.com/repos/langchain-ai/langchain/issues/14404/comments | 4 | 2023-12-07T15:55:01Z | 2024-03-17T16:09:52Z | https://github.com/langchain-ai/langchain/issues/14404 | 2,031,053,527 | 14,404 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am using langchain 0.0.316 and trying to create an ElasticsearchStore to do some similarity_search.
However, whenever I try to create it (using from_documents or not) I get the following error :
`raise ValueError("check_hostname requires server_hostname")`
This is an SSL error, and I suspect it to be the problem, as I cannot use SSL. Somewhere else in the project, I connect with Python to Elastic using verify_certs=False and everything works perfectly.
Thus I tried to create ElasticsearchStore with the following arguments:
```
db = ElasticsearchStore(texts,
                        embedding,
                        es_url='...',
                        index_name='...',
                        ssl_verify={'verify_certs': False})
```
But I still have the error and nothing has changed.
How can I make langchain initialize Elastic without checking SSL certificates?
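One workaround that may be worth trying (a sketch; the host, index name, and embedding object are placeholders): build the `elasticsearch` client yourself with certificate verification disabled and hand it to the store through `es_connection`, bypassing the URL-based connection setup.
```python
from elasticsearch import Elasticsearch
from langchain.vectorstores.elasticsearch import ElasticsearchStore

# Sketch: pre-built client with verify_certs disabled.
es_client = Elasticsearch("https://my-es-host:9200", verify_certs=False, ssl_show_warn=False)
db = ElasticsearchStore(
    index_name="my-index",      # placeholder
    embedding=embedding,        # assumes an embedding object is already defined
    es_connection=es_client,
)
```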
### Suggestion:
_No response_ | Issue: ElasticsearchStore with ssl_verify = {'verify_certs':False} does not work | https://api.github.com/repos/langchain-ai/langchain/issues/14403/comments | 5 | 2023-12-07T15:46:15Z | 2024-02-05T02:29:06Z | https://github.com/langchain-ai/langchain/issues/14403 | 2,031,037,974 | 14,403 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain: 0.0.346
OpenAI: 1.3.7
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Simple script to authenticate to Azure with RBAC
```
from langchain.embeddings import AzureOpenAIEmbeddings
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
token_provider = get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default")
embeddings = AzureOpenAIEmbeddings(azure_endpoint='xxxxxxx', azure_ad_token_provider=token_provider)
```
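A possible interim workaround (a sketch; it assumes a short-lived process, since the token eventually expires): the provider returned by `get_bearer_token_provider` is a callable yielding a bearer token string, so it can be invoked once and the result passed as a static `azure_ad_token`.
```python
# Sketch: call the provider once and pass the resulting token directly.
embeddings = AzureOpenAIEmbeddings(
    azure_endpoint="xxxxxxx",          # same placeholder endpoint as above
    azure_ad_token=token_provider(),   # static token; recreate it when it expires
)
```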
### Expected behavior
Should authenticate, but it seems like the `azure_ad_token_provider` is not added to the values dict
langchain/embeddings/azure_openai.py line 80-86
```
values["azure_endpoint"] = values["azure_endpoint"] or os.getenv(
"AZURE_OPENAI_ENDPOINT"
)
values["azure_ad_token"] = values["azure_ad_token"] or os.getenv(
"AZURE_OPENAI_AD_TOKEN"
)
```
Other parameters are added to values, but not `azure_ad_token_provider` | AzureOpenAIEmbeddings cannot authenticate with azure_ad_token_provider | https://api.github.com/repos/langchain-ai/langchain/issues/14402/comments | 10 | 2023-12-07T15:35:19Z | 2024-03-21T08:22:18Z | https://github.com/langchain-ai/langchain/issues/14402 | 2,031,016,359 | 14,402 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hello community,
I'm currently working on a project that involves using the langchain library for natural language processing. I'm encountering an issue with the LLMChain class, and I'm hoping someone can help me troubleshoot.
I've initialized a Hugging Face pipeline and constructed a prompt using PromptTemplate. However, when I attempt to load a QA chain using the load_qa_chain function, I get a ValidationError related to the Runnable type. The error suggests that an instance of Runnable is expected, but it seems there's a mismatch.
Here's a simplified version of my code:
```
from langchain.prompts import PromptTemplate
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import HuggingFacePipeline
hf = HuggingFacePipeline.from_model_id(
model_id="ai-forever/rugpt3large_based_on_gpt2",
task="text-generation",
pipeline_kwargs={"max_new_tokens": 10},
)
prompt = """Question: {question}
Answer: {text}"""
# The next line is where the error occurs
chain = load_qa_chain(hf(prompt=prompt), chain_type="stuff")
```
```
ValidationError: 2 validation errors for LLMChain
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
```
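For comparison, a sketch of the usual pattern: `load_qa_chain` expects the LLM object itself (a `Runnable`), whereas `hf(prompt=prompt)` calls the pipeline and returns plain text, which is what appears to trigger the validation error. The `docs` list and the question below are placeholders.
```python
# Sketch: pass the HuggingFacePipeline LLM itself; the QA chain supplies its own prompt.
chain = load_qa_chain(hf, chain_type="stuff")
answer = chain.run(input_documents=docs, question="Your question here")  # docs: list of Documents you provide
```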
I have checked the documentation and versions of the libraries, but I'm still having trouble understanding and resolving the issue. Could someone please provide guidance on what might be causing this ValidationError and how I can address it?
Thank you in advance for your help!
### Suggestion:
_No response_ | Issue: <Trouble with langchain Library: Error in LLMChain Validation> | https://api.github.com/repos/langchain-ai/langchain/issues/14401/comments | 1 | 2023-12-07T15:17:37Z | 2024-03-17T16:09:46Z | https://github.com/langchain-ai/langchain/issues/14401 | 2,030,975,852 | 14,401 |
[
"hwchase17",
"langchain"
] | ### System Info
Hi Team,
I am trying to connect to SQL/CSV using HuggingFaceHub and I get a ValueError. This ValueError occurs even when I use the same example as given in https://python.langchain.com/docs/use_cases/qa_structured/sql, except that instead of OpenAI I am using HuggingFaceHub.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
import streamlit as st
from langchain.utilities import SQLDatabase
from langchain.llms import HuggingFaceHub
from langchain_experimental.sql import SQLDatabaseChain

st.title("SQL DB with Langchain")
# entering input through streamlit into the app for querying
input_text = st.text_input("enter the text for search")
# input_period = st.text_input("enter the period for which you need summarization")
# connecting to hugging face API
os.environ["HUGGINGFACEHUB_API_TOKEN"] = huggingface_write_key
# SQL
db = SQLDatabase.from_uri("sqlite:///Chinook.db")
repo_id = "google/flan-t5-xxl"
llm = HuggingFaceHub(
    repo_id=repo_id, model_kwargs={"temperature": 0.2}
)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
if input_text:
    st.write(db_chain.run(input_text))
```
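A narrowing-down step that may help (a sketch; the test prompt is arbitrary): call the Hub LLM directly, outside the SQL chain, to see whether the ValueError comes from the Hugging Face inference API itself or from the chain's SQL handling.
```python
# Sketch: isolate the LLM call from the chain.
print(llm("Write a SQL query that counts the rows in the Invoice table."))
```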
### Expected behavior
Expected to give output for the query thats run | value error when using huggingfacehub API | https://api.github.com/repos/langchain-ai/langchain/issues/14400/comments | 10 | 2023-12-07T14:05:46Z | 2024-04-25T11:22:42Z | https://github.com/langchain-ai/langchain/issues/14400 | 2,030,832,959 | 14,400 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
python3.10/site-packages/langchain/llms/bedrock.py:315: RuntimeWarning: coroutine 'AsyncCallbackManagerForLLMRun.on_llm_new_token' was never awaited
### Suggestion:
_No response_ | Issue:python3.10/site-packages/langchain/llms/bedrock.py:315: RuntimeWarning: coroutine 'AsyncCallbackManagerForLLMRun.on_llm_new_token' was never awaited | https://api.github.com/repos/langchain-ai/langchain/issues/14399/comments | 4 | 2023-12-07T13:59:30Z | 2023-12-08T02:24:40Z | https://github.com/langchain-ai/langchain/issues/14399 | 2,030,821,733 | 14,399 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain: 0.0.346
python: 3.11.7
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Error when run:
```
from langchain.tools import DuckDuckGoSearchRun
search = DuckDuckGoSearchRun()
search.run("Obama's first name?")
```
### Expected behavior
I got this error when run:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[c:\Users\thenh\OneDrive\M](file:///C:/Users/thenh/OneDrive/M)áy tính\demo\test.ipynb Cell 13 line 4
.....
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\langchain_core\tools.py:337, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
[334](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:334) try:
[335](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:335) tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
[336](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:336) observation = (
--> [337](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:337) self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
[338](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:338) if new_arg_supported
[339](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:339) else self._run(*tool_args, **tool_kwargs)
[340](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:340) )
[341](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:341) except ToolException as e:
[342](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:342) if not self.handle_tool_error:
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\langchain\tools\ddg_search\tool.py:37, in DuckDuckGoSearchRun._run(self, query, run_manager)
[31](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/tools/ddg_search/tool.py:31) def _run(
[32](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/tools/ddg_search/tool.py:32) self,
[33](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/tools/ddg_search/tool.py:33) query: str,
[34](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/tools/ddg_search/tool.py:34) run_manager: Optional[CallbackManagerForToolRun] = None,
[35](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/tools/ddg_search/tool.py:35) ) -> str:
[36](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/tools/ddg_search/tool.py:36) """Use the tool."""
---> [37](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/tools/ddg_search/tool.py:37) return self.api_wrapper.run(query)
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\langchain\utilities\duckduckgo_search.py:81, in DuckDuckGoSearchAPIWrapper.run(self, query)
[79](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:79) """Run query through DuckDuckGo and return concatenated results."""
[80](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:80) if self.source == "text":
---> [81](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:81) results = self._ddgs_text(query)
[82](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:82) elif self.source == "news":
[83](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:83) results = self._ddgs_news(query)
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\langchain\utilities\duckduckgo_search.py:48, in DuckDuckGoSearchAPIWrapper._ddgs_text(self, query, max_results)
[45](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:45) from duckduckgo_search import DDGS
[47](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:47) with DDGS() as ddgs:
---> [48](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:48) ddgs_gen = ddgs.text(
[49](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:49) query,
[50](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:50) region=self.region,
[51](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:51) safesearch=self.safesearch,
[52](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:52) timelimit=self.time,
[53](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:53) max_results=max_results or self.max_results,
[54](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:54) backend=self.backend,
[55](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:55) )
[56](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:56) if ddgs_gen:
[57](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:57) return [r for r in ddgs_gen]
TypeError: DDGS.text() got an unexpected keyword argument 'max_results'
```
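The TypeError above looks like a version mismatch: `DDGS.text()` only accepts `max_results` in newer `duckduckgo_search` releases than the one shown in the traceback. A quick check before patching site-packages (treat the upgrade suggestion as an assumption):
```python
# Sketch: confirm what the installed duckduckgo_search actually exposes.
# If max_results is missing from the signature, try: pip install -U duckduckgo-search
import inspect
from duckduckgo_search import DDGS
print(inspect.signature(DDGS.text))
```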
After removing _`max_results=max_results or self.max_results`_, I still got another error:
```
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
[c:\Users\thenh\OneDrive\M](file:///C:/Users/thenh/OneDrive/M)áy tính\demo\test.ipynb Cell 13 line 4
[1](vscode-notebook-cell:/c%3A/Users/thenh/OneDrive/M%C3%A1y%20t%C3%ADnh/demo/test.ipynb#X15sZmlsZQ%3D%3D?line=0) from langchain.tools import DuckDuckGoSearchRun
[2](vscode-notebook-cell:/c%3A/Users/thenh/OneDrive/M%C3%A1y%20t%C3%ADnh/demo/test.ipynb#X15sZmlsZQ%3D%3D?line=1) search = DuckDuckGoSearchRun()
----> [4](vscode-notebook-cell:/c%3A/Users/thenh/OneDrive/M%C3%A1y%20t%C3%ADnh/demo/test.ipynb#X15sZmlsZQ%3D%3D?line=3) search.run("who is newjeans")
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\langchain_core\tools.py:365, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
[363](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:363) except (Exception, KeyboardInterrupt) as e:
[364](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:364) run_manager.on_tool_error(e)
--> [365](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:365) raise e
[366](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:366) else:
[367](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:367) run_manager.on_tool_end(
[368](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:368) str(observation), color=color, name=self.name, **kwargs
[369](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:369) )
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\langchain_core\tools.py:337, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
[334](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:334) try:
[335](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:335) tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
[336](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:336) observation = (
--> [337](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:337) self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
[338](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:338) if new_arg_supported
[339](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:339) else self._run(*tool_args, **tool_kwargs)
[340](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:340) )
[341](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:341) except ToolException as e:
[342](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain_core/tools.py:342) if not self.handle_tool_error:
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\langchain\tools\ddg_search\tool.py:37, in DuckDuckGoSearchRun._run(self, query, run_manager)
[31](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/tools/ddg_search/tool.py:31) def _run(
[32](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/tools/ddg_search/tool.py:32) self,
[33](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/tools/ddg_search/tool.py:33) query: str,
[34](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/tools/ddg_search/tool.py:34) run_manager: Optional[CallbackManagerForToolRun] = None,
[35](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/tools/ddg_search/tool.py:35) ) -> str:
[36](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/tools/ddg_search/tool.py:36) """Use the tool."""
---> [37](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/tools/ddg_search/tool.py:37) return self.api_wrapper.run(query)
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\langchain\utilities\duckduckgo_search.py:81, in DuckDuckGoSearchAPIWrapper.run(self, query)
[79](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:79) """Run query through DuckDuckGo and return concatenated results."""
[80](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:80) if self.source == "text":
---> [81](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:81) results = self._ddgs_text(query)
[82](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:82) elif self.source == "news":
[83](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:83) results = self._ddgs_news(query)
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\langchain\utilities\duckduckgo_search.py:57, in DuckDuckGoSearchAPIWrapper._ddgs_text(self, query, max_results)
[48](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:48) ddgs_gen = ddgs.text(
[49](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:49) query,
[50](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:50) region=self.region,
(...)
[54](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:54) backend=self.backend,
[55](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:55) )
[56](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:56) if ddgs_gen:
---> [57](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:57) return [r for r in ddgs_gen]
[58](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:58) return []
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\langchain\utilities\duckduckgo_search.py:57, in <listcomp>(.0)
[48](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:48) ddgs_gen = ddgs.text(
[49](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:49) query,
[50](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:50) region=self.region,
(...)
[54](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:54) backend=self.backend,
[55](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:55) )
[56](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:56) if ddgs_gen:
---> [57](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:57) return [r for r in ddgs_gen]
[58](file:///C:/Program%20Files/Python311/Lib/site-packages/langchain/utilities/duckduckgo_search.py:58) return []
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\duckduckgo_search\duckduckgo_search.py:150, in DDGS.text(self, keywords, region, safesearch, timelimit, backend)
[134](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:134) """DuckDuckGo text search generator. Query params: https://duckduckgo.com/params
[135](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:135)
[136](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:136) Args:
(...)
[147](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:147)
[148](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:148) """
[149](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:149) if backend == "api":
--> [150](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:150) yield from self._text_api(keywords, region, safesearch, timelimit)
[151](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:151) elif backend == "html":
[152](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:152) yield from self._text_html(keywords, region, safesearch, timelimit)
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\duckduckgo_search\duckduckgo_search.py:203, in DDGS._text_api(self, keywords, region, safesearch, timelimit)
[201](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:201) for s in ("0", "20", "70", "120"):
[202](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:202) payload["s"] = s
--> [203](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:203) resp = self._get_url(
[204](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:204) "GET", "https://links.duckduckgo.com/d.js", params=payload
[205](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:205) )
[206](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:206) if resp is None:
[207](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:207) break
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\duckduckgo_search\duckduckgo_search.py:89, in DDGS._get_url(self, method, url, **kwargs)
[87](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:87) logger.warning(f"_get_url() {url} {type(ex).__name__} {ex}")
[88](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:88) if i >= 2 or "418" in str(ex):
---> [89](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:89) raise ex
[90](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:90) sleep(3)
[91](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:91) return None
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\duckduckgo_search\duckduckgo_search.py:82, in DDGS._get_url(self, method, url, **kwargs)
[78](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:78) resp = self._client.request(
[79](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:79) method, url, follow_redirects=True, **kwargs
[80](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:80) )
[81](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:81) if self._is_500_in_url(str(resp.url)) or resp.status_code == 202:
---> [82](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:82) raise httpx._exceptions.HTTPError("")
[83](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:83) resp.raise_for_status()
[84](file:///C:/Program%20Files/Python311/Lib/site-packages/duckduckgo_search/duckduckgo_search.py:84) if resp.status_code == 200:
HTTPError:
``` | TypeError: DDGS.text() got an unexpected keyword argument 'max_results' AND HTTPError: | https://api.github.com/repos/langchain-ai/langchain/issues/14397/comments | 1 | 2023-12-07T13:55:48Z | 2023-12-07T14:38:09Z | https://github.com/langchain-ai/langchain/issues/14397 | 2,030,814,970 | 14,397 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```python
def delete_confluence_embeddings(file_path, persist_directory, not_uuid):
    chroma_db = chromadb.PersistentClient(path=persist_directory)
    collection = chroma_db.get_or_create_collection(name="langchain")
    project_instance = ProjectName.objects.get(not_uuid=not_uuid)
    confluence_data = json.loads(project_instance.media).get('confluence', [])
    confluence_url = project_instance.url
    username = project_instance.confluence_username
    api_key = base64.b64decode(project_instance.api_key).decode('utf-8')
    space_keys = [space_data['space_key'] for space_data in project_instance.space_key]
    documents = []
    loader = ConfluenceLoader(
        url=confluence_url,
        username=username,
        api_key=api_key
    )
    for space_key in space_keys:
        documents.extend(loader.load(space_key=space_key, limit=100))
    page_info = []
    for document in documents:
        page_id = document.metadata.get('id')
        page_title = document.metadata.get('title')
        formatted_title = page_title.replace(' ', '+')
        page_info.append({"id": page_id, "title": formatted_title})
        # print(f"Page ID: {page_id}, Page Title: {formatted_title}")
    for entry in page_info:
        entry_file_path = f"{file_path}/pages/{entry['id']}"
        ids = collection.get(where={"source": entry_file_path})['ids']
        collection.delete(where={"source": entry_file_path}, ids=ids)
    for entry in page_info:
        entry_file_path = f"{file_path}/pages/{entry['id']}/{entry['title']}"
        ids = collection.get(where={"source": entry_file_path})['ids']
        collection.delete(where={"source": entry_file_path}, ids=ids)
    chroma_db.delete_collection(name="langchain")
    print("Delete successfully")
```
How can I delete embeddings for one particular space only?
The file path is url/spaces/space_key/.
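One approach that may work (a sketch; the metadata field name and the space key are placeholders): record the `space_key` in each chunk's metadata at ingest time, so Chroma can delete per space with a plain metadata filter instead of rebuilding page paths.
```python
# Sketch: tag chunks with their space at ingest time...
for doc in documents:
    doc.metadata["space_key"] = space_key

# ...then delete everything belonging to one space later.
collection.delete(where={"space_key": "MYSPACE"})  # "MYSPACE" is a placeholder
```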
### Suggestion:
_No response_ | Issue: How to delete particular space embeddings for a confluence projects | https://api.github.com/repos/langchain-ai/langchain/issues/14396/comments | 1 | 2023-12-07T13:14:35Z | 2024-03-16T16:13:01Z | https://github.com/langchain-ai/langchain/issues/14396 | 2,030,741,752 | 14,396 |
[
"hwchase17",
"langchain"
] | ### System Info
When I use Retrieval QA with a custom prompt on the official llama2 model, it gives back an empty result: the retriever works, but the LLM fails to give back a response. If I pass the query to the chain directly, without any custom prompt, it works as expected.
## Versions
Python - 3.10
Langchain - 0.0.306
@hwchase17 and @agola11 please take a look at this issue
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=final_retriever,
    chain_type_kwargs={"prompt": prompt_template},
    return_source_documents=True
)
```
If I initialize the chain like this, it fails without returning a response.
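One thing worth double-checking (a sketch; the template wording is illustrative): the "stuff" chain fills a prompt with `context` and `question` variables, and a custom prompt that omits either of them, or names them differently, can end in an empty completion.
```python
from langchain.prompts import PromptTemplate

# Sketch: a custom prompt using the variable names the stuff chain expects.
prompt_template = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "Use the following context to answer the question.\n\n"
        "{context}\n\nQuestion: {question}\nAnswer:"
    ),
)
```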
### Expected behavior
```python
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=final_retriever,
    return_source_documents=True
)
```
If I initialize the chain like this, it works as expected. | Retrieval QA chain does not work | https://api.github.com/repos/langchain-ai/langchain/issues/14395/comments | 1 | 2023-12-07T13:08:45Z | 2024-03-16T16:12:56Z | https://github.com/langchain-ai/langchain/issues/14395 | 2,030,731,840 | 14,395
[
"hwchase17",
"langchain"
] | ### System Info
Google colab
### Who can help?
@agola
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
!pip install lmql==0.0.6.6 langchain==0.0.316 openai==0.28.1 -q
import lmql
import aiohttp
import os
os.environ["OPENAI_API_KEY"] = ""
from langchain import LLMChain, PromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.llms import OpenAI
# Setup the LM to be used by langchain
llm = OpenAI(temperature=0.9)
human_message_prompt = HumanMessagePromptTemplate(
prompt=PromptTemplate(
template="What is a good name for a company that makes {product}?",
input_variables=["product"],
)
)
chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])
chat = ChatOpenAI(temperature=0.9)
chain = LLMChain(llm=chat, prompt=chat_prompt_template)
# Run the chain
chain.run("colorful socks")
```
gives error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-17-c7d901a7e281>](https://localhost:8080/#) in <cell line: 25>()
23
24 # Run the chain
---> 25 chain.run("colorful socks")
11 frames
[/usr/local/lib/python3.10/dist-packages/langchain/chat_models/openai.py](https://localhost:8080/#) in _create_retry_decorator(self)
300
301 def _combine_llm_outputs(self, llm_outputs: List[Optional[dict]]) -> dict:
--> 302 overall_token_usage: dict = {}
303 for output in llm_outputs:
304 if output is None:
AttributeError: module 'openai' has no attribute 'error'
```
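`openai.error` only exists in the pre-1.0 `openai` package layout, which is what `langchain==0.0.316` still expects; the pinned `openai==0.28.1` should provide it, so the error suggests the Colab runtime resolved a different (1.x) build, perhaps one pulled in before the pin or by another dependency. A quick check (a sketch):
```python
# Sketch: confirm which openai the runtime actually imports.
import openai
print(openai.__version__)  # a 1.x version here would explain the missing `error` attribute
# If it prints 1.x, reinstall openai==0.28.1 and restart the runtime so the old import cache is dropped.
```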
### Expected behavior
show results | AttributeError: module 'openai' has no attribute 'error' | https://api.github.com/repos/langchain-ai/langchain/issues/14394/comments | 1 | 2023-12-07T12:47:28Z | 2024-03-18T16:07:39Z | https://github.com/langchain-ai/langchain/issues/14394 | 2,030,689,372 | 14,394 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
How does ConversationBufferMemory work with RouterChain? For example, if I wanted to create a chat application, I would need memory to store the conversations; how does that work with RouterChain?
I'm currently using the same implementation that is shown in the documentation.
Please respond as soon as possible.
thank you:))
### Suggestion:
_No response_ | Issue: <how does langchain's routerchain work with conversationbuffermemory> | https://api.github.com/repos/langchain-ai/langchain/issues/14392/comments | 14 | 2023-12-07T10:39:13Z | 2024-06-13T16:07:42Z | https://github.com/langchain-ai/langchain/issues/14392 | 2,030,447,654 | 14,392 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain v 0.0.344
pydantic v 2.5.2
pydantic_code v 2.14.5
python v 3.10.13
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I noticed that the **OutputFixingParser** class does not work when applied to a **PydanticOutputParser** class.
Something has probably changed in **Pydantic**.
Doing step-by-step debugging, I saw that in **PydanticOutputParser**, at line 32 (see below)...

...the exception caught is indeed a **ValidationError**, but _it is not the same_ **ValidationError**...
The **ValidationError** expected from that `try..except` block is of this type.

While the **ValidationError** raised is of this other type.

### Expected behavior
I therefore imagine that the **LangChain** code needs to be updated to also handle the new exception (since the old one belongs to a "**v1**" package). | OutputFixingParser does not work with PydanticOutputParser | https://api.github.com/repos/langchain-ai/langchain/issues/14387/comments | 2 | 2023-12-07T09:01:20Z | 2024-03-17T16:09:31Z | https://github.com/langchain-ai/langchain/issues/14387 | 2,030,232,645 | 14,387 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.346
python==3.11.6
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi there!
I'm currently exploring the index feature in Langchain to load documents into the vector store upon my app startup.
However, I've encountered an issue where the index doesn't delete old documents when utilizing Redis as the vector store. After some investigation, I discovered that the `delete` function in `langchain.vectorstores.redis.base.Redis` is a static method, which poses a limitation—it cannot access instance variables, including the essential `key_prefix`. Without the `key_prefix`, Redis is unable to delete documents correctly.
This leads me to question why the `delete` method of the Redis vector store is static. I've noticed that other vector stores, such as Pinecone, do not have a static `delete` function and seem to handle this differently.
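For completeness, a possible stopgap (a sketch; the `doc:<index_name>` key prefix and the `redis_url` keyword are assumptions to verify, e.g. with `redis-cli --scan`): rebuild the fully prefixed keys yourself and pass them to the static `delete`, so the missing `key_prefix` no longer matters.
```python
# Sketch: reconstruct prefixed keys before calling the static delete().
prefixed_ids = [f"doc:{index_name}:{doc_id}" for doc_id in stale_ids]  # stale_ids: ids to remove
Redis.delete(ids=prefixed_ids, redis_url=redis_url)
```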
### Expected behavior
index with Redis as vector store should delete documents correctly.
| index with redis as vector store cannot delete documents | https://api.github.com/repos/langchain-ai/langchain/issues/14383/comments | 2 | 2023-12-07T08:09:30Z | 2024-03-13T21:06:56Z | https://github.com/langchain-ai/langchain/issues/14383 | 2,030,135,709 | 14,383 |