issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
found_response_schemas = [
ResponseSchema(name="answer", description="answer to the user's question"),
ResponseSchema(
name="found",
description="whether the model could find the proper answers or not.",
),
]
found_output_parser = StructuredOutputParser.from_response_schemas(found_response_schemas)
format_instructions = found_output_parser.get_format_instructions()
loader = CSVLoader(file_path='myReference.csv',
csv_args={
'delimiter': ',',
'quotechar': '"',
'fieldnames': ["Category","Unit","Equivalent", "Grams", "Ounces"]
})
data = loader.load()
embeddings = VertexAIEmbeddings(model_name='textembedding-gecko@003')
vectorstore = Chroma.from_documents(data, embeddings)
retriever = vectorstore.as_retriever()
template = """Use the following pieces of context to answer the question at the end.
- If you think you need more information to find proper answer, ask user to clarify.
- If you cannot find a proper answer from the reference, just say we don't have a data from the reference.
Use three sentences maximum and keep the answer as concise as possible.
{context}
Question: {question}
Helpful Answer: {format_instructions}
"""
rag_prompt_custom = PromptTemplate(
template=template,
input_variables=["context", "question"],
partial_variables={"format_instructions": format_instructions},
output_parser=found_output_parser
)
context = conv_retriever.get_relevant_documents(message)
rag_chain = rag_prompt_custom | llm | found_output_parser
response = rag_chain.invoke({"context": context, "question":message})
```
### Description
I would like to build a cooking converter bot on top of my reference document. However, I found that the prompt and RAG chain designed above cannot handle mathematical scaling of ingredients.
For example, suppose the reference contains the data point '1 cup of rice is 6.5 ounces'.
When I asked 'how many ounces is 1 cup of rice?', it correctly said "1 cup of uncooked long-grain white rice is equivalent to 6.5 ounces." But when I asked 'how many ounces is 2 cups of rice?', it said "I apologize, but the provided context does not contain information about the weight of 2 cups of rice."
### System Info
langchain==0.0.353
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | mathematical scaling of RAG chain | https://api.github.com/repos/langchain-ai/langchain/issues/16181/comments | 1 | 2024-01-18T05:51:52Z | 2024-01-22T23:01:53Z | https://github.com/langchain-ai/langchain/issues/16181 | 2,087,559,416 | 16,181 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
```python
import os

import qdrant_client
from dotenv import load_dotenv
from langchain.chains import RetrievalQA
from langchain.embeddings.huggingface import HuggingFaceInstructEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import Qdrant

os.environ['OPENAI_API_KEY'] = "mykey"

# memory=

def get_vector_store():
    client = qdrant_client.QdrantClient(
        os.getenv('QDRANT_HOST'),
    )
    embeddings = HuggingFaceInstructEmbeddings(model_name="all-MiniLM-L6-v2")
    vector_store = Qdrant(
        client=client,
        collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
        embeddings=embeddings,
    )
    return vector_store


def main():
    load_dotenv()
    vectorstore = get_vector_store()
    qa = RetrievalQA.from_chain_type(
        llm=OpenAI(
            temperature=0.7,
            max_tokens=100,
        ),
        chain_type="stuff",
        retriever=vectorstore.as_retriever(
            search_type="similarity_score_threshold",
            search_kwargs={"score_threshold": 0.1},
        ),
    )
    while True:
        user_question = input("ask:")
        if user_question == "bye":
            break
        answer = qa.invoke(user_question)
        print(f"Ai: {answer}")


if __name__ == "__main__":
    main()
```
### Idea or request for content:
How can I embed chat logs into the specified collection 'QDRANT_COLLECTION_NAME' in Qrdant, and what approach should I use to write the chain for short-term memory? | How can I embed chat logs into the specified collection 'QDRANT_COLLECTION_NAME' in Qrdant, and what approach should I use to write the chain for short-term memory? | https://api.github.com/repos/langchain-ai/langchain/issues/16179/comments | 1 | 2024-01-18T05:22:40Z | 2024-04-25T16:12:43Z | https://github.com/langchain-ai/langchain/issues/16179 | 2,087,527,966 | 16,179 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
none
### Description
In the HOME directory of the langchain source code, execute the following command:
poetry install --with lint,typing,test
However, the following errors are thrown, and I have a few questions to ask:
(1) Why does poetry perform a downgrade operation on modules like langchain-core, and how can the downgrade be avoided?
(2) Why can't the module langchain-monorepo be found?
(3) How do I build the langchain source code so that I can understand how langchain works and contribute code to it?
My development environment:
Python: Built on conda, Python version 3.8.18
Langchain: The master branch on GitHub.
Error message:
(langchain) [xiaobai@xiaobai langchain-master]$ poetry install --with lint,typing,test
Installing dependencies from lock file
Package operations: 0 installs, 4 updates, 0 removals
• Downgrading langchain-core (0.1.10 /home/xiaobai/data/pro/python/langchain-master/libs/core -> 0.1.3 /home/xiaobai/data/pro/python/langchain-master/libs/core)
• Downgrading langchain-community (0.0.12 /home/xiaobai/data/pro/python/langchain-master/libs/community -> 0.0.6 /home/xiaobai/data/pro/python/langchain-master/libs/community)
• Downgrading langchain (0.1.0 /home/xiaobai/data/pro/python/langchain-master/libs/langchain -> 0.0.352 /home/xiaobai/data/pro/python/langchain-master/libs/langchain)
• Downgrading langchain-experimental (0.0.49 /home/xiaobai/data/pro/python/langchain-master/libs/experimental -> 0.0.47 /home/xiaobai/data/pro/python/langchain-master/libs/experimental)
Installing the current project: langchain-monorepo (0.0.1)
The current project could not be installed: No file/folder found for package langchain-monorepo
If you do not want to install the current project use --no-root
### System Info
Python: Built on conda, Python version 3.8.18
Langchain: The master branch on GitHub.
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | No file/folder found for package langchain-monorepo | https://api.github.com/repos/langchain-ai/langchain/issues/16170/comments | 3 | 2024-01-18T01:24:10Z | 2024-07-01T16:04:49Z | https://github.com/langchain-ai/langchain/issues/16170 | 2,087,297,421 | 16,170 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
def createAgent():
pinecone.init(environment="gcp-starter")
llm = ChatOpenAI(temperature=0, model_name="gpt-4-1106-preview")
embeddings = OpenAIEmbeddings()
docsearch_ch = Chroma(persist_directory="../chroma_db", embedding_function=embeddings,
collection_name="company")
docsearch = Pinecone.from_existing_index(index_name="company", embedding=embeddings)
companyQa = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=docsearch_ch.as_retriever(kwargs={"k": 15})
)
tools = [
Tool(
name="Company QA",
description="Useful to get informations about Company",
func=companyQa.run
)
]
return initialize_agent(
tools=tools, llm=llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
```
### Description
I have Chroma index and Pinecone index created from exactly the same documents. It is a company website scraped through Apify. Below code is used to create both indices:
```
splitter = RecursiveCharacterTextSplitter(
chunk_size=500,
chunk_overlap=50,
separators= ["\n", "", " ", "\n\n"])
docs = splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
Pinecone.from_documents(docs, embeddings, index_name="company")
Chroma.from_documents(docs, embeddings, persist_directory="./chroma_db", collection_name="company")
```
The problem is that Chroma seems very limited in capabilities compared to Pinecone. I have provided the example agent code that I am using.
When we use `docsearch_ch` and ask the prompt `What are some clients of the Company?`, we receive `To find out the clients of the Company, you would need to contact Company directly or visit their official website or check their press releases for announcements of new partnerships or clients. Company QA does not have access to this information.`
When we use `docsearch`, which is based on Pinecone, the same prompt returns `Some clients of the company include industry leaders such as Demandbase, New Relic, Nubank, Rapid7, and Skyscanner.`
If we try to do the similarity search manually using the below code:
```
print(docsearch_ch.similarity_search("List of the company clients.", k =10))
print(docsearch.similarity_search("List of the company clients.", k =10))
```
Both of the vector stores will return more or less correct results (although it seems that Pinecone's results are better), but more importantly both of them will return the Document with the following content:
```
page_content='by industry leaders worldwide, such as Demandbase, New Relic, Nubank, Rapid7, and Skyscanner.'
```
When I try to debug the code and setup the breakpoint in the [BaseCombineDocumentsChain](https://github.com/langchain-ai/langchain/blob/ca014d5b04b1d73fd8f0fe224def98a82600c991/libs/langchain/langchain/chains/combine_documents/base.py#L136) the inputs for Chroma based retriever are empty. Seems like this isn't the expected behaviour.
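As an aside (not necessarily the root cause, but worth double-checking): in the agent code above the Chroma retriever is built with `kwargs={"k": 15}`, which does not appear to be a supported argument of `as_retriever`; the documented parameter is `search_kwargs`, for example:
```python
# Hedged suggestion: pass search_kwargs so the retriever actually uses k=15.
retriever = docsearch_ch.as_retriever(search_kwargs={"k": 15})
```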
### System Info
langchain==0.0.342
langchain-community==0.0.13
langchain-core==0.0.7
langchain-experimental==0.0.49
Mac OS M1 Ventura 13.6
Python 3.9.6
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Chroma and Pinecone provide totally different results even when created with the same document set. | https://api.github.com/repos/langchain-ai/langchain/issues/16166/comments | 3 | 2024-01-18T00:16:00Z | 2024-05-03T16:06:00Z | https://github.com/langchain-ai/langchain/issues/16166 | 2,087,233,605 | 16,166 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
I have noticed that the prompt produced by LangChain when using ```ChatOllama(model='mixtral')``` was not in the proper format.
This can be replicated using the following:
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
##messages example
messages = [
{"role": "system", "content": "You are a chef"},
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
##converting messages example to LC format
messages_lc = [(m['role'],m['content']) for m in messages]
```
For the proper format:
```
tokenizer.apply_chat_template(messages,tokenize=False)
```
This would produce:
```
"<s>[INST] <<SYS>>\nYou are a chef\n<</SYS>>\n\nWhat is your favourite condiment? [/INST] Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen! </s><s>[INST] Do you have mayonnaise recipes? [/INST]"
```
Where as the LangChain format:
```
from langchain_community.chat_models import ChatOllama
from langchain.prompts import ChatPromptTemplate
chat = ChatOllama(model='mixtral')
chat_resp = (ChatPromptTemplate.from_messages(messages_lc) | chat_tmp ).invoke({},{'callbacks': [ConsoleCallbackHandler()]})
```
I then copied the tokens sent to the LLM (using the ConsoleCallbackHandler) into ```tokens``` and decoded them:
```
tokenizer.decode(tokens)
```
To receive this:
```
" [INST] <<SYS>> You are a chef <</SYS>>\n[INST] What is your favourite condiment? [/INST]\nWell, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!\n[INST] Do you have mayonnaise recipes? [/INST] [/INST]
```
### Description
I'm trying to use LangChain with ChatOllama.
Output tokens decoded were:
```
" [INST] <<SYS>> You are a chef <</SYS>>\n[INST] What is your favourite condiment? [/INST]\nWell, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!\n[INST] Do you have mayonnaise recipes? [/INST] [/INST]
```
Where the proper format is:
```
"<s>[INST] <<SYS>>\nYou are a chef\n<</SYS>>\n\nWhat is your favourite condiment? [/INST] Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen! </s><s>[INST] Do you have mayonnaise recipes? [/INST]"
```
### System Info
langchain==0.0.350
langchain-community==0.0.3
langchain-core==0.1.0
Python 3.11.4
MacOS Sonoma
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async | Improper chat_template when using Ollama | https://api.github.com/repos/langchain-ai/langchain/issues/16165/comments | 3 | 2024-01-17T23:17:06Z | 2024-04-25T16:23:12Z | https://github.com/langchain-ai/langchain/issues/16165 | 2,087,170,268 | 16,165 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The bug was found by trying to install [jupyter-ai](https://github.com/jupyterlab/jupyter-ai) via conda-forge.
```conda install -c conda-forge jupyter_ai```
Basically, there is an incompatibility between the latest version of langchain and geopandas.
### Description
Based on the bug report posted on the jupyter-ai github https://github.com/jupyterlab/jupyter-ai/issues/557#issue-2062998660 , it looks like langchain enforces a version of geopandas below 0.14. Could this be relaxed to allow for >0.14? There are many features of modern geopandas that make GIS workflows so much easier, and the current enforcement version prevents langchain-dependent repositories from making full use of GIS-adjacent stacks.
### System Info
The incompatibility arose while using an M1 apple silicon laptop.
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Geopandas and Langchain blocking issue. | https://api.github.com/repos/langchain-ai/langchain/issues/16156/comments | 2 | 2024-01-17T19:08:48Z | 2024-07-31T04:26:52Z | https://github.com/langchain-ai/langchain/issues/16156 | 2,086,808,660 | 16,156 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
pip install langchain langchain-community langchain-core langchain-experimental
```
```
from langchain.agents import create_spark_sql_agent
from langchain_community.agent_toolkits import SparkSQLToolkit
from langchain_community.utilities.spark_sql import SparkSQL
from langchain_openai import ChatOpenAI
from pyspark.sql import SparkSession
```
### Description
I get errors related to dependencies when trying to follow the Spark toolkit examples from the documentation.
For example:
https://python.langchain.com/docs/integrations/toolkits/spark_sql
```
ImportError: cannot import name 'TypeAliasType' from 'typing_extensions' (/databricks/python/lib/python3.10/site-packages/typing_extensions.py)
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
File <command-2000110168240505>, line 1
----> 1 from langchain.agents import create_spark_sql_agent
2 from langchain_community.agent_toolkits import SparkSQLToolkit
3 from langchain_community.utilities.spark_sql import SparkSQL
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-b2a0b64a-b43b-40f8-a165-ff3aac48e3b4/lib/python3.10/site-packages/langchain/agents/__init__.py:34
31 from pathlib import Path
32 from typing import Any
---> 34 from langchain_community.agent_toolkits import (
35 create_json_agent,
36 create_openapi_agent,
37 create_pbi_agent,
38 create_pbi_chat_agent,
39 create_spark_sql_agent,
40 create_sql_agent,
41 )
42 from langchain_core._api.path import as_import_path
44 from langchain.agents.agent import (
45 Agent,
```
### System Info
langchain==0.1.1
langchain-community==0.0.13
langchain-core==0.1.11
langchain-experimental==0.0.49
Python 3.10
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Dependency issues with Spark toolkit examples | https://api.github.com/repos/langchain-ai/langchain/issues/16151/comments | 1 | 2024-01-17T17:37:23Z | 2024-04-24T16:52:26Z | https://github.com/langchain-ai/langchain/issues/16151 | 2,086,661,037 | 16,151 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
llm = SagemakerEndpoint(
    endpoint_name=endpoint_name,
    endpoint_kwargs={"CustomAttributes": "accept_eula=true,InferenceComponentName=" + inference_component_name},
    region_name=region_name,
    credentials_profile_name=profile_name,
    model_kwargs=parameters,
    content_handler=content_handler,
)
print(str(llm.endpoint_kwargs))

db = SQLDatabase.from_uri(REDSHIFT_ENDPOINT)
db_chain = SQLDatabaseSequentialChain.from_llm(
    llm, db, verbose=True, use_query_checker=True
)

try:
    results = db_chain.run(QUESTION_01)
    print(results)
except (ProgrammingError, ValueError, DataError) as exc:
    print(f"\n\n{exc}")
```
### Description
I am using a SageMaker Flan-T5 XL model; querying the model directly works fine, but when I try to use LangChain to query a Redshift DB, I get this error:
----> " Error raised by inference endpoint: An error occurred (ValidationError) when calling the InvokeEndpoint operation: Inference Component Name header is required for endpoints to which you plan to deploy inference components. Please include Inference Component Name header or consider using SageMaker models."
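One possible direction (a sketch only; it assumes the installed boto3 supports `InferenceComponentName` as a top-level `invoke_endpoint` argument and that `SagemakerEndpoint` forwards `endpoint_kwargs` verbatim to that call) is to pass the component name as its own key rather than embedding it in `CustomAttributes`, reusing the variables from the example above:
```python
# Sketch: InferenceComponentName as a separate invoke_endpoint argument.
llm = SagemakerEndpoint(
    endpoint_name=endpoint_name,
    endpoint_kwargs={
        "CustomAttributes": "accept_eula=true",
        "InferenceComponentName": inference_component_name,
    },
    region_name=region_name,
    credentials_profile_name=profile_name,
    model_kwargs=parameters,
    content_handler=content_handler,
)
```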
### System Info
model jumpstart-dft-hf-text2text-flan-t5 xxl
Python 3.11.6
boto3 1.34.19
langchain 0.1.0
langchain-community 0.0.12
langchain-core 0.1.10
langchain-experimental 0.0.49
sentence-transformers 2.2.2
SQLAlchemy 1.4.48
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Inference Component Name header is required (SageMaker RedShift Flan T5) | https://api.github.com/repos/langchain-ai/langchain/issues/16150/comments | 2 | 2024-01-17T17:11:42Z | 2024-01-22T23:34:55Z | https://github.com/langchain-ai/langchain/issues/16150 | 2,086,621,372 | 16,150 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code:
```
from langchain.vectorstores import PGVector
filter = {
"source": {
"or": [{"contains": "xlsx"}, {"contains": "pdf"}]
}
}
vector_db = PGVector(connection_string=connection_string, embedding_function=embeddings)
vector_db.as_retriever(search_type="similarity", search_kwargs={"k": 5, "filter": filter})
```
won't work because the list is not unpacked when passed to
filter_by_metadata = sqlalchemy.or_(or_clauses) in https://github.com/langchain-ai/langchain/blob/e5cf1e2414a22c7465f0f47bd661ece183e88924/libs/community/langchain_community/vectorstores/pgvector.py#L548
The code throws the error
`ArgumentError: SQL expression for WHERE/HAVING role expected, got [, ].`
### Description
* I am trying to filter documents based on metadata during retrieval
* I want to pass a list to filter metadata via an "or" statement.
* I think it should be `filter_by_metadata = sqlalchemy.or_(*or_clauses)` in https://github.com/langchain-ai/langchain/blob/e5cf1e2414a22c7465f0f47bd661ece183e88924/libs/community/langchain_community/vectorstores/pgvector.py#L548
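For illustration, here is the proposed one-line change in context (a sketch of the suggested fix, not a merged patch):
```python
import sqlalchemy

# Current (broken): the list itself is passed as a single clause.
# filter_by_metadata = sqlalchemy.or_(or_clauses)

# Proposed: unpack the list so each clause becomes a separate argument.
filter_by_metadata = sqlalchemy.or_(*or_clauses)
```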
### System Info
langchain==0.0.353
langchain-community==0.0.13
langchain-core==0.1.11
Python 3.10.13
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Missing unpack operator for or_clause in pgvector document filter | https://api.github.com/repos/langchain-ai/langchain/issues/16146/comments | 2 | 2024-01-17T15:18:55Z | 2024-01-17T17:19:16Z | https://github.com/langchain-ai/langchain/issues/16146 | 2,086,414,888 | 16,146 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
There is no documentation about QAGenerationChain in langchain.evaluation.qa
I used `from langchain.evaluation.qa import QAGenerateChain` to run evaluation using LLM-generated examples.
I tried to find documentation regarding the QAGenerateChain and found none.
Even in [api.python.langchain.com/](https://api.python.langchain.com/en/latest/search.html?q=QAGenerationChain#) site there is documentation regarding `langchain.chains.qa_generation.base.QAGenerationChain` but nothing about `from langchain.evaluation.qa import QAGenerateChain`
### Idea or request for content:
I believe there is more work that needs to be done. Please fill a documentation or a use case of `QAGenerateChain` | DOC: No documentation about QAGenerationChain in langchain.evaluation.qa | https://api.github.com/repos/langchain-ai/langchain/issues/16145/comments | 1 | 2024-01-17T14:46:57Z | 2024-04-24T16:52:22Z | https://github.com/langchain-ai/langchain/issues/16145 | 2,086,351,099 | 16,145 |
[
"hwchase17",
"langchain"
]
| ### Feature request
like this:
```python
from langchain.output_parsers import OutputFixingParser

fix_parser = OutputFixingParser.from_llm(llm, parser)
chain = chat_template | llm | parser | fix_parser
```
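For reference, a minimal sketch of how this can already be approximated today: `OutputFixingParser` is itself an output parser (and therefore a runnable), so it can simply replace the strict parser as the last step of the chain instead of being piped after it. The `Joke` schema below is only illustrative, and `llm` / `chat_template` are reused from the snippet above:
```python
from langchain.output_parsers import OutputFixingParser, PydanticOutputParser
from langchain_core.pydantic_v1 import BaseModel

class Joke(BaseModel):  # illustrative schema only
    setup: str
    punchline: str

parser = PydanticOutputParser(pydantic_object=Joke)
fix_parser = OutputFixingParser.from_llm(llm=llm, parser=parser)

# The fixing parser wraps the strict parser and re-asks the LLM on a parse failure,
# so it can be the final LCEL step.
chain = chat_template | llm | fix_parser
```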
### Motivation
I think output fixing should be part of the process of generating content in a specific format.
### Your contribution
I'm sorry. | i want to add OutputFixingParser in LCEL chain | https://api.github.com/repos/langchain-ai/langchain/issues/16140/comments | 1 | 2024-01-17T12:35:21Z | 2024-04-24T16:49:52Z | https://github.com/langchain-ai/langchain/issues/16140 | 2,086,101,220 | 16,140 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The chain is defined as:
```python
retriever_chain = create_retriever_chain(llm, retriever, use_chat_history)
_get_k_or_less_documents = partial(get_k_or_less_documents, k=k)
context = (
RunnableMap(
{
"question": lambda x: x["question"],
"memory": memory.load_memory_variables,
}
)
| RunnableMap(
{
"context": (
retriever_chain
| _get_k_or_less_documents
| reorder_documents
| format_docs
),
"question": lambda x: x["question"],
"chat_history": lambda x: x["memory"]["chat_history"],
}
)
)
prompt = ChatPromptTemplate.from_messages(
messages=[
("system", SYSTEM_ANSWER_QUESTION_TEMPLATE),
MessagesPlaceholder(variable_name="chat_history"),
("human", "{question}"),
]
)
response_synthesizer = prompt | llm
response_chain = context | response_synthesizer
```
### Description
I am trying to get back the source documents used for my RAG application. However, I cannot find any way to retrieve the documents. I am using `chain.astream` to stream through my chain. The relevant code and chains are given below.
```python
async def generate_response():
input = {"question": data.message}
response = ""
async for token in chain.astream(input=input):
yield token.content
response += token.content
memory.save_context(input, {"output": response})
```
The chain is defined exactly as in the Example Code section above.
Any kind of help is highly appreciated.
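For illustration only (a sketch reusing the helper names from the chain above, with the memory step omitted for brevity; it has not been tested against this exact chain), one common pattern is to keep the retrieved documents as a key in the dict flowing through the chain with `RunnablePassthrough.assign`, so the caller gets both the answer and the sources from a single call:
```python
from langchain_core.runnables import RunnablePassthrough

chain_with_sources = (
    RunnablePassthrough.assign(
        source_documents=retriever_chain | _get_k_or_less_documents | reorder_documents
    )
    | RunnablePassthrough.assign(context=lambda x: format_docs(x["source_documents"]))
    | RunnablePassthrough.assign(answer=prompt | llm)
)

result = chain_with_sources.invoke({"question": "...", "chat_history": []})
# result["answer"] is the model response, result["source_documents"] the documents used.
```
When streaming such a chain, the chunks arrive as dicts, so the incremental answer should be read from the "answer" key while the documents appear under their own key.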
### System Info
langchain 0.1.0
langchain-community 0.0.11
langchain-core 0.1.9
langchain-openai 0.0.2
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async | Return Source Documents in LECL | https://api.github.com/repos/langchain-ai/langchain/issues/16136/comments | 5 | 2024-01-17T10:43:29Z | 2024-02-06T04:32:30Z | https://github.com/langchain-ai/langchain/issues/16136 | 2,085,909,062 | 16,136 |
[
"hwchase17",
"langchain"
]
| ### Feature request
We should be able to pass extra parameters to `ElasticsearchStore`, either as kwargs or as predefined parameters.
For example, to allow usage of `RequestsHttpNode`, which honours the `https_proxy` / `http_proxy` / `no_proxy` env variables.
### Motivation
In some cases, all external calls to the internet / Elasticsearch (on cloud) must go through a proxy, otherwise they are rejected by the company firewall.
By default, the `Elasticsearch` class from https://github.com/elastic/elasticsearch-py does not honour the proxy env variables, and you need to set `node_class` to `RequestsHttpNode`.
### Your contribution
Instead of using `ElasticsearchStore` from langchain directly, I duplicated the file locally and hardcoded the `node_class` in the `Elasticsearch` init.
``` python
from elastic_transport import RequestsHttpNode
# https://github.com/langchain-ai/langchain/blob/e5cf1e2414a22c7465f0f47bd661ece183e88924/libs/community/langchain_community/vectorstores/elasticsearch.py#L593
es_client = elasticsearch.Elasticsearch(
**connection_params,
node_class=RequestsHttpNode,
headers={"user-agent": ElasticsearchStore.get_user_agent()},
)
```
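As a possible interim workaround (a sketch only, assuming the installed `ElasticsearchStore` version exposes the `es_connection` parameter), the client can be built by the caller with the proxy-aware node class and handed to the store, which avoids keeping a patched copy of the file. The endpoint, index name, and `my_embeddings` below are placeholders:
```python
from elastic_transport import RequestsHttpNode
from elasticsearch import Elasticsearch
from langchain_community.vectorstores import ElasticsearchStore

# Build the client manually so that RequestsHttpNode honours the proxy env vars.
es_client = Elasticsearch(
    "https://my-cluster.example.com:9243",  # hypothetical endpoint
    api_key="...",
    node_class=RequestsHttpNode,
    headers={"user-agent": ElasticsearchStore.get_user_agent()},
)

# Hand the pre-built client to the store instead of letting it create its own.
store = ElasticsearchStore(
    index_name="my-index",        # hypothetical index
    embedding=my_embeddings,      # any Embeddings implementation
    es_connection=es_client,
)
```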
I am sure there's a better way and more global way to pass parameters to `Elasticsearch()` | ElasticsearchStore does not accept proxy | https://api.github.com/repos/langchain-ai/langchain/issues/16135/comments | 2 | 2024-01-17T10:17:26Z | 2024-05-08T16:07:04Z | https://github.com/langchain-ai/langchain/issues/16135 | 2,085,861,615 | 16,135 |
[
"hwchase17",
"langchain"
]
|
```
below is my code :-
def regenerate_custom_prompt(new_project_qa,chroma_db_path,query,name,not_uuid):
check = query.lower()
relevant_document = new_project_qa.get_relevant_documents(query=query)
context_text="\n\n---\n\n".join([doc.page_content for doc in relevant_document])
# print(context_text,"context_text")
user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
greetings = ['hi', 'hello', 'hey', 'hui', 'hiiii', 'hii', 'hiii', 'heyyy']
if check in greetings:
custom_prompt_template = f"""
Just simply reply with "Hello {name}! How can I assist you today?"
"""
elif check not in greetings and user_experience_inst.custom_prompt:
custom_prompt_template = f"""Answer the question based only on following context: ```{context_text} ```
You are a chatbot designed to provide answers to User's Questions:```{check}```, delimited by triple backticks.
Generate your answer to match the user's requirements: {user_experience_inst.custom_prompt}
If you encounter a question for which you don't know the answer, please respond with 'I don't know' and refrain from making up an answer.
- Before saying 'I don't know,' please re-verify your vector store to ensure the answer is not present in the database.
Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, feel free to ask for clarification.
User's Question: ```{check}```
AI Answer:"""
else:
# Create the custom prompt template
custom_prompt_template = """Generate your response exclusively from the provided context: {context_text}. You function as a chatbot specializing in delivering detailed answers to the User's Question: ```{check} ```, enclosed within triple backticks.
Generate your answer in points in the following format:
1. Point no 1
1.1 Its subpoint in details
1.2 More information if needed.
2. Point no 2
2.1 Its subpoint in details
2.2 More information if needed.
…
N. Another main point.
If you encounter a question for which you don't know the answer based on the predefined points, please respond with 'I don't know' and refrain from making up an answer.
However, if the answer is not present in the predefined points, then Provide comprehensive information related to the user's query.
Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, you can ask for clarification.
User's Question: ```{check} ```
AI Answer:"""
# Create the PromptTemplate
custom_prompt = ChatPromptTemplate.from_template(custom_prompt_template)
formatted_prompt = custom_prompt.format(context_text=context_text, check=check)
print(formatted_prompt,"-------------------------------")
embedding = OpenAIEmbeddings()
vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
llm = ChatOpenAI(model="gpt-3.5-turbo-16k",temperature=0.1)
### mmr stands for Maximum marginal relevance
retriever = vectordb.as_retriever(search_type="similarity", search_kwargs={"k":2})
qa = MultiQueryRetriever.from_llm(llm=llm, retriever=retriever)
return qa
```
| How is multiquery retreiver implemented with RAG? | https://api.github.com/repos/langchain-ai/langchain/issues/16134/comments | 1 | 2024-01-17T10:01:04Z | 2024-05-08T16:06:59Z | https://github.com/langchain-ai/langchain/issues/16134 | 2,085,829,150 | 16,134 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Allow manual definition of embedding column length in PGVector
### Motivation
Right now the embedding column in PGVector is initialised with:
`embedding: Vector = sqlalchemy.Column(Vector(None))`
This won't allow indexing of the DB (see https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/howto-optimize-performance-pgvector).
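For illustration (a hypothetical sketch of the requested behaviour, not the current implementation), declaring the column with a concrete dimension is what allows pgvector to index it, e.g. 1536 dimensions for OpenAI embeddings:
```python
import sqlalchemy
from pgvector.sqlalchemy import Vector

# Hypothetical: dimension supplied at init time instead of Vector(None),
# so an ivfflat/hnsw index can be created on the column.
embedding: Vector = sqlalchemy.Column(Vector(1536))
```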
### Your contribution
I can open a PR shortly to fix it | Implement vector length definition at init time in PGVector for indexing | https://api.github.com/repos/langchain-ai/langchain/issues/16132/comments | 2 | 2024-01-17T09:31:57Z | 2024-04-27T16:24:43Z | https://github.com/langchain-ai/langchain/issues/16132 | 2,085,775,436 | 16,132 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
https://python.langchain.com/docs/langgraph
The paragraph under "Examples":
>
> Modifications
>
> We also have a lot of examples highlighting how to slightly modify the base chat agent executor. These all build off the [getting started notebook](https://python.langchain.com/docs/examples/chat_agent_executor_with_function_calling/base.ipynb) so it is recommended you start with that first.
The link is not working.
### Idea or request for content:
_No response_ | DOC: Dead link in langgraph docs | https://api.github.com/repos/langchain-ai/langchain/issues/16131/comments | 1 | 2024-01-17T09:28:12Z | 2024-04-24T16:42:57Z | https://github.com/langchain-ai/langchain/issues/16131 | 2,085,768,721 | 16,131 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
db = SQLDatabase.from_uri(
'vertica+vertica_python://user:[email protected]:5433/verticadst',
schema='myschema'
)
```
### Description
When I use the above snippet to create an SQLDatabase object for vertica, I get the below error:
> sqlalchemy.exc.ProgrammingError: (vertica_python.errors.MissingSchema) Severity: ERROR, Message: Schema "pg_catalog" does not exist
This makes sense because Vertica does not have a pg_catalog schema, but how can I create the object without the reflection step that tries to read from pg_catalog?
### System Info
sqlalchemy-vertica-python~=0.6.3
langchain==0.1.0
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | SQLDatabase.from_uri Throws an error when used with Vertica | https://api.github.com/repos/langchain-ai/langchain/issues/16129/comments | 4 | 2024-01-17T09:09:24Z | 2024-04-30T16:28:51Z | https://github.com/langchain-ai/langchain/issues/16129 | 2,085,735,570 | 16,129 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code follows the example from [langchain docs](https://python.langchain.com/docs/use_cases/extraction#option-2-parsing).
```python
class Person(BaseModel):
person_name: str
person_height: int
person_hair_color: str
dog_breed: Optional[str]
dog_name: Optional[str]
class People(BaseModel):
"""Identifying information about all people in a text."""
people: Sequence[Person]
query = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde. Alex has a poodle named Rex."""
# Set up a parser + inject instructions into the prompt template
parser = PydanticOutputParser(pydantic_object=People)
prompt = PromptTemplate(
template="Follow the given instructions and perform the task mentioned in the context of given information. \n{format_instructions}\n Context is:\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()},
)
input_to_model=prompt.format_prompt(query=query)
print(input_to_model.to_string())
# output=FLAN.generate_text(input_to_model.to_string())
output=LLAMA.generate_text(input_to_model.to_string())
print(output)
parser.parse(output)
```
### Description
* I'm trying to use PydanticOutputParser for structured output.
* In case an optional field is missing from the JSON output, PydanticOutputParser raises a validation error saying a required field is missing (a possible cause is sketched below).
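(For context: in Pydantic v2, an `Optional[...]` annotation without a default value is still a required field. A sketch of one way to make the dog fields truly optional is to give them explicit defaults, in which case the model output shown below validates:)
```python
from typing import Optional, Sequence
from pydantic import BaseModel

class Person(BaseModel):
    person_name: str
    person_height: int
    person_hair_color: str
    # Explicit defaults make a missing key valid instead of a "Field required" error.
    dog_breed: Optional[str] = None
    dog_name: Optional[str] = None

class People(BaseModel):
    """Identifying information about all people in a text."""
    people: Sequence[Person]
```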
Output from model:
```json
```
{
"people": [
{
"person_name": "Alex",
"person_height": 5,
"person_hair_color": "blonde",
"dog_breed": "poodle",
"dog_name": "Rex"
},
{
"person_name": "Claudia",
"person_height": 6,
"person_hair_color": "brunette"
}
]
}
```
```
Complete Error:
```json
{
"name": "ValidationError",
"message": "2 validation errors for People\npeople.1.dog_breed\n Field required [type=missing, input_value={'person_name': 'Claudia'...hair_color': 'brunette'}, input_type=dict]\n For further information visit https://errors.pydantic.dev/2.5/v/missing\npeople.1.dog_name\n Field required [type=missing, input_value={'person_name': 'Claudia'...hair_color': 'brunette'}, input_type=dict]\n For further information visit https://errors.pydantic.dev/2.5/v/missing",
"stack": "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m\n\u001b[1;31mValidationError\u001b[0m Traceback (most recent call last)\nCell \u001b[1;32mIn[70], line 33\u001b[0m\n\u001b[0;32m 30\u001b[0m \u001b[38;5;66;03m# output=FLAN.generate_text(input_to_model.to_string())\u001b[39;00m\n\u001b[0;32m 32\u001b[0m \u001b[38;5;28mprint\u001b[39m(output)\n\u001b[1;32m---> 33\u001b[0m \u001b[43mparser\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mparse\u001b[49m\u001b[43m(\u001b[49m\u001b[43moutput\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 35\u001b[0m \u001b[38;5;66;03m# print(parser.get_format_instructions())\u001b[39;00m\n\nFile \u001b[1;32mc:\\Users\\user\\repos\\.venv\\Lib\\site-packages\\langchain\\output_parsers\\pydantic.py:30\u001b[0m, in \u001b[0;36mPydanticOutputParser.parse\u001b[1;34m(self, text)\u001b[0m\n\u001b[0;32m 28\u001b[0m json_str \u001b[38;5;241m=\u001b[39m match\u001b[38;5;241m.\u001b[39mgroup()\n\u001b[0;32m 29\u001b[0m json_object \u001b[38;5;241m=\u001b[39m json\u001b[38;5;241m.\u001b[39mloads(json_str, strict\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mFalse\u001b[39;00m)\n\u001b[1;32m---> 30\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mpydantic_object\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mparse_obj\u001b[49m\u001b[43m(\u001b[49m\u001b[43mjson_object\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 32\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m (json\u001b[38;5;241m.\u001b[39mJSONDecodeError, ValidationError) \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[0;32m 33\u001b[0m name \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mpydantic_object\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__name__\u001b[39m\n\nFile \u001b[1;32mc:\\Users\\user\\repos\\.venv\\Lib\\site-packages\\typing_extensions.py:2499\u001b[0m, in \u001b[0;36mdeprecated.__call__.<locals>.wrapper\u001b[1;34m(*args, **kwargs)\u001b[0m\n\u001b[0;32m 2496\u001b[0m \u001b[38;5;129m@functools\u001b[39m\u001b[38;5;241m.\u001b[39mwraps(arg)\n\u001b[0;32m 2497\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mwrapper\u001b[39m(\u001b[38;5;241m*\u001b[39margs, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs):\n\u001b[0;32m 2498\u001b[0m warnings\u001b[38;5;241m.\u001b[39mwarn(msg, category\u001b[38;5;241m=\u001b[39mcategory, stacklevel\u001b[38;5;241m=\u001b[39mstacklevel \u001b[38;5;241m+\u001b[39m \u001b[38;5;241m1\u001b[39m)\n\u001b[1;32m-> 2499\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43marg\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\nFile \u001b[1;32mc:\\Users\\user\\repos\\.venv\\Lib\\site-packages\\pydantic\\main.py:1027\u001b[0m, in \u001b[0;36mBaseModel.parse_obj\u001b[1;34m(cls, obj)\u001b[0m\n\u001b[0;32m 1021\u001b[0m \u001b[38;5;129m@classmethod\u001b[39m\n\u001b[0;32m 1022\u001b[0m \u001b[38;5;129m@typing_extensions\u001b[39m\u001b[38;5;241m.\u001b[39mdeprecated(\n\u001b[0;32m 1023\u001b[0m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mThe `parse_obj` method is deprecated; use `model_validate` instead.\u001b[39m\u001b[38;5;124m'\u001b[39m, category\u001b[38;5;241m=\u001b[39mPydanticDeprecatedSince20\n\u001b[0;32m 1024\u001b[0m )\n\u001b[0;32m 1025\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m 
\u001b[38;5;21mparse_obj\u001b[39m(\u001b[38;5;28mcls\u001b[39m: \u001b[38;5;28mtype\u001b[39m[Model], obj: Any) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m Model: \u001b[38;5;66;03m# noqa: D102\u001b[39;00m\n\u001b[0;32m 1026\u001b[0m warnings\u001b[38;5;241m.\u001b[39mwarn(\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mThe `parse_obj` method is deprecated; use `model_validate` instead.\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;167;01mDeprecationWarning\u001b[39;00m)\n\u001b[1;32m-> 1027\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mcls\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mmodel_validate\u001b[49m\u001b[43m(\u001b[49m\u001b[43mobj\u001b[49m\u001b[43m)\u001b[49m\n\nFile \u001b[1;32mc:\\Users\\user\\repos\\.venv\\Lib\\site-packages\\pydantic\\main.py:503\u001b[0m, in \u001b[0;36mBaseModel.model_validate\u001b[1;34m(cls, obj, strict, from_attributes, context)\u001b[0m\n\u001b[0;32m 501\u001b[0m \u001b[38;5;66;03m# `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks\u001b[39;00m\n\u001b[0;32m 502\u001b[0m __tracebackhide__ \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mTrue\u001b[39;00m\n\u001b[1;32m--> 503\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mcls\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m__pydantic_validator__\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mvalidate_python\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 504\u001b[0m \u001b[43m \u001b[49m\u001b[43mobj\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstrict\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstrict\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mfrom_attributes\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mfrom_attributes\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcontext\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcontext\u001b[49m\n\u001b[0;32m 505\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n\n\u001b[1;31mValidationError\u001b[0m: 2 validation errors for People\npeople.1.dog_breed\n Field required [type=missing, input_value={'person_name': 'Claudia'...hair_color': 'brunette'}, input_type=dict]\n For further information visit https://errors.pydantic.dev/2.5/v/missing\npeople.1.dog_name\n Field required [type=missing, input_value={'person_name': 'Claudia'...hair_color': 'brunette'}, input_type=dict]\n For further information visit https://errors.pydantic.dev/2.5/v/missing"
}
```
### System Info
Langchain:
```
langchain==0.1.0
langchain-community==0.0.10
langchain-core==0.1.8
```
platform:
```
Windows 11 Enterprise Version 10.0.22621 Build 22621
```
python :
```
Python 3.11.7
```
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | PydanticOutputParser ` ValidationError: Field required [type=missing,' for an Optional field | https://api.github.com/repos/langchain-ai/langchain/issues/16123/comments | 2 | 2024-01-17T06:51:10Z | 2024-06-08T16:09:10Z | https://github.com/langchain-ai/langchain/issues/16123 | 2,085,521,161 | 16,123 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
```python
import os

from langchain_community.chat_models import ChatOpenAI
from langchain.schema import Document
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Qdrant
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from qdrant_client import QdrantClient, models

os.environ['OPENAI_API_KEY'] = "mykey"

embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
client = QdrantClient(host="192.168.10.31", port=6333)
collection_name = "test117"

client.create_collection(
    collection_name=collection_name,
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
    optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
)

vectorstore = Qdrant(
    client=client,
    embedding=embeddings,
    location=":memory:",
    collection_name=collection_name,
)

llm = ChatOpenAI(
    temperature=0.7,
    model="gpt-3.5-turbo",
    max_tokens=100
)

memory = ConversationBufferMemory()
conversation = ConversationChain(llm=llm, memory=memory, verbose=True,)

collection_info = client.get_collection(collection_name)

while True:
    chat_history = memory.load_memory_variables({})
    question = input('ask:')
    question_vector = embeddings.embed_query(question)[0]
    search_results = client.search(
        collection_name=collection_name,
        query_vector=models.NamedSparseVector(
            name=collection_name,
            vector=models.SparseVector(
                indices=question_vector.indices,
                values=question_vector.values,
            ),
        ),
        limit=1,
    )
    if search_results and search_results[0].payload:
        response_vector = search_results[0].payload.vector
        response_str = vectorstore.retrieve_text(response_vector)
    else:
        response = conversation.predict(input=f"Q: {question}\nA:")
        response_str = response.choices[0].text.strip()
        response_vector = embeddings.embed_query(response_str)[0]
        vectorstore.add_documents([Document(vector=response_vector, text=response_str)])
    print("respones:", response_str)
    memory.add_memory_variables({"input": question, "output": response_str})
    if question.lower() == 'bye':
        break
```
### Idea or request for content:
Why can't test117 be retrieved by vector and provide replies for reference to achieve long-term memory?
Is it that the retriever is not written well? | Why can't test117 be retrieved by vector and provide replies for reference to achieve long-term memory? | https://api.github.com/repos/langchain-ai/langchain/issues/16121/comments | 1 | 2024-01-17T06:29:33Z | 2024-04-24T16:47:45Z | https://github.com/langchain-ai/langchain/issues/16121 | 2,085,484,831 | 16,121 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
llm = VertexAI(model_name='gemini-pro', max_output_tokens=512, temperature=0.2)
data = loader.load()
embeddings = VertexAIEmbeddings(model_name='textembedding-gecko@003')
vectorstore = Chroma.from_documents(data, embeddings)
retriever = vectorstore.as_retriever()
template = """
Use the following pieces of context to answer the question at the end.
- If you think you need more information to find proper answer, ask user to clarify.
- If you cannot find a proper answer from the reference, just say we don't have a data from the reference.
Use three sentences maximum and keep the answer as concise as possible.
Context: {context}
Question: {question}
Helpful Answer:
**Resource:** [reference source name]
"""
found_response_schemas = [
ResponseSchema(name="answer", description="answer to the user's question"),
ResponseSchema(
name="found",
description="whether the model could find the proper answers or not.",
),
]
found_output_parser = StructuredOutputParser.from_response_schemas(found_response_schemas)
format_instructions = found_output_parser.get_format_instructions()
rag_prompt_custom = ChatPromptTemplate.from_template(template)
rag_chain = (
{"context": retriever, "question": RunnablePassthrough()}
| rag_prompt_custom
| llm
| StrOutputParser()
)
response = rag_chain.invoke(message)
print(response)
```
### Description
I am trying to add a StructuredOutputParser with a specific ResponseSchema.
What I would like to add is a boolean-style "found" value indicating whether the LLM could find the proper information in the reference document during RAG (one way to wire this up is sketched below).
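A minimal sketch of one way to wire the already-defined parser into the chain (reusing the variables from the example above; not verified against Gemini's exact output format): inject `format_instructions` via `.partial()` and let `found_output_parser` replace `StrOutputParser` as the final step.
```python
rag_prompt_custom = ChatPromptTemplate.from_template(
    template + "\n{format_instructions}"
).partial(format_instructions=format_instructions)

rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | rag_prompt_custom
    | llm
    | found_output_parser  # returns a dict like {"answer": "...", "found": "..."}
)

response = rag_chain.invoke(message)
```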
### System Info
langchain==0.0.353
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | structured output parser on ChatPromptTemplate.from_template() | https://api.github.com/repos/langchain-ai/langchain/issues/16114/comments | 11 | 2024-01-17T00:34:20Z | 2024-01-17T05:09:20Z | https://github.com/langchain-ai/langchain/issues/16114 | 2,085,185,440 | 16,114 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
This code:
```python
def parse(output):
# If no function was invoked, return to user
if "function_call" not in output.additional_kwargs:
return AgentFinish(return_values={"output": output.content}, log=output.content)
# Parse out the function call
function_call = output.additional_kwargs["function_call"]
name = function_call["name"]
inputs = json.loads(function_call["arguments"])
# If the Response function was invoked, return to the user with the function inputs
if name == "Response":
return AgentFinish(return_values=inputs, log=str(function_call))
# Otherwise, return an agent action
else:
return AgentActionMessageLog(
tool=name, tool_input=inputs, log="", message_log=[output]
)
def creat_ai_search_new_agent(embeddings, llm, class_name_rich):
ai_search_endpoint = get_ai_search_endpoint()
ai_search_admin_key = get_ai_search_admin_key()
vector_store = AzureSearch(
azure_search_endpoint=ai_search_endpoint,
azure_search_key=ai_search_admin_key,
index_name=class_name_rich,
embedding_function=embeddings.embed_query,
content_key=content_key
)
"""Retriever that uses `Azure Cognitive Search`."""
azure_search_retriever = AzureSearchVectorStoreRetriever(
vectorstore=vector_store,
search_type=search_type,
k=k,
top=n
)
retriever_tool = create_retriever_tool(
azure_search_retriever,
"Retriever",
"Useful when you need to retrieve information from documents",
)
class Response(BaseModel):
"""Final response to the question being asked"""
answer: str = Field(description="The final answer to respond to the user")
sources: List[int] = Field(
description="List of page chunks that contain answer to the question. Only include a page chunk if it contains relevant information"
)
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant who retrieves information from documents"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
llm_with_tools = llm.bind(
functions=[
# The retriever tool
format_tool_to_openai_function(retriever_tool),
# Response schema
convert_pydantic_to_openai_function(Response),
]
)
try:
agent = (
{
"input": lambda x: x["input"],
# Format agent scratchpad from intermediate steps
"agent_scratchpad": lambda x: format_to_openai_function_messages(
x["intermediate_steps"]
),
}
| prompt
| llm_with_tools
| parse
)
agent_executor = AgentExecutor(tools=[retriever_tool], agent=agent, verbose=True, return_intermediate_steps=True)
except Exception as e:
print(e)
print("error instanciating the agent")
return agent_executor
```
Gives me the following error:
JSONDecodeError Traceback (most recent call last)
File <command-2290979546782169>, line 5
3 # get the start time
4 st = time.time()
----> 5 answer = agent_executor.invoke(
6 {"input": text},
7 # return_only_outputs=True,
8
9 )
10 # get the end time
11 et = time.time()
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/chains/base.py:162, in Chain.invoke(self, input, config, **kwargs)
160 except BaseException as e:
161 run_manager.on_chain_error(e)
--> 162 raise e
163 run_manager.on_chain_end(outputs)
164 final_outputs: Dict[str, Any] = self.prep_outputs(
165 inputs, outputs, return_only_outputs
166 )
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
149 run_manager = callback_manager.on_chain_start(
150 dumpd(self),
151 inputs,
152 name=run_name,
153 )
154 try:
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
159 )
160 except BaseException as e:
161 run_manager.on_chain_error(e)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/agents/agent.py:1329, in AgentExecutor._call(self, inputs, run_manager)
1327 # We now enter the agent loop (until it returns something).
1328 while self._should_continue(iterations, time_elapsed):
-> 1329 next_step_output = self._take_next_step(
1330 name_to_tool_map,
1331 color_mapping,
1332 inputs,
1333 intermediate_steps,
1334 run_manager=run_manager,
1335 )
1336 if isinstance(next_step_output, AgentFinish):
1337 return self._return(
1338 next_step_output, intermediate_steps, run_manager=run_manager
1339 )
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/agents/agent.py:1055, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1046 def _take_next_step(
1047 self,
1048 name_to_tool_map: Dict[str, BaseTool],
(...)
1052 run_manager: Optional[CallbackManagerForChainRun] = None,
1053 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1054 return self._consume_next_step(
-> 1055 [
1056 a
1057 for a in self._iter_next_step(
1058 name_to_tool_map,
1059 color_mapping,
1060 inputs,
1061 intermediate_steps,
1062 run_manager,
1063 )
1064 ]
1065 )
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/agents/agent.py:1055, in <listcomp>(.0)
1046 def _take_next_step(
1047 self,
1048 name_to_tool_map: Dict[str, BaseTool],
(...)
1052 run_manager: Optional[CallbackManagerForChainRun] = None,
1053 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1054 return self._consume_next_step(
-> 1055 [
1056 a
1057 for a in self._iter_next_step(
1058 name_to_tool_map,
1059 color_mapping,
1060 inputs,
1061 intermediate_steps,
1062 run_manager,
1063 )
1064 ]
1065 )
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/agents/agent.py:1083, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1080 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
1082 # Call the LLM to see what to do.
-> 1083 output = self.agent.plan(
1084 intermediate_steps,
1085 callbacks=run_manager.get_child() if run_manager else None,
1086 **inputs,
1087 )
1088 except OutputParserException as e:
1089 if isinstance(self.handle_parsing_errors, bool):
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/agents/agent.py:386, in RunnableAgent.plan(self, intermediate_steps, callbacks, **kwargs)
374 """Given input, decided what to do.
375
376 Args:
(...)
383 Action specifying what tool to use.
384 """
385 inputs = {**kwargs, **{"intermediate_steps": intermediate_steps}}
--> 386 output = self.runnable.invoke(inputs, config={"callbacks": callbacks})
387 return output
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain_core/runnables/base.py:1774, in RunnableSequence.invoke(self, input, config)
1772 try:
1773 for i, step in enumerate(self.steps):
-> 1774 input = step.invoke(
1775 input,
1776 # mark each step as a child run
1777 patch_config(
1778 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
1779 ),
1780 )
1781 # finish the root run
1782 except BaseException as e:
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain_core/runnables/base.py:3074, in RunnableLambda.invoke(self, input, config, **kwargs)
3072 """Invoke this runnable synchronously."""
3073 if hasattr(self, "func"):
-> 3074 return self._call_with_config(
3075 self._invoke,
3076 input,
3077 self._config(config, self.func),
3078 **kwargs,
3079 )
3080 else:
3081 raise TypeError(
3082 "Cannot invoke a coroutine function synchronously."
3083 "Use `ainvoke` instead."
3084 )
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain_core/runnables/base.py:975, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
971 context = copy_context()
972 context.run(var_child_runnable_config.set, child_config)
973 output = cast(
974 Output,
--> 975 context.run(
976 call_func_with_variable_args,
977 func, # type: ignore[arg-type]
978 input, # type: ignore[arg-type]
979 config,
980 run_manager,
981 **kwargs,
982 ),
983 )
984 except BaseException as e:
985 run_manager.on_chain_error(e)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain_core/runnables/config.py:323, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
321 if run_manager is not None and accepts_run_manager(func):
322 kwargs["run_manager"] = run_manager
--> 323 return func(input, **kwargs)
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain_core/runnables/base.py:2950, in RunnableLambda._invoke(self, input, run_manager, config, **kwargs)
2948 output = chunk
2949 else:
-> 2950 output = call_func_with_variable_args(
2951 self.func, input, config, run_manager, **kwargs
2952 )
2953 # If the output is a runnable, invoke it
2954 if isinstance(output, Runnable):
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain_core/runnables/config.py:323, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
321 if run_manager is not None and accepts_run_manager(func):
322 kwargs["run_manager"] = run_manager
--> 323 return func(input, **kwargs)
File <command-2290979546782090>, line 11, in parse(output)
9 function_call = output.additional_kwargs["function_call"]
10 name = function_call["name"]
---> 11 inputs = json.loads(function_call["arguments"])
13 # If the Response function was invoked, return to the user with the function inputs
14 if name == "Response":
File /usr/lib/python3.10/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
341 s = s.decode(detect_encoding(s), 'surrogatepass')
343 if (cls is None and object_hook is None and
344 parse_int is None and parse_float is None and
345 parse_constant is None and object_pairs_hook is None and not kw):
--> 346 return _default_decoder.decode(s)
347 if cls is None:
348 cls = JSONDecoder
File /usr/lib/python3.10/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
332 def decode(self, s, _w=WHITESPACE.match):
333 """Return the Python representation of ``s`` (a ``str`` instance
334 containing a JSON document).
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
339 if end != len(s):
File /usr/lib/python3.10/json/decoder.py:353, in JSONDecoder.raw_decode(self, s, idx)
344 """Decode a JSON document from ``s`` (a ``str`` beginning with
345 a JSON document) and return a 2-tuple of the Python
346 representation and the index in ``s`` where the document ended.
(...)
350
351 """
352 try:
--> 353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
355 raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Invalid control character at: line 2 column 631 (char 632)
```
Can someone help?
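The failing call is the `json.loads(function_call["arguments"])` line in my custom `parse` function. A possible workaround I am considering (only a sketch, not verified against every response): decode with `strict=False`, which tolerates literal control characters such as raw newlines inside the JSON string values that the model sometimes emits.

```python
import json

def parse_arguments(function_call: dict) -> dict:
    # strict=False lets the decoder accept control characters (e.g. raw
    # newlines) inside string values, which is what the
    # "Invalid control character" error is complaining about
    return json.loads(function_call["arguments"], strict=False)
```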
### Description
The error occurs roughly half of the time.
### System Info
langchain==0.1.0
openai==1.7.0
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | JSONDecodeError: Invalid control character at: line 2 column 631 (char 632) when using langchain new agent | https://api.github.com/repos/langchain-ai/langchain/issues/16102/comments | 5 | 2024-01-16T18:04:37Z | 2024-05-02T16:05:24Z | https://github.com/langchain-ai/langchain/issues/16102 | 2,084,539,156 | 16,102 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain_community.vectorstores.redis import Redis
embedding = ...
redis_url = ...
schema_path = ...
vectorstore = Redis.from_existing_index(
embedding,
index_name='index',
redis_url=redis_url,
schema=schema_path,
)
retriever = vectorstore.as_retriever(search_type="similarity_distance_threshold", search_kwargs={"distance_threshold": 0.9, "k": 5})
retriever.aget_relevant_documents('query')
```
### Description
The asynchronous method `_aget_relevant_documents` is not defined for `RedisVectorStoreRetriever`. As such, the method of the parent of `RedisVectorStoreRetriever` (`VectorStoreRetriever`) is called. In `VectorStoreRetriever` however, `similarity_distance_threshold` is not available. This makes the chain crash when `RedisVectorStoreRetriever` is called asynchronously in a chain.
I will make a PR to solve this
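For reference, a minimal sketch of the kind of override I have in mind (the import path and exact details are assumptions to be confirmed in the PR): delegate to the vectorstore's async search so the distance-threshold search type is honoured on the async path as well.

```python
from langchain_core.callbacks import AsyncCallbackManagerForRetrieverRun
from langchain_community.vectorstores.redis import RedisVectorStoreRetriever  # import path assumed


class PatchedRedisRetriever(RedisVectorStoreRetriever):
    async def _aget_relevant_documents(
        self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun
    ):
        # mirror the sync branch instead of falling back to the generic
        # VectorStoreRetriever, which rejects "similarity_distance_threshold"
        if self.search_type == "similarity_distance_threshold":
            return await self.vectorstore.asimilarity_search(query, **self.search_kwargs)
        return await super()._aget_relevant_documents(query, run_manager=run_manager)
```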
### System Info
```
langchain 0.1.0
langchain-community 0.0.12
langchain-core 0.1.10
```
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [x] Async | RedisVectorStoreRetriever similarity search similarity_distance_threshold is not implemented asyncronously | https://api.github.com/repos/langchain-ai/langchain/issues/16099/comments | 1 | 2024-01-16T17:19:05Z | 2024-04-23T16:57:21Z | https://github.com/langchain-ai/langchain/issues/16099 | 2,084,474,502 | 16,099 |
[
"hwchase17",
"langchain"
]
| ### Feature request
When using `MessagesPlaceholder`, it always includes all messages. For some prompts I want the full message history, but for others, I want to limit it to just the most recent few messages.
This can be accomplished with `ConversationBufferWindowMemory`, but that limits the memory storage which is used for all my LLM prompts in the chain.
I currently accomplish this with a custom prompt generator, but it would be easier if there were an optional max_messages parameter to MessagesPlaceholder that limits the history to the given number of messages.
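As a point of comparison, this is roughly my current workaround (a sketch; `routing_prompt`, `llm` and the `history` key are placeholders for my own setup): trim the message list in a small runnable step before the prompt instead of inside the prompt itself.

```python
from langchain_core.runnables import RunnableLambda

def trim_history(inputs: dict, max_messages: int = 4) -> dict:
    # keep only the most recent messages for this particular prompt
    return {**inputs, "history": inputs["history"][-max_messages:]}

routing_chain = RunnableLambda(trim_history) | routing_prompt | llm
```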
### Motivation
I have a chain with multiple LLM calls. For one of them, the routing prompt, it works best when passed only the last few messages in the conversation. Right now I have to use a custom prompt generator to accomplish this. I'd really like to be able to just add a limit to MessagesPlaceholder.
### Your contribution
I will submit a PR, | Allow MessagesPlaceholder to include a maximum number of messages | https://api.github.com/repos/langchain-ai/langchain/issues/16096/comments | 1 | 2024-01-16T16:48:18Z | 2024-04-23T17:00:55Z | https://github.com/langchain-ai/langchain/issues/16096 | 2,084,423,853 | 16,096 |
[
"hwchase17",
"langchain"
]
| ### Feature request
This feature request adds support for NVIDIA NeMo embedding models.
### Motivation
I am constantly in need of custom embedding models, such as the embedding models from NeMo.
### Your contribution
I will open a PR to add support for this. | Embedding for NVIDIA NeMo | https://api.github.com/repos/langchain-ai/langchain/issues/16095/comments | 1 | 2024-01-16T16:46:07Z | 2024-04-23T17:02:50Z | https://github.com/langchain-ai/langchain/issues/16095 | 2,084,420,246 | 16,095 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
``` Python
from langchain_community.document_loaders import DirectoryLoader
directory = '/content/drive/MyDrive/History_QA_dataset'
def load_docs(directory):
return DirectoryLoader(directory).load()
documents = load_docs(directory)
```
### Description
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/pdf2image/pdf2image.py in pdfinfo_from_path(pdf_path, userpw, ownerpw, poppler_path, rawdates, timeout, first_page, last_page)
580 env["LD_LIBRARY_PATH"] = poppler_path + ":" + env.get("LD_LIBRARY_PATH", "")
--> 581 proc = Popen(command, env=env, stdout=PIPE, stderr=PIPE)
582
19 frames
FileNotFoundError: [Errno 2] No such file or directory: 'pdfinfo'
During handling of the above exception, another exception occurred:
PDFInfoNotInstalledError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/pdf2image/pdf2image.py in pdfinfo_from_path(pdf_path, userpw, ownerpw, poppler_path, rawdates, timeout, first_page, last_page)
605
606 except OSError:
--> 607 raise PDFInfoNotInstalledError(
608 "Unable to get page count. Is poppler installed and in PATH?"
609 )
PDFInfoNotInstalledError: Unable to get page count. Is poppler installed and in PATH?
```
### System Info
Windows: 11
pip == 23.3.1
python == 3.10.10
langchain == 0.1.0
transformers == 4.36.2
sentence_transformers == 2.2.2
unstructured == 0.12.0
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | PDFInfoNotInstalledError: Unable to get page count. Is poppler installed and in PATH? | https://api.github.com/repos/langchain-ai/langchain/issues/16085/comments | 8 | 2024-01-16T13:44:40Z | 2024-05-21T03:03:06Z | https://github.com/langchain-ai/langchain/issues/16085 | 2,083,997,805 | 16,085 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
I have omitted as much as possible irrelevant portions:
```python
def get_text(docs):
if not docs:
return None
all_text = ""
for doc in docs:
reader = PdfReader(doc)
for page in reader.pages:
all_text += page.extract_text()
return all_text
def get_chunks(text):
splitter = CharacterTextSplitter(
separator='\n',
chunk_size=1000,
chunk_overlap=200,
)
chunks = splitter.split_text(text)
return chunks
def get_vectorstore(chunks, model='models/embedding-001'):
client = qdrant_client.QdrantClient(
os.environ['QDRANT_HOST'],
api_key=os.environ["QDRANT_API_KEY"],
)
vectors_config = qdrant_client.http.models.VectorParams(
size=768,
distance=qdrant_client.http.models.Distance.COSINE
)
client.recreate_collection(
collection_name=os.environ["QDRANT_COLLECTION"],
vectors_config=vectors_config
)
embeddings = GoogleGenerativeAIEmbeddings(model=model)
vectorstore = Qdrant(
client=client, collection_name=os.environ["QDRANT_COLLECTION"],
embeddings=embeddings,
)
vectorstore.add_texts(chunks)
return vectorstore
def get_conversation_chain(vectorstore):
llm = ChatGoogleGenerativeAI(model="gemini-pro", convert_system_message_to_human=True, temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=vectorstore.as_retriever(),
memory=memory,
)
return chain
docs = st.file_uploader("Upload your documents:", accept_multiple_files=True, type=["pdf", "txt", "csv"])
user_input = "Some question"
if st.button("Proess"):
with st.spinner("Processing..."):
text = get_text(docs)
chunks = get_chunks(text)
vectorstore = get_vectorstore(chunks)
chain = get_conversation_chain(vectorstore)
response = chain({'question': user_input})
```
### Description
I'm trying to use ``ConversationalRetrievalChain`` with the ``ChatGoogleGenerativeAI`` integration. This seems to perform rather poorly in several scenarios involving PDF documents, with 2 issues frequently arising:
1. The model seems to default quite often to "I don't know" or "I cannot find the information in the context provided", despite the fact that the context seems to have the requested information within (so it's unlikely to be a problem with the vector DB).
2. Some of the LangChain-built system prompts (converted to human messages via the recently-added hyperparameter) which ask the model to translate something to the "original language" completely confuse it, making it output its response in a random language. I have tried using the prompt created by LangChain with the Google API directly, and the response is also bad. Example prompt created by LangChain from my conversation: ``"Human: Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n\nChat History:\n\nHuman: what does the third amendment say?\nAssistant: I do not have access to the context necessary to answer that question.\nFollow Up Input: what does the second amendment say?\nStandalone question:"``. This outputs a message in a random language on both the Google Gemini API and LangChain.
Example document: [Bill of Rights](https://www.archives.gov/files/legislative/resources/education/bill-of-rights/images/handout-3.pdf). Example questions: (see example prompt above).
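For issue 2, I have been experimenting with overriding the condense-question prompt so the rephrasing step no longer asks for the "original language". A rough sketch of what I mean (not claiming this is the recommended fix):

```python
from langchain.prompts import PromptTemplate

CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(
    "Given the following conversation and a follow up question, rephrase the "
    "follow up question to be a standalone question in English.\n\n"
    "Chat History:\n{chat_history}\n"
    "Follow Up Input: {question}\n"
    "Standalone question:"
)

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    memory=memory,
    condense_question_prompt=CONDENSE_QUESTION_PROMPT,
)
```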
### System Info
langchain==0.0.351
langchain-community==0.0.4
langchain-core==0.1.1
langchain-experimental==0.0.47
langchain-google-genai==0.0.5
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | ConversationalRetrievalChain with ChatGoogleGenerativeAI performs poorly | https://api.github.com/repos/langchain-ai/langchain/issues/16081/comments | 2 | 2024-01-16T12:21:54Z | 2024-04-23T17:21:35Z | https://github.com/langchain-ai/langchain/issues/16081 | 2,083,828,687 | 16,081 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I would like to use the research assistant template with OpenAI models on Azure. Can this be integrated into the template somehow ? I can of course modify the code post template expansion (ChatOpenAI -> AzureChatOpenAI etc.), but that would be lost upon changes / enhancements to the templates.
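To illustrate the kind of switch I mean, this is roughly the change I make by hand after expanding the template (the endpoint, deployment and API version below are placeholders for my own Azure resource):

```python
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_endpoint="https://<my-resource>.openai.azure.com/",  # placeholder
    azure_deployment="<my-gpt-4-deployment>",                  # placeholder
    openai_api_version="2023-07-01-preview",                   # example version
    temperature=0,
)
```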
### Motivation
Cannot use OpenAI models directly due to company policy.
### Your contribution
Happy to spend some effort if someone can point me to a good way of adding e.g. a switch to the template (don't want to duplicate the whole template). | Make research assistant template work with OpenAI models on AzureAI | https://api.github.com/repos/langchain-ai/langchain/issues/16079/comments | 1 | 2024-01-16T10:56:49Z | 2024-04-23T17:00:49Z | https://github.com/langchain-ai/langchain/issues/16079 | 2,083,642,796 | 16,079 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
import os
from docarray.index.backends.qdrant import qdrant_client
from langchain.schema import Document
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Qdrant
from qdrant_client import QdrantClient
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
os.environ['OPENAI_API_KEY'] = "mykey"
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
client = QdrantClient(host="192.168.0.22", port=6333)
conversation_doc = Document(
id="1",
page_content="openhistory",
metadata={
"timestamp": "2023-01-16",
"user": "Ai",
},
)
print(conversation_doc)
vectorstore = Qdrant.from_documents(
[conversation_doc],
client=client,
embedding=embeddings,
location=":memory:",
collection_name="test116",
)
# client.create_collection(
# collection_name="test116",
# vectors_config=models.VectorParams(
# distance=models.Distance.COSINE,
# size=384),
# optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
# hnsw_config=models.HnswConfigDiff(on_disk=True, m=16, ef_construct=100)
# )
print("找到ㄉtable:")
collections = client.get_collections()
for collection in collections:
print(collection)
metadata_field_info = [
AttributeInfo(
name="openhistory",
description="talk_history_embeddings",
type="string or list[string]",
)
]
document_content_description = "chat_history"
llm = ChatOpenAI(
temperature=0.7,
model="gpt-3.5-turbo",
max_tokens=100
)
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
memory = ConversationBufferMemory()
conversation = ConversationChain(llm=llm, memory=memory, verbose=True, )
while True:
chat_history = memory.load_memory_variables({})
client.get_collection(collection_name="test116")
print("成功載入history")
question = input('冒險者:')
response = conversation.predict(input=f"Q: {question}\nA:")
print(response)
response_str = str(response[0])
response_vector = embeddings.embed_query(response_str)[0]
conversation_doc = Document(
id="1",
page_content=response,
metadata={
"user": f"{question}",
"response_vector": response_vector
},
)
vectorstore.add_documents([conversation_doc])
print(conversation_doc)
if question.lower() == 'bye':
break
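A sketch of what I expected to have to do (is this the intended pattern?): query the vector store for relevant past turns and prepend them to the prompt before calling the chain. The names below reuse the objects defined in the code above.

```python
# pull relevant past turns out of Qdrant and feed them to the chain
past_turns = vectorstore.similarity_search(question, k=3)
history_text = "\n".join(doc.page_content for doc in past_turns)
response = conversation.predict(
    input=f"Relevant history:\n{history_text}\n\nQ: {question}\nA:"
)
```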
### Idea or request for content:
_No response_ | Why can't the response refer to the vector data of the historical conversation in test116 for achieving long-term memory in replies? | https://api.github.com/repos/langchain-ai/langchain/issues/16074/comments | 1 | 2024-01-16T09:10:31Z | 2024-04-23T16:59:32Z | https://github.com/langchain-ai/langchain/issues/16074 | 2,083,390,517 | 16,074 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
This code gives me an weird error:
```
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
def create_weaviate_new_RAG(client, embeddings, llm, class_name_rich, class_properties_rich, class_text_key):
vector_store = Weaviate(client=client,
index_name=class_name_rich,
text_key=class_text_key,
embedding=embeddings,
attributes=class_properties_rich,
#by_text=False,
)
retriever=vector_store.as_retriever(
search_type= "mmr",
search_kwargs={
"k": 4,
"fetch_k": 20,
}
)
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant who retrieves information from documents"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
rag_pipe = (
{"context": retriever, "question": RunnablePassthrough()}
| prompt
| llm
| StrOutputParser()
)
return rag_pipe
rag_pipe = create_weaviate_new_RAG(client, embeddings, llm_chat, class_name, ["chunk_id", "doc_name", "doc_path"], "chunk_text")
text = "what is ai policy?"
rag_pipe.invoke(text)
```
Gives me this error:
```
"Input to ChatPromptTemplate is missing variables {'input', 'agent_scratchpad'}. Expected: ['agent_scratchpad', 'input'] Received: ['context', 'question']"
```
I am using openai==1.7.0 and langchain==0.1.0
This only started to appear after adding the new RAG code from LangChain.
Can someone help?
Thank you :)
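If I understand the error correctly, the prompt declares `input`/`agent_scratchpad` while the runnable map feeds `context`/`question`. A sketch of the prompt I think the RAG pipe actually expects (agent-style placeholders removed), though I am not sure this is the intended fix:

```python
prompt = ChatPromptTemplate.from_messages(
    [
        ("system",
         "You are a helpful assistant who retrieves information from documents.\n"
         "Context:\n{context}"),
        ("user", "{question}"),
    ]
)

rag_pipe = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
```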
### Description
I expect the function to just run smoothly. Most of the code is copied from the LangChain documentation.
### System Info
openai==1.7.0 and langchain==0.1.0
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Input to ChatPromptTemplate is missing variables {'input', 'agent_scratchpad'}. Expected: ['agent_scratchpad', 'input'] | https://api.github.com/repos/langchain-ai/langchain/issues/16073/comments | 6 | 2024-01-16T07:31:25Z | 2024-07-05T11:31:03Z | https://github.com/langchain-ai/langchain/issues/16073 | 2,083,177,951 | 16,073 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
`
llm = AzureChatOpenAI(
openai_api_version=OPENAI_API_VERSION,
azure_deployment=OPENAI_DEPLOYMENT_NAME,
n=1,
temperature=0.0,
model_kwargs={"presence_penalty": -2}
)
from langchain.prompts import PromptTemplate
from langchain.chains.openai_functions import create_openai_fn_chain
functions = [
{
"name": "get_pizza_info",
"description": "Get name and price of a pizza of the restaurant",
"parameters": {
"type": "object",
"properties": {
"pizza_name": {
"type": "string",
"description": "The name of the pizza, e.g. Hawaii",
},
},
"required": ["pizza_name"],
},
},
{
"name": "place_order",
"description": "Place an order for a pizza from the restaurant",
"parameters": {
"type": "object",
"properties": {
"pizza_name": {
"type": "string",
"description": "The name of the pizza you want to order, e.g. Margherita",
},
"quantity": {
"type": "integer",
"description": "The number of pizzas you want to order",
"minimum": 1
},
"address": {
"type": "string",
"description": "The address where the pizza should be delivered",
},
},
"required": ["pizza_name", "quantity", "address"],
},
}
]
template = """You are an AI chatbot having a conversation with a human.
Human: {human_input}
AI: """
prompt = PromptTemplate(input_variables=["human_input"], template=template)
chain = create_openai_fn_chain(functions, llm, prompt)
while True:
user_input = input("Enter you query -------> ")
if not user_input:
break
print(chain.invoke(user_input))
`
### Description
I am using openai 0.28.1 and the latest version of langchain.
Whenever the user asks a general query like "Hi" or "I need help", the chain results in an error, but the same setup works with the plain OpenAI implementation.
Error: `langchain_core.exceptions.OutputParserException: Could not parse function call: 'function_call'`
Can anyone suggest a better approach or any clue to fix this problem?
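One idea I am considering (a sketch, not sure it is the recommended pattern): bind the functions to the model directly instead of forcing the function-calling chain, and only parse the arguments when the model actually returns a `function_call`:

```python
import json

llm_with_functions = llm.bind(functions=functions)

message = llm_with_functions.invoke("Hi")
function_call = message.additional_kwargs.get("function_call")
if function_call:
    # the model chose to call one of the functions; parse its arguments
    args = json.loads(function_call["arguments"])
    print(function_call["name"], args)
else:
    # plain conversational reply, e.g. a greeting
    print(message.content)
```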
### System Info
python 3.9
openai 0.28.1
langchain 0.1.0
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Issue will function calling using Langchain, incase of OpenAI it works fine | https://api.github.com/repos/langchain-ai/langchain/issues/16069/comments | 2 | 2024-01-16T06:59:33Z | 2024-04-23T17:04:42Z | https://github.com/langchain-ai/langchain/issues/16069 | 2,083,138,870 | 16,069 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
qa_system_prompt = """Note\
Make sure to address the user's queries politely.\
Compose a comprehensive reply to the query using the search results given.\
1.Respond to the questions based on the given context. \
2.please refrain from inventing responses.\
3.Use as much detail when as possible when responding. Answer step-by-step.\
4.Locate the answer within the given context.\
5.Keep the response within 3000 tokens.\
6. If a user asks the questions like please summarise the above, then you should summarize the previous answer you provided which can be fetched from the history\
7. If the user queries based on its previous questions please surround your answers based on the history of previous questions\
IMPORTANT NOTE: Don't answer question outside of provided context and kindly respond with "I apologize, but that falls outside of my current scope of knowledge."\
Use the following context to answer the question:\
------
CONTEXT:
{context}
"""
```
### Description
Using this prompt, everything was working fine in v0.0.336, but after I moved to the new v0.1.0 code, no matter how I write the prompt it does not follow the instruction mentioned below:
IMPORTANT NOTE: Don't answer question outside of provided context and kindly respond with "I apologize, but that falls outside of my current scope of knowledge."
It works for a few questions and not for others.
### System Info
%pip install --upgrade langchain langchain-community langchainhub langchain-openai chromadb bs4
python 3.9
model: gpt-3.5-turbo-16k
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Created a RAG with history it's working but not following the prompt instructions properly | https://api.github.com/repos/langchain-ai/langchain/issues/16068/comments | 8 | 2024-01-16T06:08:41Z | 2024-01-20T07:08:20Z | https://github.com/langchain-ai/langchain/issues/16068 | 2,083,083,972 | 16,068 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, return_direct=True)
tools = [PythonREPLTool()]
description = (
"Useful for when you need to answer questions about employess. "
"You must not input SQL. Use this more than the Python tool if the question "
"is about employee data, like 'how many employees are there?' or 'count the number of department'"
)
repl_tool = Tool(
name="Data", # We'll just call it 'Data'
func=db_chain.run,
description=description,
)
tools.append(repl_tool)
prefix = "Fulfill the following request as best you can. You have access to the following tools:"
# Remind the agent of the Data tool, and what types of input it expects
suffix = (
"Begin! When looking for data, do not write a SQL query. "
"Pass the relevant portion of the request directly to the Data tool in its entirety."
"\n\n"
"Request: {input}\n"
"{agent_scratchpad}"
)
# The agent's prompt is built with the list of tools, prefix, suffix, and input variables
prompt = ZeroShotAgent.create_prompt(
tools, prefix=prefix, suffix=suffix, input_variables=["input", "agent_scratchpad"]
)
# Set up the llm_chain
llm_chain = LLMChain(llm=llm, prompt=prompt)
# Specify the tools the agent may use
tool_names = [tool.name for tool in tools]
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names)
# Create the AgentExecutor
agent_executor = AgentExecutor.from_agent_and_tools(
agent=agent, tools=tools, verbose=True, handle_parsing_errors=True
)
answer = agent_executor.run(question)
print(answer)
### Description
Using langchain-experimental, I'm trying to interact with a SQL DB and also plot graphs from natural language. With the approach above using the Python REPL tool, the graph is displayed, but how do I save it, similar to the `save_charts=True` option used with `create_pandas_dataframe_agent`?
Also, between the Python REPL tool and PandasAI, which is the better way to generate graphs/plots from natural language?
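The only workaround I have found so far (just a sketch): explicitly ask the agent to persist the figure with matplotlib inside the Python tool, e.g.:

```python
# ask the REPL tool to write the figure to disk instead of only displaying it
question = (
    "Plot the number of employees per department as a bar chart and save it "
    "to 'employees_per_department.png' using matplotlib's savefig."
)
answer = agent_executor.run(question)
print(answer)
```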
### System Info
python: 3.11
langchain: latest version
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | how to save plots/graphs when using PythonREPLTool in langchain experimental | https://api.github.com/repos/langchain-ai/langchain/issues/16067/comments | 10 | 2024-01-16T05:39:37Z | 2024-08-05T16:07:51Z | https://github.com/langchain-ai/langchain/issues/16067 | 2,083,056,200 | 16,067 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import load_tools, initialize_agent, AgentType, Agent
from dotenv import dotenv_values
config = dotenv_values(".env")
GOOGLE_API_KEY = config["GOOGLE_API_KEY"]
model = ChatGoogleGenerativeAI(
model="gemini-pro", google_api_key=GOOGLE_API_KEY, temperature=0.8, max_tokens=32000, top_p=0.9, top_k=1)
tools = load_tools(["serpapi", "wikipedia"], llm=model)
tool_names = [tool.name for tool in tools]
FORMAT_INSTRUCTIONS="""
The context includes relevant information you can use to generate a an email response:
{context}
Here is the students email message:
{message}
To use a tools, please use the following format:
'''
Thought: Do I need to use a tool? Yes
Action: the action to take.
Action Input: the input to the action
Observation: the result of the action
Final Answer: the final answer
'''
Write a professional reply to {sender_name} answering his questions. Do not include anything that is not relevant to Email sent by {sender_name}.
Always represent ... and address the student by his name.
If the context does not contain relevant information, try using the tools to search for answers to write the response.
Don't try searching anything in the context. Only use it as source of information to write the response.
If you don't find anything relevant in your search or in the context, just write the response with your best guess.
"""
PREFIX = '''You are an email assistant with more than 10 years of experience working at ....'''
SUFFIX='''
Begin!
Instructions: {input}
{agent_scratchpad}
'''
prompt = PromptTemplate(
template=FORMAT_INSTRUCTIONS,
input_variables=["context", "sender_name", "message"],
)
agent = initialize_agent(
tools,
llm=model,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
handle_parsing_errors=True,
agent_kwargs={
"suffix": SUFFIX,
"prefix": PREFIX
}
)
result = agent(prompt.format(context=retriever, sender_name=sender_name, message=message))
print(result["output"])
```
### Description
I am trying to use the langchain-google-genai library to build an agent that answers emails.
I expect to see a successful agent execution, which I did see at first.
However, I am now getting an error:
```
Retrying langchain_google_genai.chat_models._chat_with_retry.<locals>._chat_with_retry in 2.0 seconds as it raised InternalServerError: 500 An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting.
Retrying langchain_google_genai.chat_models._chat_with_retry.<locals>._chat_with_retry in 4.0 seconds as it raised InternalServerError: 500 An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting.
Retrying langchain_google_genai.chat_models._chat_with_retry.<locals>._chat_with_retry in 8.0 seconds as it raised InternalServerError: 500 An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting.
Retrying langchain_google_genai.chat_models._chat_with_retry.<locals>._chat_with_retry in 16.0 seconds as it raised InternalServerError: 500 An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting.
```
### System Info
```
langchain==0.1.0
langchain-community==0.0.12
langchain-core==0.1.10
langchain-google-genai==0.0.6
```
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | 500 Internal Error when using google genai langchain with agents | https://api.github.com/repos/langchain-ai/langchain/issues/16066/comments | 2 | 2024-01-16T05:06:54Z | 2024-07-28T19:30:45Z | https://github.com/langchain-ai/langchain/issues/16066 | 2,083,029,208 | 16,066 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
llm = VertexAI(model_name='gemini-pro', max_output_tokens=512, temperature=0.2)
data = loader.load()
embeddings = VertexAIEmbeddings(model_name='textembedding-gecko@003')
vectorstore = Chroma.from_documents(data, embeddings)
retriever = vectorstore.as_retriever()
template = """
Use the following pieces of context to answer the question at the end.
- If you think you need more information to find proper answer, ask user to clarify.
- If you cannot find a proper answer from the reference, just say we don't have a data from the reference.
Use three sentences maximum and keep the answer as concise as possible.
Context: {context}
Question: {question}
Helpful Answer:
**Resource:** [reference source name]
"""
template2 = """
If the user query lacks of below data, please ask user to get additional data.
- data1
- data2
If the user query has enough data, response with the context directly.
Context: {context}
Question: {question}
Answer:
"""
rag_prompt_custom = ChatPromptTemplate.from_template(template)
multi_turn_prompt_custom = ChatPromptTemplate.from_template(template2)
rag_chain = (
{"context": retriever, "question": RunnablePassthrough()}
| rag_prompt_custom
| llm
| StrOutputParser()
)
response = rag_chain.invoke(message)
print(response)
```
### Description
I would like to have two prompt-based steps in a single RAG chain.
Below is what I am thinking about:
1. Based on the query and the context retrieved from the document, find the proper answer.
2. With the generated (found) answer and the user's prompt, check whether the user's prompt lacks data. If it lacks data, ask the user for additional data. If there is enough data to find the answer from the document, respond with the found answer directly.
I have no idea how to merge these two steps into one chain.
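What I have tried so far, roughly (not sure this is the right way to compose them): pipe the first chain's answer in as the context of the second prompt, reusing the objects defined above.

```python
answer_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | rag_prompt_custom
    | llm
    | StrOutputParser()
)

clarify_chain = (
    {"context": answer_chain, "question": RunnablePassthrough()}
    | multi_turn_prompt_custom
    | llm
    | StrOutputParser()
)

response = clarify_chain.invoke(message)
```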
### System Info
langchain==0.0.352
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | multiple prompt in a single RAG chain | https://api.github.com/repos/langchain-ai/langchain/issues/16063/comments | 4 | 2024-01-16T03:49:37Z | 2024-07-03T05:43:47Z | https://github.com/langchain-ai/langchain/issues/16063 | 2,082,970,379 | 16,063 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
template = """
**Role:**
You are a helpful assistant.
**Context:**
You have to only use a reference stored document to generate a response.
CONTEXT: {context}
**Task:**
1. task 1
- some important requirements for task 1
2. task 2
- some important requirements for task 2
3. task 3
- some important requirements for task 3
Question: {question}
Helpful Answer:
**Resource:** [reference source name]
"""
qa_prompt = ChatPromptTemplate.from_messages(
[
("system", template),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{question}"),
]
)
rag_chain = (
RunnablePassthrough.assign(
context=contextualized_question | retriever | format_docs
)
| qa_prompt
| llm
)
response = rag_chain.invoke({"question": message, "chat_history": memory.get_history()})
print(response)
```
### Description
I tried to build the RAG chain with the prompt above.
I found that the tasks mentioned in the prompt are sometimes followed and sometimes not.
I would like to make the chain behave consistently across user queries.
Is there a way to do this?
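The only thing that has helped a little so far (a sketch; my model construction is not shown above, so the class and parameters below are assumptions about my own setup) is pinning the sampling parameters, e.g. temperature 0:

```python
# assumption: the chain uses a Vertex AI Gemini model created elsewhere
from langchain_community.llms import VertexAI

llm = VertexAI(model_name="gemini-pro", temperature=0, max_output_tokens=512)
```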
### System Info
langchain==0.0.352
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | inconsistent performance of langchain rag | https://api.github.com/repos/langchain-ai/langchain/issues/16056/comments | 3 | 2024-01-16T00:53:43Z | 2024-01-16T03:40:22Z | https://github.com/langchain-ai/langchain/issues/16056 | 2,082,852,718 | 16,056 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
# No code provided
# LangSmith trace link in Description
```
### Description
I tested a sample of CrewAI code, which is based on LangChain, and I found, again :(, that tool_input is incorrectly set, which causes incorrect answers.
I tracked it in LangSmith and here is the RunnableSequence's input:
```
input: Investigate which AI / LLM assisted IDE assistant can assist coding related questions not only related to the selected code or the opened file, but all the project code files, the console and terminal messages. Check also the most recent IDEs, not only the most popular ones. Your final answer MUST be a full analysis report
tool_names: Google Search
tools: "Google Search: Search Google for recent results."
chat_history: ""
intermediate_steps: []
```
and output:
```
output:
tool: Google Search
tool_input: AI / LLM assisted IDE assistant for coding related questions
log: |-
Thought: Do I need to use a tool? Yes
Action: Google Search
Action Input: "AI / LLM assisted IDE assistant for coding related questions"
type: AgentAction
```
so we can see the original task of:
_**"Investigate which AI / LLM assisted IDE assistant can assist coding related questions not only related to the selected code or the opened file, but all the project code files, the console and terminal messages. Check also the most recent IDEs, not only the most popular ones. Your final answer MUST be a full analysis report"**_
was transformed to a Google search for:
_**"AI / LLM assisted IDE assistant for coding related questions"**_
which obviously doesn't provide search results relevant to the original task!
The trace with Zephyr-7B via Hugging Face: https://smith.langchain.com/public/d4076fff-78d7-48ab-bdfe-d463bd1ffcb3/r
The same with gpt-4: https://smith.langchain.com/public/1fa8da61-6cc9-448c-8aa3-fd75d930e623/r
**So what and where should we finetune, modify to keep search term or any other tool_input correct, without huge distortion?**
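One direction I am experimenting with (only a sketch; `search` stands for the existing Google Search wrapper, which is an assumption about my setup): wrap the tool so I can see exactly what the agent passes in, and make the tool description ask for the full task verbatim.

```python
from langchain.tools import Tool

def search_with_logging(query: str) -> str:
    # log the exact tool_input the agent produced before running the search
    print(f"tool_input received: {query!r}")
    return search.run(query)  # `search` = existing Google Search wrapper (assumption)

google_search_tool = Tool(
    name="Google Search",
    func=search_with_logging,
    description=(
        "Search Google for recent results. Pass the full research task "
        "verbatim as the input; do not shorten or rephrase it."
    ),
)
```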
### System Info
Windows 10
Python 3.11.6
langchain-core: 0.1.10
langchain-community: 0.0.12
crewai: 0.1.32
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [x] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | In RunnableSequence Action input / tool_input incorrectly set, ultimately causing incorrect answers, results | https://api.github.com/repos/langchain-ai/langchain/issues/16042/comments | 6 | 2024-01-15T17:04:57Z | 2024-01-23T09:17:55Z | https://github.com/langchain-ai/langchain/issues/16042 | 2,082,409,723 | 16,042 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
I implemented a custom retriever like this:
```
class CustomUserRetriever(BaseRetriever):
vectorstore: VectorStore
username: str # New attribute to store the filename
def clean_metadata(self, doc):
metadata = doc.metadata
        return {
"Title": metadata["title"],
"source": metadata["title"],
"chunk_id": metadata["chunk_id"]
}
def get_relevant_documents(self, query):
docs = []
for doc in self.vectorstore.similarity_search(query, search_type="hybrid", k=5, filters=f"search.ismatch('{self.username.split('@')[0]}', 'usernames_result') "):
cleaned_metadata = self.clean_metadata(doc)
docs.append(Document(
page_content=doc.page_content,
metadata=cleaned_metadata))
return docs
```
And then I created a custom functionality which should return the answer, and the source of the answer. (document name with extension)
```
retriever = CustomUserRetriever(vectorstore=vector_store, username=user_name)
chain = RetrievalQAWithSourcesChain.from_chain_type(llm,
chain_type="stuff",
memory=memory,
retriever=retriever,
return_source_documents=True
)
#result = chain({"question": user_question})
result = chain({"question":user_question}, return_only_outputs=True)
```
Then I ask a question which should get a response from 2 document chunks, and the answer is generated correctly.
However, when asking the same question multiple times, sometimes I get an empty source list and sometimes I get a list of the source documents:


### Description
I want to get answers from internal documents, but the document retriever returns, by similarity, chunks that are not relevant to the generated answer, so those should be removed. Sometimes it works: I ask a question and the source chunks are returned; other times I get an empty source list.
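For completeness, this is how I currently read the result (a sketch, assuming the chain returns these keys in my setup): the `sources` string is written by the LLM and is the part that sometimes comes back empty, whereas `source_documents` is filled by the retriever when `return_source_documents=True`, so I fall back to it.

```python
result = chain({"question": user_question}, return_only_outputs=True)

answer = result["answer"]
llm_sources = result.get("sources", "")          # model-written, sometimes empty
retrieved = result.get("source_documents", [])   # retriever output
fallback_sources = [doc.metadata.get("source") for doc in retrieved]
```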
### System Info
python 3.11
langchain 0.0.353
openai 1.6.1
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | RetrievalQAWithSourcesChain result sources is not deterministic | https://api.github.com/repos/langchain-ai/langchain/issues/16038/comments | 1 | 2024-01-15T15:26:14Z | 2024-04-22T16:38:56Z | https://github.com/langchain-ai/langchain/issues/16038 | 2,082,253,988 | 16,038 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
This is a minor issue with metadata for pip. I opened a ticket against pip.
https://github.com/pypa/pip/issues/12476
But I was told that I should report the issue to this project.
[langchain_0.0.27.log](https://github.com/langchain-ai/langchain/files/13940154/langchain_0.0.27.log)
If a langchain version exists that doesn't use future feature annotations, pip should install that one.
Otherwise, it should tell the user that no suitable version is available.
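For illustration only (I know the project uses its own build tooling, so this is just the general mechanism, not the actual configuration): a `python_requires` constraint in the package metadata is what lets pip skip releases that need a newer interpreter.

```python
# illustration of the packaging mechanism, not langchain's real build script
from setuptools import setup

setup(
    name="example-package",
    version="0.0.27",
    python_requires=">=3.7",  # `from __future__ import annotations` needs 3.7+
)
```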
### Description
I installed langchain 0.0.27 on Python 3.6.5 on my Windows 10 machine successfully, but I ran into an issue when I tried to import the library:
from langchain.llms import Ollama
SyntaxError: future feature annotations is not defined
Per [PEP 563](https://www.python.org/dev/peps/pep-0563/#enabling-the-future-behavior-in-python-3-7), future annotations were introduced in Python 3.7. That means langchain 0.0.27, which depends on future feature annotations, should NOT be installable on Python 3.6.5.
### System Info
pip 21.3.1
python 3.6.5/Windows 10
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | langchain 0.0.27 should NOT be installed on python 3.6.5 | https://api.github.com/repos/langchain-ai/langchain/issues/16037/comments | 3 | 2024-01-15T14:54:25Z | 2024-01-15T23:26:19Z | https://github.com/langchain-ai/langchain/issues/16037 | 2,082,196,329 | 16,037 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
embeddings = HuggingFaceEmbeddings(
model_name="mistralai/Mistral-7B-v0.1", # Provide the pre-trained model's path
)
cache_embeddings = CacheBackedEmbeddings.from_bytes_store(embeddings, cached_dir)
vectorstore = FAISS.from_documents(docs, cache_embeddings)
```
Running the code above produces the following error:
```
ValueError                                Traceback (most recent call last)
Cell In[8], line 1
----> 1 vectorstore = FAISS.from_documents(docs, cache_embeddings)
      3 retriver = vectorstore.as_retriever(reduce_k_below_max_tokens=True)
      4 # retriver = vectorstore.as_retriever()

File ~/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_core/vectorstores.py:508, in VectorStore.from_documents(cls, documents, embedding, **kwargs)
    506 texts = [d.page_content for d in documents]
    507 metadatas = [d.metadata for d in documents]
--> 508 return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)

File ~/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_community/vectorstores/faiss.py:913, in FAISS.from_texts(cls, texts, embedding, metadatas, ids, **kwargs)
    886 @classmethod
    887 def from_texts(
    888     cls,
    (...)
    893     **kwargs: Any,
    894 ) -> FAISS:
    895     """Construct FAISS wrapper from raw documents.
    896
    897     This is a user friendly interface that:
    (...)
    911         faiss = FAISS.from_texts(texts, embeddings)
    912     """
...
File ~/anaconda3/envs/rag/lib/python3.11/site-packages/transformers/tokenization_utils_base.py (lines 2719-2720)
    (...)
   2719     and (max_length % pad_to_multiple_of != 0)
   2720 ):

ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`
```
### Description
I use the code above and I get the error shown. Why does this happen? I have looked everywhere and can't fix it. Could you explain how to fix it?
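The only workaround I have found so far (a sketch): point `HuggingFaceEmbeddings` at a model that is published as a sentence-transformers embedding model, whose tokenizer already defines a padding token, instead of the raw Mistral checkpoint.

```python
from langchain_community.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"  # any proper embedding model
)
cache_embeddings = CacheBackedEmbeddings.from_bytes_store(embeddings, cached_dir)
vectorstore = FAISS.from_documents(docs, cache_embeddings)
```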
### System Info
langchain 0.1.0
langchain-community 0.0.12
langchain-core 0.1.10
langsmith 0.0.80
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Using faiss.from_documents require tokenizer pad_token | https://api.github.com/repos/langchain-ai/langchain/issues/16036/comments | 3 | 2024-01-15T14:44:21Z | 2024-06-20T16:06:33Z | https://github.com/langchain-ai/langchain/issues/16036 | 2,082,177,846 | 16,036 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
``` Python
# Chunking the sentence with fixed size
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=20)
all_splits = text_splitter.split_documents(documents)
```
``` Python
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-base-en"
model_kwargs = {"device": "cpu"}
encode_kwargs = {"normalize_embeddings": True}
embeddings = HuggingFaceBgeEmbeddings(
model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs
)
```
``` Python
# Creating New Index:
!CREATE VECTOR INDEX KG_QuestionAnswering IF NOT EXISTS FOR (n:LabelName) ON (n.propertyName) OPTIONS {dimension: 1024};
```
``` Python
index_name="KG_QuestionAnswering"
# Instantiate Neo4j vector from documents
store = Neo4jVector.from_existing_index(
documents,
HuggingFaceBgeEmbeddings(),
url=os.environ["NEO4J_URI"],
username=os.environ["NEO4J_USERNAME"],
password=os.environ["NEO4J_PASSWORD"]
)
```
### Description
```
AttributeError Traceback (most recent call last)
<ipython-input-24-4e3c558558b3> in <cell line: 3>()
      1 index_name="KG_QuestionAnswering"
      2 # Instantiate Neo4j vector from documents
----> 3 store = Neo4jVector.from_existing_index(
      4     documents,
      5     HuggingFaceBgeEmbeddings(),

1 frames
/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/neo4j_vector.py in __init__(self, embedding, search_type, username, password, url, keyword_index_name, database, index_name, node_label, embedding_node_property, text_node_property, distance_strategy, logger, pre_delete_collection, retrieval_query, relevance_score_fn)
    236     self.search_type = search_type
    237     # Calculate embedding dimension
--> 238     self.embedding_dimension = len(embedding.embed_query("foo"))
    239
    240     # Delete existing data if flagged

AttributeError: 'list' object has no attribute 'embed_query'
```
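Looking at the signature in the traceback, I suspect the embedding object has to be the first argument and the documents should not be passed to `from_existing_index` at all. A sketch of what I think the call should look like (please correct me if this is not the intended usage), reusing the `embeddings` and `index_name` defined above:

```python
store = Neo4jVector.from_existing_index(
    embedding=embeddings,   # the HuggingFaceBgeEmbeddings instance, not the documents
    index_name=index_name,
    url=os.environ["NEO4J_URI"],
    username=os.environ["NEO4J_USERNAME"],
    password=os.environ["NEO4J_PASSWORD"],
)
```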
### System Info
``` YAML
Windows: 11
pip == 23.3.1
python == 3.10.10
langchain == 0.1.0
transformers == 4.36.2
sentence_transformers == 2.2.2
Neo4j == 5
```
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | AttributeError: 'list' object has no attribute 'embed_query' | https://api.github.com/repos/langchain-ai/langchain/issues/16034/comments | 1 | 2024-01-15T14:11:22Z | 2024-04-22T16:39:52Z | https://github.com/langchain-ai/langchain/issues/16034 | 2,082,115,236 | 16,034 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
My code was fully working until I updated langchain from 0.0.259 to 0.1.0.
```
embeddings = OpenAIEmbeddings(openai_api_key=os.environ["OPENAI_API_KEY"],
model=os.environ["AZUREOPENAI_EMBEDDINGS_MODEL"],
deployment=os.environ["AZUREOPENAI_EMBEDDINGS_DEPLOYMENT"])
db = ElasticVectorSearch(
embedding=embeddings,
elasticsearch_url=url,
index_name=os.environ["AZUREOPENAI_INDEX_NAME"],
ssl_verify={"ca_certs": os.environ["APP_CERT_PATH"]}
)
docs = db.similarity_search(query=query, # new
k=10)
# Here is when it crashes
```
### Description
The code crashes with a confusing error, and it looks like this is because `similarity_search` does not work in the new version.
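The environment variable names above suggest an Azure OpenAI deployment; with langchain 0.1.0 and openai 1.x, Azure endpoints generally need the Azure-specific embeddings class rather than plain `OpenAIEmbeddings`, which would explain a 404 during `similarity_search` (that is when the query embedding is requested). A hedged sketch, with the variable names carried over from the snippet above as assumptions:
```python
# Sketch only: point the Azure-specific class at the Azure resource and deployment.
from langchain_community.embeddings import AzureOpenAIEmbeddings

embeddings = AzureOpenAIEmbeddings(
    azure_endpoint=os.environ["AZUREOPENAI_ENDPOINT"],                 # assumed env var
    azure_deployment=os.environ["AZUREOPENAI_EMBEDDINGS_DEPLOYMENT"],
    openai_api_version="2023-05-15",                                   # placeholder version
    openai_api_key=os.environ["OPENAI_API_KEY"],
)
```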
### System Info
aiohttp==3.9.1
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.2.0
async-timeout==4.0.3
attrs==23.2.0
certifi==2023.11.17
charset-normalizer==3.3.2
click==8.1.7
dataclasses-json==0.6.3
distro==1.9.0
elastic-transport==8.11.0
elasticsearch==8.11.1
exceptiongroup==1.2.0
fastapi==0.109.0
frozenlist==1.4.1
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.2
httpx==0.26.0
idna==3.6
jsonpatch==1.33
jsonpointer==2.4
langchain==0.1.0
langchain-community==0.0.12
langchain-core==0.1.10
langsmith==0.0.80
marshmallow==3.20.2
multidict==6.0.4
mypy-extensions==1.0.0
numpy==1.26.3
openai==1.7.2
packaging==23.2
pydantic==2.5.3
pydantic_core==2.14.6
PyYAML==6.0.1
regex==2023.12.25
requests==2.31.0
sniffio==1.3.0
SQLAlchemy==2.0.25
starlette==0.35.1
tenacity==8.2.3
tiktoken==0.5.2
tqdm==4.66.1
typing-inspect==0.9.0
typing_extensions==4.9.0
urllib3==2.1.0
uvicorn==0.25.0
yarl==1.9.4
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async | ElasticVectorSearch Error similarity_search openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}} | https://api.github.com/repos/langchain-ai/langchain/issues/16032/comments | 3 | 2024-01-15T13:03:17Z | 2024-04-22T16:32:13Z | https://github.com/langchain-ai/langchain/issues/16032 | 2,081,995,442 | 16,032 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
pip install langchain-google-genai
### Description
I am trying to install the langchain-google-genai package; however, I am getting the error below:
ERROR: Could not find a version that satisfies the requirement langchain-google-genai==0.0.5 (from versions: none)
ERROR: No matching distribution found for langchain-google-genai==0.0.5
### System Info
python 3.8.3
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Unable to install langchain-google-genai | https://api.github.com/repos/langchain-ai/langchain/issues/16029/comments | 1 | 2024-01-15T11:47:08Z | 2024-04-22T16:32:09Z | https://github.com/langchain-ai/langchain/issues/16029 | 2,081,867,005 | 16,029 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
llm = LlamaCpp(model_path=model_loc, n_ctx=0, verbose=True, n_gpu_layers=-1)
chain = LLMChain(llm=llm, prompt=prompt)
result = chain(inputs={"context": context, "question": question})
answer = result["text"]
```
I do use the PromptTemplate, but this is not relevant to my issue.
### Description
I am using 2x A100 GPUs. What I have noticed is that there is GPU performance degradation over time.
This is my spool up data:
```
ggml_init_cublas: GGML_CUDA_FORCE_MMQ: no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 2 CUDA devices:
Device 0: NVIDIA A100-PCIE-40GB, compute capability 8.0, VMM: no
Device 1: NVIDIA A100-PCIE-40GB, compute capability 8.0, VMM: no
```
In my code, the above is part of an API that takes in a request and then responds to it. I am making 5 calls to the API one after another, and collating the data for a final summarisation call.
I am not overly impressed with the speeds here in any event - so if anyone has any tips for improving them, I'd appreciate it; I don't think 17.25 tokens per second is very good for dual A100s.
```
llama_print_timings: eval time = 57.98 ms / 1 runs ( 57.98 ms per token, 17.25 tokens per second)
llama_print_timings: total time = 6722.12 ms
llama_print_timings: eval time = 4804.66 ms / 70 runs ( 68.64 ms per token, 14.57 tokens per second)
llama_print_timings: total time = 17722.48 ms
llama_print_timings: eval time = 5950.61 ms / 77 runs ( 77.28 ms per token, 12.94 tokens per second)
llama_print_timings: total time = 23459.58 ms
```
I would have expected the "overall" time to change but the actual tokens per second to remain somewhat consistent - but instead it just gets worse.
I also created a GPU monitor tool, and in that tool, the first call is utilising 25% and 20% for GPU 1 and 2 - but then it goes down to 20% and 17% and then 16% and 15%, and when making a LOT of calls, I have seen it as low as 9% on each GPU - at this point the response time is a minute plus.
Why am I writing a bug report here rather than treating it as a local problem? Because I ran the same workflow with llama-cpp-python directly - without LangChain - and this was not a problem; there was no degradation over time.
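For reference, a minimal hedged sketch of the kind of direct llama-cpp-python run used for that comparison (prompt text and token limit here are placeholders):
```python
# Sketch only: same model path and offload settings, but no LangChain in the loop.
from llama_cpp import Llama

llm_direct = Llama(model_path=model_loc, n_ctx=0, n_gpu_layers=-1, verbose=True)
out = llm_direct("<same rendered prompt the chain produces>", max_tokens=256)
print(out["choices"][0]["text"])
```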
### System Info
Wrinkles....
I am using Windows 10, inside a VMware virtual machine. Python, LangChain and llama-cpp-python are all up to date, as are the Nvidia drivers and CUDA toolkit (12.2).
As noted above - this only occurs when using langchain - not when doing the same workflow, without langchain.
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Multiple GPU - Degradation over time | https://api.github.com/repos/langchain-ai/langchain/issues/16027/comments | 3 | 2024-01-15T11:21:31Z | 2024-04-22T16:36:46Z | https://github.com/langchain-ai/langchain/issues/16027 | 2,081,826,200 | 16,027 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
https://python.langchain.com/docs/integrations/text_embedding/together
The import is incorrect in the documentation:
```python
from langchain_together.embeddings import TogetherEmbeddings
embeddings = TogetherEmbeddings(model="togethercomputer/m2-bert-80M-8k-retrieval")
```
### Idea or request for content:
update to:
```python
from langchain_together import TogetherEmbeddings <------------ this line changed
embeddings = TogetherEmbeddings(model="togethercomputer/m2-bert-80M-8k-retrieval")
``` | DOC: <Please write a comprehensive title after the 'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/16026/comments | 1 | 2024-01-15T10:46:12Z | 2024-04-22T16:39:44Z | https://github.com/langchain-ai/langchain/issues/16026 | 2,081,765,188 | 16,026 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code is in https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/sql_database.py starting at line 386:
```
with self._engine.begin() as connection: # type: Connection
if self._schema is not None:
if self.dialect == "snowflake":
connection.exec_driver_sql(
"ALTER SESSION SET search_path = %s", (self._schema,)
)
elif self.dialect == "bigquery":
connection.exec_driver_sql("SET @@dataset_id=?", (self._schema,))
elif self.dialect == "mssql":
pass
elif self.dialect == "trino":
connection.exec_driver_sql("USE ?", (self._schema,))
elif self.dialect == "duckdb":
# Unclear which parameterized argument syntax duckdb supports.
# The docs for the duckdb client say they support multiple,
# but `duckdb_engine` seemed to struggle with all of them:
# https://github.com/Mause/duckdb_engine/issues/796
connection.exec_driver_sql(f"SET search_path TO {self._schema}")
elif self.dialect == "oracle":
connection.exec_driver_sql(
f"ALTER SESSION SET CURRENT_SCHEMA = {self._schema}"
)
elif self.dialect == "sqlany":
# If anybody using Sybase SQL anywhere database then it should not
# go to else condition. It should be same as mssql.
pass
else: # postgresql and other compatible dialects
connection.exec_driver_sql("SET search_path TO %s", (self._schema,))
```
### Description
Can we change the code of line 412:
```
else: # postgresql and other compatible dialects
connection.exec_driver_sql("SET search_path TO %s", (self._schema,))
```
to
```
elif self.dialect == "postgresql":
connection.exec_driver_sql("SET search_path TO %s", (self._schema,))
else:
pass # Do nothing
```
It is, in my opinion, simply wrong to assume that any other DB will work like PostgreSQL. And it's not easy to suppress that statement for a particular dialect. If we need more `elif` branches, so be it.
### System Info
Python 3.10 - it's independent of the Python version or environment.
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | langchain_community: SQL Database - Don't assume "PostgreSQL" in the else | https://api.github.com/repos/langchain-ai/langchain/issues/16023/comments | 2 | 2024-01-15T10:22:57Z | 2024-04-22T16:39:41Z | https://github.com/langchain-ai/langchain/issues/16023 | 2,081,714,878 | 16,023 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
from langchain.embeddings import HuggingFaceEmbeddings
neo4j_db = Neo4jVector.from_documents(
documents,
HuggingFaceEmbeddings(),
url=os.environ["NEO4J_URI"],
username=os.environ["NEO4J_USERNAME"],
password=os.environ["NEO4J_PASSWORD"],
database="neo4j",
index_name="graphdb",
node_label="huggingface_dataset",
text_node_property="info",
embedding_node_property="vector",
create_id_index=True,
)
```
### Description
```
ERROR:neo4j.io:Failed to read from defunct connection IPv4Address(('ae3cd622.databases.neo4j.io', 7687)) (ResolvedIPv4Address(('34.126.64.110', 7687)))
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/neo4j/_sync/io/_common.py](https://localhost:8080/#) in _buffer_one_chunk(self)
50 # Determine the chunk size and skip noop
---> 51 receive_into_buffer(self._socket, self._buffer, 2)
52 chunk_size = self._buffer.pop_u16()
16 frames
[/usr/local/lib/python3.10/dist-packages/neo4j/_sync/io/_common.py](https://localhost:8080/#) in receive_into_buffer(sock, buffer, n_bytes)
327 if n == 0:
--> 328 raise OSError("No data")
329 buffer.used += n
OSError: No data
The above exception was the direct cause of the following exception:
SessionExpired Traceback (most recent call last)
[<ipython-input-43-6c577b606d00>](https://localhost:8080/#) in <cell line: 1>()
----> 1 neo4j_db = Neo4jVector.from_documents(
2 documents,
3 HuggingFaceEmbeddings(),
4 url=os.environ["NEO4J_URI"],
5 username=os.environ["NEO4J_USERNAME"],
[/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/neo4j_vector.py](https://localhost:8080/#) in from_documents(cls, documents, embedding, distance_strategy, ids, **kwargs)
815 metadatas = [d.metadata for d in documents]
816
--> 817 return cls.from_texts(
818 texts=texts,
819 embedding=embedding,
[/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/neo4j_vector.py](https://localhost:8080/#) in from_texts(cls, texts, embedding, metadatas, distance_strategy, ids, **kwargs)
679 embeddings = embedding.embed_documents(list(texts))
680
--> 681 return cls.__from(
682 texts,
683 embeddings,
[/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/neo4j_vector.py](https://localhost:8080/#) in __from(cls, texts, embeddings, embedding, metadatas, ids, create_id_index, search_type, **kwargs)
471 )
472
--> 473 store.add_embeddings(
474 texts=texts, embeddings=embeddings, metadatas=metadatas, ids=ids, **kwargs
475 )
[/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/neo4j_vector.py](https://localhost:8080/#) in add_embeddings(self, texts, embeddings, metadatas, ids, **kwargs)
520 }
521
--> 522 self.query(import_query, params=parameters)
523
524 return ids
[/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/neo4j_vector.py](https://localhost:8080/#) in query(self, query, params)
272 with self._driver.session(database=self._database) as session:
273 try:
--> 274 data = session.run(query, params)
275 return [r.data() for r in data]
276 except CypherSyntaxError as e:
[/usr/local/lib/python3.10/dist-packages/neo4j/_sync/work/session.py](https://localhost:8080/#) in run(self, query, parameters, **kwargs)
311 bookmarks = self._get_bookmarks()
312 parameters = dict(parameters or {}, **kwargs)
--> 313 self._auto_result._run(
314 query, parameters, self._config.database,
315 self._config.impersonated_user, self._config.default_access_mode,
[/usr/local/lib/python3.10/dist-packages/neo4j/_sync/work/result.py](https://localhost:8080/#) in _run(self, query, parameters, db, imp_user, access_mode, bookmarks, notifications_min_severity, notifications_disabled_categories)
179 self._pull()
180 self._connection.send_all()
--> 181 self._attach()
182
183 def _pull(self):
[/usr/local/lib/python3.10/dist-packages/neo4j/_sync/work/result.py](https://localhost:8080/#) in _attach(self)
296 if self._exhausted is False:
297 while self._attached is False:
--> 298 self._connection.fetch_message()
299
300 def _buffer(self, n=None):
[/usr/local/lib/python3.10/dist-packages/neo4j/_sync/io/_common.py](https://localhost:8080/#) in inner(*args, **kwargs)
176 def inner(*args, **kwargs):
177 try:
--> 178 func(*args, **kwargs)
179 except (Neo4jError, ServiceUnavailable, SessionExpired) as exc:
180 assert not asyncio.iscoroutinefunction(self.__on_error)
[/usr/local/lib/python3.10/dist-packages/neo4j/_sync/io/_bolt.py](https://localhost:8080/#) in fetch_message(self)
844
845 # Receive exactly one message
--> 846 tag, fields = self.inbox.pop(
847 hydration_hooks=self.responses[0].hydration_hooks
848 )
[/usr/local/lib/python3.10/dist-packages/neo4j/_sync/io/_common.py](https://localhost:8080/#) in pop(self, hydration_hooks)
70
71 def pop(self, hydration_hooks):
---> 72 self._buffer_one_chunk()
73 try:
74 size, tag = self._unpacker.unpack_structure_header()
[/usr/local/lib/python3.10/dist-packages/neo4j/_sync/io/_common.py](https://localhost:8080/#) in _buffer_one_chunk(self)
66 ) as error:
67 self._broken = True
---> 68 Util.callback(self.on_error, error)
69 raise
70
[/usr/local/lib/python3.10/dist-packages/neo4j/_async_compat/util.py](https://localhost:8080/#) in callback(cb, *args, **kwargs)
116 def callback(cb, *args, **kwargs):
117 if callable(cb):
--> 118 return cb(*args, **kwargs)
119
120 @staticmethod
[/usr/local/lib/python3.10/dist-packages/neo4j/_sync/io/_bolt.py](https://localhost:8080/#) in _set_defunct_read(self, error, silent)
870 self.unresolved_address, self.server_info.address
871 )
--> 872 self._set_defunct(message, error=error, silent=silent)
873
874 def _set_defunct_write(self, error=None, silent=False):
[/usr/local/lib/python3.10/dist-packages/neo4j/_sync/io/_bolt.py](https://localhost:8080/#) in _set_defunct(self, message, error, silent)
922 else:
923 if error:
--> 924 raise SessionExpired(message) from error
925 else:
926 raise SessionExpired(message)
SessionExpired: Failed to read from defunct connection IPv4Address(('ae3cd622.databases.neo4j.io', 7687)) (ResolvedIPv4Address(('34.126.64.110', 7687)))
```
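For what it's worth, defunct-connection / `SessionExpired` errors against Aura can show up when a single write transaction is very large. A hedged workaround sketch is to ingest in smaller batches rather than one `from_documents` call (the batch size below is an arbitrary assumption):
```python
# Sketch only: ingest in smaller batches to keep each Neo4j transaction small.
batch_size = 100
store = Neo4jVector.from_documents(
    documents[:batch_size], HuggingFaceEmbeddings(),
    url=os.environ["NEO4J_URI"], username=os.environ["NEO4J_USERNAME"],
    password=os.environ["NEO4J_PASSWORD"], index_name="graphdb",
)
for i in range(batch_size, len(documents), batch_size):
    store.add_documents(documents[i : i + batch_size])
```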
### System Info
```
Windows: 11
pip == 23.3.1
python == 3.10.10
langchain == 0.1.0
transformers == 4.36.2
sentence_transformers == 2.2.2
Neo4j == 5
```
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | SessionExpired: Failed to read from defunct connection IPv4Address(('ae3cd622.databases.neo4j.io', 7687)) (ResolvedIPv4Address(('34.126.64.110', 7687))) | https://api.github.com/repos/langchain-ai/langchain/issues/16021/comments | 1 | 2024-01-15T09:50:22Z | 2024-01-17T08:18:48Z | https://github.com/langchain-ai/langchain/issues/16021 | 2,081,650,878 | 16,021 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
```python
import os
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores.qdrant import Qdrant
from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, MessagesPlaceholder, \
HumanMessagePromptTemplate
from qdrant_client import QdrantClient
os.environ['OPENAI_API_KEY'] = "key"
client = QdrantClient(host="192.168.0.312", port=6333)
COLLECTION_NAME = "embed"
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
vectorstore = Qdrant(
client=client,
collection_name=COLLECTION_NAME,
embeddings=embeddings
)
prompt = ChatPromptTemplate(
messages=[
SystemMessagePromptTemplate.from_template(
"You are robot."
),
MessagesPlaceholder(variable_name="chat_history"),
HumanMessagePromptTemplate.from_template("{question}"),
]
)
llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
memory_token_limit = 100
retriever = vectorstore.as_retriever()
memory = ConversationTokenBufferMemory(
llm=llm,
prompt=prompt,
max_token_limit=int(memory_token_limit),
memory_key="chat_history",
return_messages=True,
)
qa = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=retriever,
memory=memory,
verbose=True,
)
chat_history = []
while True:
memory.load_memory_variables({"chat_history": chat_history})
question = input('提問:')
result = qa.run({'question': question, 'chat_history': chat_history})
print(result)
chat_history.append([f'User: {question}', f'Ai: {result}'])
print(chat_history)
if question.lower() == 'bye':
break
```
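One hedged observation on the snippet above: the yellow underline on `retriever=retriever` is usually just an IDE type-hint warning and does not by itself disable retrieval. Also, because a memory object is attached with `memory_key="chat_history"`, the chain fills in the history itself, so the call can be reduced to something like this sketch (the manual `chat_history` list then becomes optional):
```python
# Sketch only: with memory attached, only the question needs to be supplied.
result = qa.run({"question": question})
```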
### Idea or request for content:
During each round of questioning, OpenAI will retrieve and reference similar conversation histories from a vector database before providing a response | retriever=retriever has a yellow underline, unable to perform the retrieval of vector database function | https://api.github.com/repos/langchain-ai/langchain/issues/16014/comments | 2 | 2024-01-15T01:53:38Z | 2024-04-22T16:44:30Z | https://github.com/langchain-ai/langchain/issues/16014 | 2,081,104,570 | 16,014 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I'm basically running the same code as [Weaviate Hybrid Search](https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid):
```python
import os
import dotenv
import weaviate
from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
from langchain.schema import Document
dotenv.load_dotenv()
# using an existing weaviate database (definitely works with llama-index)
client = weaviate.Client(
url="http://34.170.94.90",
auth_client_secret=weaviate.AuthApiKey(os.environ["WEAVIATE_API_KEY"]),
additional_headers={
"X-Openai-Api-Key": os.getenv("OPENAI_API_KEY"),
},
)
retriever = WeaviateHybridSearchRetriever(
client=client,
index_name="LangChain",
text_key="text",
attributes=[],
create_schema_if_missing=True,
)
```
The error:
```
UnexpectedStatusCodeException Traceback (most recent call last)
Cell 4 line 1
----> [1] retriever = WeaviateHybridSearchRetriever(
[2] client=client,
[3] index_name="LangChain",
[4] text_key="text",
[5] attributes=[],
[6] create_schema_if_missing=True,
[7]
File /usr/local/lib/python3.9/site-packages/langchain_core/load/serializable.py:107, in Serializable.__init__(self, **kwargs)
106 def __init__(self, **kwargs: Any) -> None:
--> 107 super().__init__(**kwargs)
108 self._lc_kwargs = kwargs
File /usr/local/lib/python3.9/site-packages/pydantic/v1/main.py:339, in BaseModel.__init__(__pydantic_self__, **data)
333 """
334 Create a new model by parsing and validating input data from keyword arguments.
335
336 Raises ValidationError if the input data cannot be parsed to form a valid model.
337 """
338 # Uses something other than `self` the first arg to allow "self" as a settable attribute
--> 339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
...
814 raise RequestsConnectionError("Class may not have been created properly.") from conn_err
815 if response.status_code != 200:
--> 816 raise UnexpectedStatusCodeException("Create class", response)
UnexpectedStatusCodeException: Create class! Unexpected status code: 422, with response body: {'error': [{'message': 'vectorizer: no module with name "text2vec-openai" present'}]}.
```
I'm using:
```
python-dotenv==1.0.0
langchain==0.1.0
openai==1.7.2
langchain-openai==0.0.2.post1
weaviate-client==3.26.0
```
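For context, the 422 above appears to come from the Weaviate server itself: `create_schema_if_missing=True` makes the retriever create a class whose vectorizer is `text2vec-openai`, and the error says that module is not enabled on the target server (it has to be enabled in the Weaviate deployment). A quick hedged check of which modules the server actually exposes:
```python
# Sketch only: list the vectorizer modules available on this Weaviate instance.
meta = client.get_meta()
print(list(meta.get("modules", {}).keys()))
```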
### Idea or request for content:
It appears that the docs are out-of-date, at least for:
```
langchain==0.1.0
openai==1.7.2
weaviate-client==3.26.0
``` | WeaviateHybridSearchRetriever docs result in error | https://api.github.com/repos/langchain-ai/langchain/issues/16008/comments | 2 | 2024-01-14T20:14:55Z | 2024-04-21T17:18:52Z | https://github.com/langchain-ai/langchain/issues/16008 | 2,080,881,498 | 16,008 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
The [Weaviate docs](https://python.langchain.com/docs/integrations/vectorstores/weaviate) just show examples of using `Weaviate.from_documents()` or `Weaviate.from_texts()`, but do not describe how to use an existing database with embedded documents already stored in the database.
Moreover, the `index_name` and `text_key` arguments that are required when calling `Weaviate()` are not described anywhere in the [Weaviate docs](https://python.langchain.com/docs/integrations/vectorstores/weaviate), and `index_name` seems very misleading, given that it should be the name of the Weaviate *class*.
A naive attempt of:
```python
import os
import dotenv
import weaviate
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.vectorstores import Weaviate
from langchain_core.runnables import RunnablePassthrough
dotenv.load_dotenv()
llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")
client = weaviate.Client(
url="http://34.170.94.90",
auth_client_secret=weaviate.AuthApiKey(os.environ["WEAVIATE_API_KEY"])
)
db = Weaviate(client, 'my_class_name', 'text')
retriever = db.as_retriever()
chain = (
{"context": retriever, "question": RunnablePassthrough()}
| prompt
| llm
| output_parser
)
chain.invoke({"input" : "My particular query"})
```
Results in the error:
```
ValueError: Error during query: [{'locations': [{'column': 46, 'line': 1}], 'message': 'Syntax Error GraphQL request (1:46) Expected Name, found String "input"\n\n1: {Get{my_class_name(limit: 4 nearText: {concepts: [{"input": "My particular query"}]} ){text}}}\n ^\n', 'path': None}]
```
...which isn't too helpful for understanding the issue.
Note: I did actually use the correct class name instead of `my_class_name`.
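Two hedged notes on the attempt above. First, in this chain the retriever receives whatever is passed to `invoke()`, so the query should probably be a plain string rather than a dict (the GraphQL error shows the dict being embedded into `nearText`). Second, the existing schema can be inspected to pick `index_name` (the class name) and `text_key` (a text property on that class). Sketch:
```python
# Sketch only: pass the raw question string to the chain.
chain.invoke("My particular query")

# Sketch only: list classes and their properties to choose index_name / text_key.
for cls in client.schema.get()["classes"]:
    print(cls["class"], [p["name"] for p in cls["properties"]])
```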
### Idea or request for content:
It would be very helpful to include an example of using an existing Weaviate database, which included inspecting the DB schema to determine the values to use for `index_name` and `text_key`. | unclear docs for using existing Weaviate database | https://api.github.com/repos/langchain-ai/langchain/issues/16007/comments | 6 | 2024-01-14T19:34:08Z | 2024-05-22T15:02:33Z | https://github.com/langchain-ai/langchain/issues/16007 | 2,080,868,064 | 16,007 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
from langchain import hub
from langchain.agents import create_react_agent
from langchain.tools import Tool
from langchain_community.utilities.wikipedia import WikipediaAPIWrapper
from langchain_community.llms.openai import OpenAI
instructions = """You are an agent designed to write and execute python code to answer questions.
You have access to Wikipedia, which you can use to search informations.
Be smart and select the correct wikipedia for a specific search when several are available.
"""
base_prompt = hub.pull("langchain-ai/react-agent-template")
prompt = base_prompt.partial(instructions=instructions)
wiki = WikipediaAPIWrapper()
wiki_tool = Tool(
name="wikipedia",
description="A wikipedia tool that allow to search in it",
func=wiki.run,
)
agent = create_react_agent(OpenAI(name="gpt-3.5-turbo", temperature=0), tools=[wiki_tool], prompt=prompt)
print(agent.invoke({"input":"What is Nostradamus know for"}) )
```
### Description
I was trying to use the Wikipedia tool in a ReAct agent. I thought it would run smoothly, but a `KeyError: 'intermediate_steps'` was triggered.
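The `KeyError: 'intermediate_steps'` typically means the runnable returned by `create_react_agent` was invoked directly; it is normally wrapped in an `AgentExecutor`, which supplies `intermediate_steps` on every step. A sketch based on the code above:
```python
# Sketch: wrap the agent runnable in an AgentExecutor before invoking it.
from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(agent=agent, tools=[wiki_tool], verbose=True)
print(agent_executor.invoke({"input": "What is Nostradamus known for"}))
```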
### System Info
langchain 0.1.0
langchain-community 0.0.12
langchain-core 0.1.10
langchain-experimental 0.0.49
langchainhub 0.1.14
langsmith 0.0.80
wikipedia 1.4.0
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | KeyError: 'intermediate_steps' when using Wikipedia tool within the create_react_agent function | https://api.github.com/repos/langchain-ai/langchain/issues/16006/comments | 3 | 2024-01-14T18:20:13Z | 2024-01-16T20:35:08Z | https://github.com/langchain-ai/langchain/issues/16006 | 2,080,840,998 | 16,006 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
using python and FastAPI
```python
async def websocket_handler(websocket: WebSocket, chain, ws_name="Websocket"):
await websocket.accept()
try:
json_payload = await websocket.receive_json()
payload = Payload.parse_obj(json_payload)
async with atrace_as_chain_group(ws_name, run_id=uuid.uuid4(),
tags=[f"store_id:{payload.metadata.store_id}"]) as group_manager:
res = await chain.arun(dialog=payload.dialog, page_sku=payload.metadata.sku,
extracted_properties=payload.metadata.extracted_properties,
callbacks=group_manager, websocket=websocket)
await websocket.send_json(res.json())
await websocket.close()
except WebSocketDisconnect:
print(f"WebSocket connection - {ws_name} - Closed")
@app.websocket("/chat")
async def chat(websocket: WebSocket):
await websocket_handler(websocket, router_chain, ws_name="Chat")
@app.post("/quick_suggestions")
async def quick_suggestions(payload: Payload):
print(f"Quick Suggestions: {payload}")
async with atrace_as_chain_group("QuickSuggestions",
run_id=uuid.uuid4(),
tags=[f"store_id:{payload.metadata.store_id}"]) as group_manager:
return await quick_suggestions_chain.arun(dialog=payload.dialog,
page_sku=payload.metadata.sku,
callbacks=group_manager)
```
When calling the chat route, which uses a websocket connection, the request stays in a pending status forever no matter what.
When calling QuickSuggestions it behaves as expected.
I guess it's something related to the websocket connection.
Any ideas?
### Description
When using a websocket connection, the request stays in a pending status forever no matter what.
### System Info
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
langchain = ">=0.0.257"
openai = "*"
pydantic = "==1.10.11"
requests = "*"
python-dotenv = "*"
pandas = "*"
pydantic-sqlalchemy = "*"
sentence-transformers = "*"
google-cloud-logging = "*"
asyncpg = "*"
fastapi = "*"
starlette = "*"
uvicorn = "*"
sqlalchemy = {extras = ["asyncio"], version = "*"}
asyncio = "*"
elasticsearch = {extras = ["async"], version = "*"}
faiss-cpu = "*"
nltk = "*"
async-lru = "*"
posthog = "*"
websockets = "*"
spacy = "*"
[dev-packages]
scikit-learn = "*"
gradio = "*"
[requires]
python_version = "3.10"
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Status is Pending Forever while using a websocket connection (py & FastAPI) | https://api.github.com/repos/langchain-ai/langchain/issues/16004/comments | 1 | 2024-01-14T15:59:22Z | 2024-04-21T16:17:42Z | https://github.com/langchain-ai/langchain/issues/16004 | 2,080,785,194 | 16,004 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
None
### Description
* I am using the `AzureChatOpenAI` integration combined with the `SQLiteCache`.
* I noticed that the cache was working as intended within a single script invocation but not across multiple script invocations.
* I looked at what was being used as the cache key by inspecting what was being passed to the `SQLAlchemyCache.update()` method in the `llm_string` argument. Included is the `azure_ad_token` field used to initialise the `AzureChatOpenAI` class:
```
{"lc": 1, "type": "constructor", "id": ["langchain", "chat_models", "azure_openai", "AzureChatOpenAI"], "kwargs": {"azure_ad_token": "<removed>", "azure_endpoint": "<removed>", "openai_api_version": "2023-09-01-preview", "model": "gpt-4-0613", "temperature": 0.0, "max_tokens": 2000, "openai_api_type": "azure"}}---[('stop', **None)]**
```
* Our workflow involves rotating the `azure_ad_token` for every script invocation which is invalidating any cache matches.
The `azure_ad_token` field does not have any effect on the model output and is thus not needed and should not be included in the caching key.
### System Info
N/A
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | LLM cache includes `azure_ad_token` in caching key which may frequently change across identical requests. | https://api.github.com/repos/langchain-ai/langchain/issues/16000/comments | 2 | 2024-01-14T11:52:00Z | 2024-04-30T16:14:05Z | https://github.com/langchain-ai/langchain/issues/16000 | 2,080,696,735 | 16,000 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
Not applicable
### Description
The pyproject.toml files of the langchain partner libraries (langchain-openai etc.) are missing a license field. This prevents acceptance by some automated package scanning software.
### System Info
Not applicable
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Langchain partner libraries (langchain-openai etc.) missing license field pyproject.toml files | https://api.github.com/repos/langchain-ai/langchain/issues/15998/comments | 1 | 2024-01-14T11:27:54Z | 2024-04-21T16:28:52Z | https://github.com/langchain-ai/langchain/issues/15998 | 2,080,688,723 | 15,998 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The followiwng simple example will log the warning:
```python
from langchain_openai import AzureChatOpenAI
from langchain.schema import HumanMessage
chat_llm = AzureChatOpenAI()
chat_llm.invoke([HumanMessage(content='howdy!')])
```
### Description
I'm seeing a few warning logs saying that langchain-openai is using a deprecated method from pydantic: `dict`
```txt
.venv/Lib/site-packages/langchain_openai/chat_models/azure.py:199: PydanticDeprecatedSince20: The `dict` method is deprecated; use `model_dump` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
response = response.dict()
tests/test_zllm_completion.py::test_prompt
.venv/Lib/site-packages/langchain_openai/llms/base.py:344: PydanticDeprecatedSince20: The `dict` method is deprecated; use `model_dump` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
response = response.dict()
```
Here:
https://github.com/langchain-ai/langchain/blob/7b084b4cc768a8e5cddc29bfdfb6e5dc1003a5d9/libs/partners/openai/langchain_openai/chat_models/azure.py#L199
And here:
https://github.com/langchain-ai/langchain/blob/7b084b4cc768a8e5cddc29bfdfb6e5dc1003a5d9/libs/partners/openai/langchain_openai/llms/base.py#L344
(There might be other cases.)
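For reference, a hedged sketch of the kind of change that would address the warning, assuming the openai SDK response objects are pydantic v2 models (they expose `model_dump()`):
```python
# Sketch only, not the actual patch: replace the deprecated .dict() call.
response = response.model_dump()
```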
### System Info
langchain==0.1.0
Microsoft Windows 11 Pro
Python 3.11.3
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | The `dict` method is deprecated; use `model_dump` instead [pydantic] | https://api.github.com/repos/langchain-ai/langchain/issues/15997/comments | 1 | 2024-01-14T02:06:34Z | 2024-04-21T17:39:23Z | https://github.com/langchain-ai/langchain/issues/15997 | 2,080,548,669 | 15,997 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Support Cloudflare AI Gateway's endpoint `https://gateway.ai.cloudflare.com/v1/ACCOUNT_TAG/GATEWAY`
See https://developers.cloudflare.com/ai-gateway/providers/universal/
### Motivation
CF Workers AI (https://developers.cloudflare.com/workers-ai/get-started/rest-api/) is currently supported via the `CloudflareWorkersAI` class.
Cloudflare AI Gateway is a different product with a different API (`CloudflareWorkersAI` with a custom `endpoint_url` won't work), and is currently unsupported.
### Your contribution
Yes I can make a PR. | [Feature Request] Cloudflare AI Gateway | https://api.github.com/repos/langchain-ai/langchain/issues/15993/comments | 1 | 2024-01-13T22:08:54Z | 2024-04-20T16:29:38Z | https://github.com/langchain-ai/langchain/issues/15993 | 2,080,492,504 | 15,993 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add the 'id' attribute to the Document class. Or add a method to get id.
### Motivation
Adding an 'id' attribute makes data manipulation easier.
When using the Weaviate database, I need to manually set additional parameters to query the 'id', and it's not convenient to use.
### Your contribution
At the moment, there isn't one. I'd like to discuss this issue first. | Add the 'id' attribute to the Document class. | https://api.github.com/repos/langchain-ai/langchain/issues/15982/comments | 2 | 2024-01-13T06:21:14Z | 2024-04-20T16:29:32Z | https://github.com/langchain-ai/langchain/issues/15982 | 2,080,122,853 | 15,982 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
qa_chain = (
RunnablePassthrough.assign(context=( lambda x: format_docs(x["context"])))
| rag_prompt
| local_llm_api()
| StrOutputParser()
)
```
### Description
I am trying to call my local API endpoint in the QA chain.
The return type of the endpoint is a dict of lists.
The error I am getting is:
`Expected a Runnable, callable or dict. Instead got an unsupported type: <class 'list'>`
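In case it helps, a hedged sketch of one way to drop a plain HTTP endpoint into an LCEL pipeline is to wrap the call in a `RunnableLambda` that returns a single string; `LLM_API_URL` and the response shape below are assumptions, not the real endpoint:
```python
# Hedged sketch: wrap a plain HTTP endpoint in a RunnableLambda so it can sit in an
# LCEL pipeline. LLM_API_URL and the response shape are assumptions, not the real API.
import requests
from langchain_core.runnables import RunnableLambda

def call_local_llm(prompt_value) -> str:
    resp = requests.post(LLM_API_URL, json={"prompt": prompt_value.to_string()})
    data = resp.json()           # e.g. a dict of lists, as described above
    return data["choices"][0]    # pick the first generation; adjust to the real schema

qa_chain = (
    RunnablePassthrough.assign(context=lambda x: format_docs(x["context"]))
    | rag_prompt
    | RunnableLambda(call_local_llm)
    | StrOutputParser()
)
```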
### System Info
python version - 3.11
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | not able to use local llm endpoints in chains | https://api.github.com/repos/langchain-ai/langchain/issues/15981/comments | 2 | 2024-01-13T06:07:52Z | 2024-04-20T16:23:22Z | https://github.com/langchain-ai/langchain/issues/15981 | 2,080,118,556 | 15,981 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain_community.embeddings import HuggingFaceHubEmbeddings
url = "https://svvwc5yh51gt1pp3.us-east-1.aws.endpoints.huggingface.cloud"
text = "You do not need a weatherman to know which way the wind blows"
token = "<my_token>"
embeddings = HuggingFaceHubEmbeddings(
model = url,
huggingfacehub_api_token=token
)
qembed = embeddings.embed_query(text)
```
### Description
The code above should call my HuggingFace inference endpoint and generate an embedding. Instead, it throws:
```
Traceback (most recent call last):
File "/Users/jraby/python/hfhub/callhub2.py", line 29, in <module>
qembed = embeddings.embed_query(text)
File "/opt/homebrew/lib/python3.10/site-packages/langchain_community/embeddings/huggingface_hub.py", line 109, in embed_query
response = self.embed_documents([text])[0]
KeyError: 0
```
The "[0]" at the end of line 109 is incorrect. self.embed_documents returns JSON, -not a list. The first node in the JSON is 'embeddings', and it contains the list of embedding arrays.
I believe that this line:
`response = self.embed_documents([text])[0]`
should be replaced with this:
`response = self.embed_documents([text])['embeddings'][0]`
This resolves the problem for me.
### System Info
Platform = MacOS
python version = 3.10
pip3
huggingface_hub (0.20.2)
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Bug in langchain_community/embeddings/huggingface_hub.py HuggingFaceHubEmbeddings.embed_query | https://api.github.com/repos/langchain-ai/langchain/issues/15969/comments | 7 | 2024-01-12T21:50:53Z | 2024-04-23T17:11:59Z | https://github.com/langchain-ai/langchain/issues/15969 | 2,079,773,966 | 15,969 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I wasn't able to follow the [quick start guide](https://python.langchain.com/docs/get_started/quickstart) because of a dependency issue. Everything was going well until I tried to install Faiss.
### Idea or request for content:
My experience trying to follow the LangChain quick start guide:
# The background
I'm an experienced developer, but I'm new to Python and notebooks. I've been studying LangChain for a while now, and I'm trying to get started with the quick start guide. I'm using a Mac (Apple M2) and I'm trying to follow the guide using a Jupyter notebook in VS Code.
# The beginning
Starting with the first step, I'm trying to install the dependencies. I'm using the following command:
```
pyenv install 3.12.0 # install python version
pyenv local 3.12.0 # set python version
python -m venv venv # create virtual environment
source ./venv/bin/activate # activate virtual environment
pip3 install -r requirements.txt # install dependencies
```
Where the requirements.txt file contains the following:
```
jupyterlab
langchain==0.1.0
langchain-openai
beautifulsoup4
```
So far so good. Actually, I was so surprised and happy that everything worked so well, that I kind of knew something was going to go wrong. Anyways... I was able to install everything without any problems, and with that run the notebook from beginning to the beginning of "Retrieval Chain".
# The problem
When I try to [install Faiss](https://python.langchain.com/docs/get_started/quickstart#retrieval-chain):
```
pip install faiss-cpu
```
I get the following a [error](https://github.com/facebookresearch/faiss/issues/2868#issuecomment-1889724871) (which apparently is not only me who is getting it). Not only that, but it seems Faiss is not [supposed to be installed using `pip`](https://github.com/facebookresearch/faiss/issues/2991#issuecomment-1667640023).
After trying `conda` and a bunch of other things, none of which worked, I decided to try a different local vector db.
# The kafkaesque journey
So the next step was trying to install any local vector db that could work and hopefully be compatible with the quick start guide.
**Chroma**:
Added `chromadb` to the requirements.txt file and tried to install it. Got the following error:
```bash
...
ERROR: Cannot install -r requirements.txt (line 1), -r requirements.txt (line 3) and -r requirements.txt (line 5) because these package versions have conflicting dependencies.
The conflict is caused by:
chromadb 0.4.22 depends on onnxruntime>=1.14.1
chromadb 0.4.21 depends on onnxruntime>=1.14.1
chromadb 0.4.20 depends on onnxruntime>=1.14.1
chromadb 0.4.19 depends on onnxruntime>=1.14.1
chromadb 0.4.18 depends on onnxruntime>=1.14.1
chromadb 0.4.17 depends on onnxruntime>=1.14.1
chromadb 0.4.16 depends on onnxruntime>=1.14.1
chromadb 0.4.15 depends on onnxruntime>=1.14.1
chromadb 0.4.14 depends on onnxruntime>=1.14.1
chromadb 0.4.13 depends on onnxruntime>=1.14.1
chromadb 0.4.12 depends on onnxruntime>=1.14.1
chromadb 0.4.11 depends on onnxruntime>=1.14.1
chromadb 0.4.10 depends on onnxruntime>=1.14.1
chromadb 0.4.9 depends on onnxruntime>=1.14.1
chromadb 0.4.8 depends on onnxruntime>=1.14.1
chromadb 0.4.7 depends on onnxruntime>=1.14.1
chromadb 0.4.6 depends on onnxruntime>=1.14.1
chromadb 0.4.5 depends on onnxruntime>=1.14.1
chromadb 0.4.4 depends on onnxruntime>=1.14.1
chromadb 0.4.3 depends on onnxruntime>=1.14.1
chromadb 0.4.2 depends on onnxruntime>=1.14.1
chromadb 0.4.1 depends on onnxruntime>=1.14.1
chromadb 0.4.0 depends on onnxruntime>=1.14.1
chromadb 0.3.29 depends on onnxruntime>=1.14.1
chromadb 0.3.27 depends on onnxruntime>=1.14.1
chromadb 0.3.26 depends on onnxruntime>=1.14.1
chromadb 0.3.25 depends on onnxruntime>=1.14.1
langchain 0.1.0 depends on numpy<2 and >=1
langchain-openai 0.0.2 depends on numpy<2 and >=1
chromadb 0.3.2 depends on numpy~=1.21.6
langchain 0.1.0 depends on numpy<2 and >=1
langchain-openai 0.0.2 depends on numpy<2 and >=1
chromadb 0.3.1 depends on numpy~=1.21.6
langchain 0.1.0 depends on numpy<2 and >=1
langchain-openai 0.0.2 depends on numpy<2 and >=1
chromadb 0.3.0 depends on numpy~=1.21.6
langchain 0.1.0 depends on numpy<2 and >=1
langchain-openai 0.0.2 depends on numpy<2 and >=1
chromadb 0.2.0 depends on numpy~=1.21.6
langchain 0.1.0 depends on numpy<2 and >=1
langchain-openai 0.0.2 depends on numpy<2 and >=1
chromadb 0.1.0 depends on numpy~=1.21.6
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
```
As far as I can tell, and after reading a few posts, it seems that there is not much I can do about this. So I tried the next one.
**Milvus**:
Added `pymilvus` to the requirements.txt file and tried to install it. Got the following error:
```
distutils.errors.DistutilsExecError: command '/usr/bin/clang' failed with exit code 1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
...
...
File "/private/var/folders/tl/nm6kkwtd3nvdnxbqy2l70c5c0000gn/T/pip-install-e9hhztoq/grpcio_1166b67744e74d958d0c1fd853565676/src/python/grpcio/commands.py", line 293, in new_compile
return old_compile(
^^^^^^^^^^^^
File "/private/var/folders/tl/nm6kkwtd3nvdnxbqy2l70c5c0000gn/T/pip-build-env-3zwys1bz/overlay/lib/python3.12/site-packages/setuptools/_distutils/unixccompiler.py", line 187, in _compile
raise CompileError(msg)
distutils.errors.CompileError: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for grpcio
Failed to build grpcio
ERROR: Could not build wheels for grpcio, which is required to install pyproject.toml-based projects
```
I tried a few stackoverflow solutions, but none of them worked. Blocked again.
# The solution
I haven't found a solution yet, but I would like to share my experience so far. I will keep trying to find a solution, and if I do, I will update this post. However, I would like to emphasize how painful it has been trying to start using LangChain for someone who is new to Python and the whole ecosystem. I know it's a new field, and in a way I felt like configuring the video driver on my first Linux distro in the '90s. Sharing this experience is not meant to be a rant, but to help the community improve the experience for new users. | DOC: Quickstart | https://api.github.com/repos/langchain-ai/langchain/issues/15965/comments | 6 | 2024-01-12T19:27:42Z | 2024-05-17T17:44:32Z | https://github.com/langchain-ai/langchain/issues/15965 | 2,079,554,933 | 15,965
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
qa_chain = (
RunnablePassthrough.assign(context=( lambda x: format_docs(x["context"])))
| rag_prompt
| custom_model_api(query)
| StrOutputParser()
)
```
### Description
I am getting the error below:
`Expected a Runnable, callable or dict. Instead got an unsupported type: <class 'list'>`
### System Info
python version 3.10
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Runnable error | https://api.github.com/repos/langchain-ai/langchain/issues/15960/comments | 3 | 2024-01-12T17:35:06Z | 2024-04-19T16:20:00Z | https://github.com/langchain-ai/langchain/issues/15960 | 2,079,356,436 | 15,960 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
multiquery_llm_retriever = MultiQueryRetriever.from_llm(
retriever = vectordb.as_retriever(),
llm=llm,
include_original = top_k_params["mq_include_original"]
)
docs = multiquery_llm_retriever.get_relevant_documents(query=user_input)
```
Response of the underlying generate_queries function:
{'question': 'The actual question', 'text': LineList(lines=['Alternative Question 1: First generated question?', '', 'Alternative Question 2: Second generated question?', '', 'Alternative Question 3: Third generated question?'])}
Notice that in between are empty strings: ''
They later cause an error in similarity search since empty strings cannot be embedded. Consequently, the retrieve_documents function fails.
### Description
I am trying to use the MultiQueryRetriever. However, the generate_queries function returns a list that contains empty strings in between the 3 (as in the default prompt) generated questions.
This later causes retrieve_documents to fail, since the similarity search throws an exception.
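A hedged workaround sketch until this is fixed upstream: drop blank lines from the generated queries by subclassing `MultiQueryRetriever` (the `generate_queries` signature below is based on the current source, so treat it as an assumption):
```python
# Sketch only: filter out empty strings before they reach the vector store.
class NonEmptyMultiQueryRetriever(MultiQueryRetriever):
    def generate_queries(self, question, run_manager):
        return [q for q in super().generate_queries(question, run_manager) if q.strip()]

# from_llm uses cls, so the subclass can be built exactly like the retriever above.
```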
### System Info
langchain==0.0.348
langchain-community==0.0.2
langchain-core==0.1.0
Python 3.11.6
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | generate_queries results in list with empty inbetween generated questions. | https://api.github.com/repos/langchain-ai/langchain/issues/15959/comments | 2 | 2024-01-12T16:46:16Z | 2024-07-19T17:13:13Z | https://github.com/langchain-ai/langchain/issues/15959 | 2,079,265,947 | 15,959 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
prompt_template = ("Human: Say hello to the user using the given username."
"Username: {username}")
chain = LLMChain(
llm=llm,
prompt=PromptTemplate(template=prompt_template, input_variables=[]),
verbose=True,
)
def execute(args:dict):
with callbacks.collect_runs() as cb:
try:
output = chain.invoke(args)
print(f"output: {output}")
finally:
print(f"traced_runs: {cb.traced_runs}")
# executes successfully
execute({"username": "John"})
# failed with validation error, but traced runs is empty, while it should have one run with exception details
execute({"user_name": "John"})
```
### Description
The callback manager doesn't get invoked if there's an input or output validation error.
For logging purposes the callback manager should be invoked on any chain-related error, even if it's a validation error.
The related logic lives in `langchain/chains/base.py` in the `acall` and `__call__` functions, where `callback_manager` and `run_manager` get initialized after input validation. To fix this, `run_manager` should be initialized first, and then input preparation, execution, and output preparation should be wrapped in a try/except that updates the `run_manager` accordingly.
### System Info
langchain==0.0.354
langchain-community==0.0.11
langchain-core==0.1.9
Python 3.10.13
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Input or output validation errors are not reported to run manager | https://api.github.com/repos/langchain-ai/langchain/issues/15954/comments | 2 | 2024-01-12T15:10:12Z | 2024-02-15T13:21:46Z | https://github.com/langchain-ai/langchain/issues/15954 | 2,079,064,234 | 15,954 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
I run this in google colab and get error:
```
!pip3 install langchain==0.1.0
!CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip3 install llama-cpp-python==0.2.28
!pip3 install pypdf==3.17.4
!pip3 install gdown==4.7.0
!pip3 install sentence-transformers==2.2.2
!pip3 install chromadb==0.4.18
!wget https://huggingface.co/TheBloke/neural-chat-7B-v3-3-GGUF/resolve/main/neural-chat-7b-v3-3.Q4_K_M.gguf
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import LlamaCpp
n_gpu_layers = 2 # Metal set to 1 is enough.
n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
# Make sure the model path is correct for your system!
llm = LlamaCpp(
model_path='/content/neural-chat-7b-v3-3.Q4_K_M.gguf',
n_gpu_layers=n_gpu_layers,
n_batch=n_batch,
n_ctx=4096,
f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls
callback_manager=callback_manager,
verbose=False,
)
```
### Description
I am trying using LlamaCpp with `from langchain.llms import LlamaCpp`.
I run the previous code and I get this error:
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
[<ipython-input-4-a01cc1428fc3>](https://localhost:8080/#) in <cell line: 6>()
4
5 # Make sure the model path is correct for your system!
----> 6 llm = LlamaCpp(
7 model_path='/content/Merak-7B-v4-model-q4_k_m.gguf',
8 n_gpu_layers=n_gpu_layers,
1 frames
/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for LlamaCpp
__root__
Could not load Llama model from path: '/content/neural-chat-7b-v3-3.Q4_K_M.gguf'. Received error fileno (type=value_error)
```
instead of it successfully loaded.
### System Info
```
!apt-get update
!apt-get install neofetch -y
!neofetch
```
This is the result
```
root@182e8520dc9b
-----------------
OS: Ubuntu 22.04.3 LTS x86_64
Host: Google Compute Engine
Kernel: 6.1.58+
Uptime: 14 mins
Packages: 1301 (dpkg)
Shell: bash 5.1.16
Terminal: jupyter-noteboo
CPU: Intel Xeon (2) @ 2.299GHz
GPU: NVIDIA Tesla T4
Memory: 827MiB / 12978MiB
```
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [x] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | 1 validation error for LlamaCpp __root__ Could not load Llama model from path | https://api.github.com/repos/langchain-ai/langchain/issues/15950/comments | 4 | 2024-01-12T14:18:25Z | 2024-07-30T16:05:46Z | https://github.com/langchain-ai/langchain/issues/15950 | 2,078,943,521 | 15,950 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
I want to create an OpenAI assistant using an existing ID, but I'm encountering an error.
1. assistant = OpenAIAssistantRunnable(assistant_id='assistant_id', as_agent=True)
`openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable`
2. assistant = OpenAIAssistantRunnable(assistant_id='assistant_id', as_agent=True, client=openai.OpenAI(api_key=self.model.openai_api_key))
`pydantic.v1.errors.ConfigError: field "client" not yet prepared, so the type is still a ForwardRef. You might need to call OpenAIAssistantRunnable.update_forward_refs()`
In both cases, an error is encountered. The only successful scenario is when the openai_api_key is set in the environment, but my requirement is for the user to provide it. Has anyone else faced a similar issue? Any assistance would be appreciated.
### Description
OpenAIAssistantRunnable raise exception when pass client. There should be a way to pass the key from outside
### System Info
langchain==0.1.0
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | OpenAIAssistantRunnable raise exception when pass client | https://api.github.com/repos/langchain-ai/langchain/issues/15948/comments | 2 | 2024-01-12T14:08:14Z | 2024-01-25T23:26:52Z | https://github.com/langchain-ai/langchain/issues/15948 | 2,078,924,254 | 15,948 |
[
"hwchase17",
"langchain"
]
| qa = RetrievalQA.from_chain_type(llm=llm,
chain_type='stuff',
retriever=retriever,
return_source_documents=True,
chain_type_kwargs={'prompt': prompt},
memory=memory,
)
print(qa)
Below is the result when I print qa (I am implementing RAG):
memory=ConversationBufferMemory(output_key='result', input_key='query', return_messages=True, memory_key='chat_history') combine_documents_chain=StuffDocumentsChain(llm_chain=LLMChain(prompt=PromptTemplate(input_variables=['context', 'question'], template="Use the following pieces of information to answer the user's question.\n If you don't know the answer, just say that you don't know, don't try to make up an answer.\n Context: {context}\n Question: {question}\n Only return the helpful answer below and nothing else.\n Helpful answer:\n "), llm=ChatOpenAI(client=<class 'openai.api_resources.chat_completion.ChatCompletion'>, model_name='gpt-3.5-turbo-16k', temperature=0.1, openai_api_key='sk-??', openai_proxy='')), document_variable_name='context') return_source_documents=True retriever=VectorStoreRetriever(tags=['Chroma', 'OpenAIEmbeddings'], vectorstore=<langchain_community.vectorstores.chroma.Chroma object at 0x7fbc6a8bfd60>, search_kwargs={'k': 2}) | is Retreival QA returning True result when I am implementing RAG | https://api.github.com/repos/langchain-ai/langchain/issues/15945/comments | 1 | 2024-01-12T11:52:06Z | 2024-04-19T16:20:24Z | https://github.com/langchain-ai/langchain/issues/15945 | 2,078,677,181 | 15,945 |
[
"hwchase17",
"langchain"
]
| ### Discussed in https://github.com/langchain-ai/langchain/discussions/15877
<div type='discussions-op-text'>
<sup>Originally posted by **arthurbrenno** January 11, 2024</sup>
I'm trying to make a **generalized method** that **works with any vector database**, so I can **change the implementation** later. The main focus is simply ingesting documents inside a vectorstore.
My main issue always has been **_data persistency_** and the _flexibility_ to change the vector database implementation, because I like to explore different alternatives and there are many of them. That's why I'm (trying) to make a method like this.
Here's a snippet of my implementation:
```py
from langchain_community.document_loaders.base import BaseLoader
from langchain_core.documents.base import Document
from langchain_core.embeddings import Embeddings
from langchain.vectorstores import VectorStore
from conversai.core.services import BaseService
from langchain.text_splitter import TextSplitter
from typing import List, Type, Dict
from typing import Any
class IngestionService(BaseService):
"""Class that handles the ingestion of documents inside a vector store."""
async def execute(url: str,
collection: str,
loader_class: Type[BaseLoader],
splitter: TextSplitter,
embeddings: Embeddings,
vector_store_cls: Type[VectorStore],
**vector_store_kwargs) -> Dict[str, Any]:
"""Ingests a document inside the desired vector store.
Args:
url (str): Url of the document to ingest.
collection (str): Collection of the vector store to ingest the document.
loader (BaseLoader): The loader of the document.
splitter (TextSplitter): The splitter of the document.
embeddings (Embeddings): The embedding model to apply on the splitted documents.
vector_store (VectorStore): The vector store to store the document.
Returns:
bool: True on success, False on failure.
"""
# Instantiating the loader
loader: BaseLoader = loader_class(url)
# Loading and splitting the document
documents: List[Document] = loader.load_and_split(text_splitter=splitter)
# Instantiating the vector store (assuming every implementation should support collection_name and embedding_func.)
vector_store: VectorStore = vector_store_cls(collection_name=collection,
embedding_func=embeddings,
**vector_store_kwargs)
# ids of the created texts
texts_ids: List[str] = await vector_store.aadd_documents(documents)
return {"success": True, "ingested_texts_ids": texts_ids}
```
The problem is that every vector database seems to have a different way of persisting itself (as far as I understand). For example:
Chroma:
```py
db = Chroma(persist_directory=...)
```
(or, using their separate client within Docker:
```py
db = Chroma(client=client)
```
FAISS:
```py
# FAISS has
db.save_local("faiss_index")
```
I need to ingest documents **at runtime.** This is important. I also need to group them inside `collections` for better searching later and group similar documents. Creating collections is one of my future implementations.
My current vector store implementation is `Milvus`. And I think I'm generalising too much when I assume every vector db will have a "collection_name" and "embedding_func", like written in:
```py
vector_store: VectorStore = vector_store_cls(collection_name=collection,
embedding_func=embeddings,
**vector_store_kwargs)
```
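One direction I'm considering is a small registry of per-store persistence hooks so the ingestion service itself stays store-agnostic. This is purely a sketch with my own names; none of these are LangChain APIs:
```py
from typing import Callable, Dict, Type

# Sketch (my own naming): map each vector store class to a persistence callback.
PERSISTENCE_HOOKS: Dict[Type, Callable[[object], None]] = {
    # Chroma persists through its persist_directory / client configuration;
    # FAISS would need something like: lambda store: store.save_local("faiss_index"),
}

def persist(vector_store: object) -> None:
    hook = PERSISTENCE_HOOKS.get(type(vector_store))
    if hook is not None:
        hook(vector_store)
```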
Any support on making my method better is appreciated. I'm also reading the documentation and trying to solve this for myself, but I think community help would also be a great starting point. One of my suggestions (I've seen some people talking about this) is making the documentation clearer about data persistence. The examples only show short-term usage of the vectorstores, which is not the case in many production environments</div> | Making a generalized method to ingest documents in any vector database. | https://api.github.com/repos/langchain-ai/langchain/issues/15944/comments | 1 | 2024-01-12T11:22:20Z | 2024-07-01T16:04:44Z | https://github.com/langchain-ai/langchain/issues/15944 | 2,078,621,814 | 15,944 |
[
"hwchase17",
"langchain"
]
| <code>
def set_custom_prompt():
prompt_template = """Use the following pieces of information to answer the user's question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Context: {context}
Question: {question}
Only return the helpful answer below and nothing else.
Helpful answer:
"""
prompt = PromptTemplate(template=prompt_template, input_variables=["context", "question"])
return prompt
def retreival_qa_chain(chroma_db_path):
prompt = set_custom_prompt()
embedding = OpenAIEmbeddings()
vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
llm = ChatOpenAI(model="gpt-3.5-turbo-16k",temperature=0.1)
retriever = vectordb.as_retriever(search_kwargs={"k": 2})
memory = ConversationTokenBufferMemory(llm=llm, memory_key="chat_history", return_messages=True, input_key='question', output_key='answer')
qa = RetrievalQA.from_chain_type(llm=llm,
chain_type='stuff',
retriever=retriever,
return_source_documents=True,
chain_type_kwargs={'prompt': prompt},
memory=memory,
)
# qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever,return_source_documents=True)
return qa
def create_global_qa_chain():
chroma_db_path = "chroma-databases"
folders = os.listdir(chroma_db_path)
qa_chains = {}
for index, folder in enumerate(folders):
folder_path = f"{chroma_db_path}/{folder}"
project = retreival_qa_chain(folder_path)
qa_chains[folder] = project
return qa_chains
def chat_langchain(new_project_qa, query):
bot_response = new_project_qa({"query": query})
result = bot_response['result']
return result
qa_chains = create_global_qa_chain()
</code>
I am getting the error below:
self.memory.save_context(inputs, outputs)
File "/home/hs/env/lib/python3.8/site-packages/langchain/memory/token_buffer.py", line 51, in save_context
super().save_context(inputs, outputs)
File "/home/hs/env/lib/python3.8/site-packages/langchain/memory/chat_memory.py", line 37, in save_context
input_str, output_str = self._get_input_output(inputs, outputs)
File "/home/hs/env/lib/python3.8/site-packages/langchain/memory/chat_memory.py", line 33, in _get_input_output
return inputs[prompt_input_key], outputs[output_key]
KeyError: 'question' | getting error while implementing memory | https://api.github.com/repos/langchain-ai/langchain/issues/15938/comments | 2 | 2024-01-12T09:22:41Z | 2024-04-23T17:00:47Z | https://github.com/langchain-ai/langchain/issues/15938 | 2,078,415,718 | 15,938 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
In langchain_community/vectorstores/jaguar.py line 196:
values_comma = "'" + "','".join(vvec) + "'"
If any element in vvec is not a string, a runtime error is thrown:
TypeError: sequence item 0: expected str instance, int found
Here vvec is the list of values from the metadata dictionary.
### Description
* Some document loaders, for instance PyPDFDirectoryLoader, add an integer page number to the metadata. In such cases, the list of values contains integer elements, and the statement "','".join(vvec) raises an error.
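A minimal fix along the lines the issue suggests (sketch only): coerce the metadata values to strings before joining.
```python
# Sketch: integer fields such as page numbers no longer break the join.
values_comma = "'" + "','".join(str(v) for v in vvec) + "'"
```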
### System Info
langchain==0.1.0
langchain-community==0.0.11
langchain-core==0.1.10
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | jaguar vector store reports error when adding integer metadata field | https://api.github.com/repos/langchain-ai/langchain/issues/15937/comments | 1 | 2024-01-12T09:01:32Z | 2024-04-19T16:30:33Z | https://github.com/langchain-ai/langchain/issues/15937 | 2,078,379,367 | 15,937 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
``` if non_empty_ids:
metadatas = [metadatas[idx] for idx in non_empty_ids]
images_with_metadatas = [uris[idx] for idx in non_empty_ids]
embeddings_with_metadatas = (
[embeddings[idx] for idx in non_empty_ids] if embeddings else None
)
ids_with_metadata = [ids[idx] for idx in non_empty_ids]
try:
self._collection.upsert(
metadatas=metadatas,
embeddings=embeddings_with_metadatas,
documents=images_with_metadatas,
ids=ids_with_metadata,
```
should be
``` if non_empty_ids:
metadatas = [metadatas[idx] for idx in non_empty_ids]
images_with_metadatas = [b64_texts[idx] for idx in non_empty_ids]
embeddings_with_metadatas = (
[embeddings[idx] for idx in non_empty_ids] if embeddings else None
)
ids_with_metadata = [ids[idx] for idx in non_empty_ids]
try:
self._collection.upsert(
metadatas=metadatas,
embeddings=embeddings_with_metadatas,
documents=images_with_metadatas,
ids=ids_with_metadata,
```
### Description
When inserting images with metadata (Chroma), the 'add_images' function does not insert the b64 strings.
### System Info
Python = 3.10
Langchain = 0.1.0
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | b64 strings not inserted for Chroma 'add_images' method when metadata is present | https://api.github.com/repos/langchain-ai/langchain/issues/15935/comments | 1 | 2024-01-12T08:29:07Z | 2024-04-19T16:12:50Z | https://github.com/langchain-ai/langchain/issues/15935 | 2,078,326,124 | 15,935 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
here is my code:
import sqlalchemy as sal
import os, sys, openai
import pandas as pd
import constants
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain_experimental.sql import SQLDatabaseChain
from sqlalchemy import create_engine
from langchain.chat_models import ChatOpenAI
from typing import List, Optional
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
PlanAndExecute,
load_agent_executor,
load_chat_planner,
)
from langchain.sql_database import SQLDatabase
from langchain.text_splitter import TokenTextSplitter
from langchain.tools import BaseTool
from langchain.tools.sql_database.tool import QuerySQLDataBaseTool
from secret_key import openapi_key
from langchain.prompts import PromptTemplate
os.environ['OPENAI_API_KEY'] = openapi_key
from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"
engine = create_engine(connection_uri)
def chat(question, sql_format):
greetings = ["hi", "hello", "hey"]
if question.lower() in greetings:
return "Hello! How can I assist you today?"
model_name="gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
answer = None
if sql_format==False:
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
answer = db_chain.run(question)
else:
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True , return_sql =True)
sql_query = db_chain.run(question)
print("SQLQuery: "+str(sql_query))
# result = engine.execute(sql_query)
result_df = pd.read_sql(sql_query, engine)
if result_df.empty:
return "No results found"
answer = result_df.to_dict()
PROMPT = """
Given an input question, first create a syntactically correct mssql query to run,
then look at the results of the query and return the answer. Always return the answer in sentence form.
The question: {db_chain.run}
"""
prompt_template = """
Use the following pieces of context to answer the question at the end. If you don't know the answer,
just say that you don't know, don't try to make up an answer. Always return the answer in sentence form.
{context}
Question: {question}
Helpful Answer:"""
def handle_greetings(context, question):
greetings = ["hi", "hello", "hey"]
if any(greeting in question.lower() for greeting in greetings):
return "Hello! How can I assist you today?"
else:
return None
PROMPT = PromptTemplate(
template=prompt_template,
input_variables=["context", "question"],
preprocessor=handle_greetings
)
def split_text(text, chunk_size, chunk_overlap=0):
text_splitter = TokenTextSplitter(
chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
yield from text_splitter.split_text(text)
class QuerySQLDatabaseTool2(QuerySQLDataBaseTool):
def _run(
self,
query: str,
run_manager: Optional[CallbackManagerForToolRun] = None,
) -> str:
result = self.db.run_no_throw(query)
return next(split_text(result, chunk_size=14_000))
class SQLDatabaseToolkit2(SQLDatabaseToolkit):
def get_tools(self) -> List[BaseTool]:
tools = super().get_tools()
original_query_tool_description = tools[0].description
new_query_tool = QuerySQLDatabaseTool2(
db=self.db, description=original_query_tool_description
)
tools[0] = new_query_tool
return tools
# return db_chain.run(question)
return answer
def chain1(question):
text = chat(question,False)
return text
def chain2(question):
query = chat(question,True)
return query
# answer=chat("what is Ganesh D desigation")
# print(answer)
# Define a function named chain1 that uses the chat function without SQL format
def chain1(question):
text = chat(question, False)
return text
# Define a function named chain2 that uses the chat function with SQL format
def chain2(question):
query = chat(question, True)
return query
chain1("give me aadhaar number of all Jayachitra R") #, give answer in sentence form")
print(chain1)
### Description
How can I add instructions to the LLM to personalize the output, for example by modifying the prompt here or in some other way telling the model to always answer in sentence form (or follow any other instruction)? The full code is shown above under Example Code.
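One way to do this (a sketch; `SQLDatabaseChain.from_llm` accepts a custom prompt, and the template text below is illustrative while keeping the Question/SQLQuery/SQLResult/Answer format the chain expects):
```python
from langchain.prompts import PromptTemplate
from langchain_experimental.sql import SQLDatabaseChain

sentence_prompt = PromptTemplate(
    input_variables=["input", "table_info", "top_k"],
    template=(
        "Given an input question, create a syntactically correct mssql query, "
        "run it, and always phrase the final Answer as a complete sentence.\n"
        "Use the following format:\n\n"
        "Question: Question here\n"
        "SQLQuery: SQL query to run\n"
        "SQLResult: Result of the SQLQuery\n"
        "Answer: Final answer here\n\n"
        "Return at most {top_k} rows unless asked otherwise.\n"
        "Only use the following tables:\n{table_info}\n\n"
        "Question: {input}"
    ),
)
db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=sentence_prompt, verbose=True)
```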
### System Info
python = 3.11
langchain= latest
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | adding special instruction to llm model. | https://api.github.com/repos/langchain-ai/langchain/issues/15934/comments | 3 | 2024-01-12T06:43:17Z | 2024-04-19T16:20:24Z | https://github.com/langchain-ai/langchain/issues/15934 | 2,078,174,147 | 15,934 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The tool I'm talking about is:
from langchain.tools import ShellTool
The Shell tool is very versatile, but when integrating it into applications as OpenAI functions, developers have no idea what command is being executed by the ShellTool. All one can see is:

This shouldn't be the case, right? Developers need to know what command is about to be executed by the ShellTool and whether it is the right command to execute; if not, they can improve the prompt so the LLM generates a new command and execute that instead.
Summarising my feature request:
1. There's no visibility about what command was executed.
2. There's no mechanism to prevent a command to be executed using ShellTool, like a y/n human input which can be accepted from user to proceed with executing the command.
I would like to implement this in a PR if the project maintainer feels this feature adds value.
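A rough sketch of what this could look like (the subclass below is my own illustration, not an existing LangChain API):
```python
from langchain.tools import ShellTool

class ConfirmedShellTool(ShellTool):
    """Print the generated command and require a y/n confirmation before running it."""

    def _run(self, commands, **kwargs):
        print(f"About to execute: {commands}")
        if input("Proceed? [y/N] ").strip().lower() != "y":
            return "Command execution cancelled by user."
        return super()._run(commands, **kwargs)
```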
### Motivation
I was using ShellTool as an OpenAI Function in Autogen, which is a multi-agent conversation framework, where one of my AI agents is a programmer with access to the ShellTool. When it executes any command using ShellTool, we have no visibility into what command is being executed and are left at the mercy of the LLM, trusting that the generated command is what we need.
### Your contribution
I have raised a [PR](https://github.com/langchain-ai/langchain/pull/15932), would like you to review and provide feedback on it. Or if there's some other way around to implement the feature I'm proposing, please let me know. | Adding more capabilities to Langchain Shell Tool | https://api.github.com/repos/langchain-ai/langchain/issues/15931/comments | 5 | 2024-01-12T05:19:27Z | 2024-02-05T05:40:21Z | https://github.com/langchain-ai/langchain/issues/15931 | 2,078,090,352 | 15,931 |
[
"hwchase17",
"langchain"
]
| ### Discussed in https://github.com/langchain-ai/langchain/discussions/15928
<div type='discussions-op-text'>
<sup>Originally posted by **BalajiPolisetty2207** January 11, 2024</sup>
I am using LangChain to query my PostgreSQL database with natural language.
With SQL Server I am able to ask questions and get answers successfully.
I am using an Azure OpenAI key and endpoint with a GPT-4 model and deployment.
Can someone help me figure out what is missing here to query PostgreSQL, and how I can set the schema before querying?
Below is the code
# Importing necessary libraries
from sqlalchemy import create_engine, text
from langchain.chat_models import AzureChatOpenAI
from langchain.prompts.chat import ChatPromptTemplate
from langchain.agents import AgentType, create_sql_agent
from langchain.sql_database import SQLDatabase
from langchain.agents.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
import os
db_url = 'postgresql://asdas:[email protected]:5432/asdad' # PostgreSQL connection
openai_api_key = 'asdsadad' # Azure OpenAI key
openai_model = 'eqwewqe-4' # OpenAI model, like GPT-4
openai_deployment = 'eqwewqe' # OpenAI deployment name
os.environ["OPENAI_API_KEY"] = 'xxxx'
os.environ["OPENAI_API_BASE"]="https://asdasd.qweqw.azure.com/"
os.environ["OPENAI_API_TYPE"]="azure"
os.environ["OPENAI_API_VERSION"]="2023-07-01-preview"
# Creating the database engine
engine = create_engine(db_url)
# Setting up the Azure OpenAI for LangChain
llm = AzureChatOpenAI(
model=openai_model,
deployment_name=openai_deployment,
temperature=0
)
# Setting up the prompt template
final_prompt = ChatPromptTemplate.from_messages(
[
("system",
"""
You are an AI assistant expert in querying the PostgreSQL database.
you are going to respond the question on the students schema.
"""
),
("user", "{question}\n ai: "),
]
)
# Creating the SQL Database object for LangChain
db = SQLDatabase(engine)
# Setting up the SQL Toolkit for LangChain
sql_toolkit = SQLDatabaseToolkit(db=db, llm=llm)
sql_toolkit.get_tools()
# Creating the SQL Agent for LangChain
sqldb_agent = create_sql_agent(
llm=llm,
toolkit=sql_toolkit,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
handle_parsing_errors=True,
)
# Example usage of the agent to answer a question
question = "get me list of tables in the students schema ?"
final_prompt.messages[-1] = ("user", f"{question}\n ai: ")
response = sqldb_agent.run(final_prompt)
print(response)
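One thing that may be missing is pointing the `SQLDatabase` wrapper at the right schema. A sketch, assuming the `schema` argument behaves as expected in the installed version:
```python
# Sketch: expose only the "students" schema to the agent.
db = SQLDatabase(engine, schema="students")
```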
</div> | i am working on SQL Langchain to ask natural english questions and get answers from postgress sql but not getting any answers | https://api.github.com/repos/langchain-ai/langchain/issues/15929/comments | 2 | 2024-01-12T03:15:11Z | 2024-01-12T03:38:11Z | https://github.com/langchain-ai/langchain/issues/15929 | 2,077,988,964 | 15,929 |
[
"hwchase17",
"langchain"
]
| ### Feature request
When there are multiple GPUs available, the Ollama API provides the main_gpu option to specify which GPU to use as the main one. Please modify Langchain's ChatOllama to also include this feature.

### Motivation
When running multiple tasks simultaneously on a server, it is necessary to designate specific processes for Ollama.
### Your contribution
None | Please modify ChatOllama to allow the option to specify main_gpu | https://api.github.com/repos/langchain-ai/langchain/issues/15924/comments | 2 | 2024-01-12T01:38:21Z | 2024-01-12T02:59:13Z | https://github.com/langchain-ai/langchain/issues/15924 | 2,077,910,530 | 15,924 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
With langchain version == 0.1.0
The stop param of the VLLM class does not work.
For instance, this code has no effect with respect to the stop word:
```
model = VLLM(
stop=["stop_word"],
model=model_name,
trust_remote_code=True, # mandatory for hf models
max_new_tokens=512,
top_k=10,
top_p=0.95,
temperature=0.2,
vllm_kwargs=vllm_kwargs,
)
```
But this code works:
```
prompt = PromptTemplate(
template=template, input_variables=["system_message", "question"]
)
llm_chain = LLMChain(prompt=prompt, llm=model)
llm_chain.run(
{"system_message": system_message, "question": question, "stop":["stop_word"]}
)
```
### Idea or request for content:
Clear the documentation or it is a bug. | DOC: stop params does not work with langchain_community.llms import VLLM but work in LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/15921/comments | 1 | 2024-01-12T00:02:57Z | 2024-04-19T16:19:52Z | https://github.com/langchain-ai/langchain/issues/15921 | 2,077,798,404 | 15,921 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
metadata_field_info = [
AttributeInfo(name="source",description="The document this chunk is from.",type="string"),
AttributeInfo(name="origin",description="The origin the document came from. Comes from either scraped websites like TheKinection.org, Kinecta.org or database files like Bancworks. Bancworks is the higher priority.",type="string"),
AttributeInfo(name="date_day",description="The day the document was uploaded.",type="string"),
AttributeInfo(name="date_uploaded",description="The month year the document is current to.",type="string"),
AttributeInfo(name="date_month",description="The month the document was uploaded.",type="string"),
AttributeInfo(name="date_month_name",description="The month name the document was uploaded.",type="string"),
AttributeInfo(name="date_year_long",description="The full year the document was uploaded.",type="string"),
AttributeInfo(name="date_year_short",description="The short year the document was uploaded.",type="string"),
]
llm = ChatOpenAI(temperature=0)
vectorstore = Pinecone.from_existing_index(index_name="test", embedding=get_embedding_function())
# print("Load existing vector store")\
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
"Information about when the document was created and where it was grabbed from.",
metadata_field_info,
)
```
```python
question = "Give the minimum opening deposits for each accounts for the rate sheets in January"
retriever.get_relevant_documents(question)
```
### Description
When I ask to fetch relevant documents with the following query:
- "Give the minimum opening deposits for each accounts for the rate sheets in January"
There is no problem. However, if I make this query a little more robust...
- "Give the minimum opening deposits for each accounts for the rate sheets in January 2023"
I get a CNAME "and" error. This happens in both Pinecone and ChromaDB. Something is wrong with how the query translator is operating or I am missing some crucial step. We should be able to use multiple metadata flags at once.
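A debugging sketch that may help isolate this: build just the query-constructor step and print the structured query it produces for the failing question (API names as in recent LangChain releases; verify locally).
```python
from langchain.chains.query_constructor.base import load_query_constructor_runnable

constructor = load_query_constructor_runnable(
    llm,
    "Information about when the document was created and where it was grabbed from.",
    metadata_field_info,
)
# Shows how "January 2023" gets translated into filter conditions (e.g. an and(...) clause).
print(constructor.invoke({"query": "Give the minimum opening deposits for each accounts for the rate sheets in January 2023"}))
```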
### System Info
Python 3.11
Langchain 0.1.0
Chroma 0.4.22
Pinecone 2.2.4
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Issues with SelfQueryRetriever and the "AND" operator failing in queries that search for multiple metadata flags | https://api.github.com/repos/langchain-ai/langchain/issues/15919/comments | 8 | 2024-01-11T23:03:50Z | 2024-06-08T16:09:06Z | https://github.com/langchain-ai/langchain/issues/15919 | 2,077,750,276 | 15,919 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
See description.
### Description
I am using SelfQueryRetriever. For a response JSON that contains null query, for example:
```json
{
"query": null,
"filter": ...
}
```
The output parser throws OutputParserException at [line 51](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/query_constructor/base.py#L51).
OutputParserException('Parsing text\n```json\n{\n "query": null,\n "filter": "eq(\\"kategorie\\", \\"Pravo\\")"\n}\n```\n raised following error:\nobject of type \'NoneType\' has no len()')Traceback (most recent call last):
File "/home/MetaExponential/.local/lib/python3.10/site-packages/langchain/chains/query_constructor/base.py", line 51, in parse
if len(parsed["query"]) == 0:
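A defensive check along these lines would avoid the crash (sketch only, not the actual library code):
```python
# Sketch: treat a null "query" the same as an empty one instead of calling len(None).
query = parsed.get("query")
if query is None or len(query) == 0:
    parsed["query"] = " "
```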
### System Info
absl-py==2.0.0
ai21==1.2.8
aioboto3==12.0.0
aiobotocore==2.7.0
aiohttp==3.8.4
aioitertools==0.11.0
aiosignal==1.3.1
altgraph @ file:///AppleInternal/Library/BuildRoots/9dd5efe2-7fad-11ee-b588-aa530c46a9ea/Library/Caches/com.apple.xbs/Sources/python3/altgraph-0.17.2-py2.py3-none-any.whl
annotated-types==0.6.0
annoy==1.17.3
antlr4-python3-runtime==4.9.3
anyio==3.7.1
argilla==1.7.0
astunparse==1.6.3
async-generator==1.10
async-timeout==4.0.2
attrs==23.1.0
Babel==2.12.1
backoff==2.2.1
bcrypt==4.0.1
beautifulsoup4==4.12.2
blinker==1.6.2
boto3==1.28.64
botocore==1.31.64
build==0.10.0
CacheControl==0.12.11
cachetools==5.3.1
camel-converter==3.1.0
certifi==2022.12.7
cffi==1.15.1
cfgv==3.3.1
chardet==5.2.0
charset-normalizer==3.1.0
Chroma==0.2.0
chroma-hnswlib==0.7.3
chromadb==0.4.13
cleo==2.0.1
click==8.1.7
clickhouse-connect==0.6.18
CoffeeScript==2.0.3
cohere==4.31
coloredlogs==15.0.1
commonmark==0.9.1
contourpy==1.0.7
crashtest==0.4.1
cryptography==40.0.2
cssselect==1.2.0
cycler==0.11.0
dataclasses-json==0.5.7
datasets==2.12.0
decorator==5.1.1
Deprecated==1.2.13
deprecation==2.1.0
dill==0.3.7
distlib==0.3.6
distro==1.8.0
dnspython==2.3.0
docutils==0.19
duckdb==0.7.1
dulwich==0.21.5
effdet==0.3.0
elastic-transport==8.10.0
elasticsearch==7.13.4
et-xmlfile==1.1.0
exceptiongroup==1.1.1
facebook-sdk==3.1.0
facebooktoken==0.0.1
faiss-cpu==1.7.4
fastapi==0.103.2
fastavro==1.8.2
feedfinder2==0.0.4
feedparser==6.0.11
filelock==3.12.0
Flask==2.3.2
Flask-Cors==4.0.0
Flask-Limiter==3.4.1
Flask-Mail==0.9.1
flatbuffers==23.5.26
fonttools==4.39.4
frozenlist==1.3.3
fsspec==2023.6.0
future @ file:///AppleInternal/Library/BuildRoots/9dd5efe2-7fad-11ee-b588-aa530c46a9ea/Library/Caches/com.apple.xbs/Sources/python3/future-0.18.2-py3-none-any.whl
fuzzywuzzy==0.18.0
gast==0.5.4
google-ai-generativelanguage==0.4.0
google-api-core==2.12.0
google-auth==2.23.3
google-auth-oauthlib==1.0.0
google-cloud-aiplatform==1.38.1
google-cloud-bigquery==3.12.0
google-cloud-core==2.3.3
google-cloud-resource-manager==1.10.4
google-cloud-storage==2.12.0
google-crc32c==1.5.0
google-generativeai==0.3.2
google-pasta==0.2.0
google-resumable-media==2.6.0
googleapis-common-protos==1.56.4
grpc-gateway-protoc-gen-openapiv2==0.1.0
grpc-google-iam-v1==0.12.6
grpcio==1.59.0
grpcio-status==1.59.0
grpcio-tools==1.59.0
h11==0.14.0
h2==4.1.0
h5py==3.10.0
hnswlib==0.7.0
hpack==4.0.0
html5lib==1.1
httpcore==0.16.3
httptools==0.5.0
httpx==0.23.3
huggingface-hub==0.14.1
humanfriendly==10.0
humbug==0.3.2
hyperframe==6.0.1
identify==2.5.23
idna==3.4
importlib-metadata==6.6.0
importlib-resources==5.12.0
iniconfig==2.0.0
install==1.3.5
installer==0.7.0
iopath==0.1.10
itsdangerous==2.1.2
jaraco.classes==3.2.3
jieba3k==0.35.1
Jinja2==3.1.2
jmespath==1.0.1
joblib==1.2.0
jq==1.6.0
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.17.3
jwt==1.3.1
keras==2.14.0
keyring==23.13.1
kiwisolver==1.4.4
lancedb==0.3.2
langchain==0.1.0
langchain-community==0.0.11
langchain-core==0.1.9
langchain-google-genai==0.0.5
langsmith==0.0.77
lark==1.1.7
layoutparser==0.3.4
Levenshtein==0.23.0
libclang==16.0.6
libdeeplake==0.0.84
limits==3.5.0
llama-cpp-python==0.1.39
lockfile==0.12.2
loguru==0.7.0
lxml==4.9.2
lz4==4.3.2
macholib @ file:///AppleInternal/Library/BuildRoots/9dd5efe2-7fad-11ee-b588-aa530c46a9ea/Library/Caches/com.apple.xbs/Sources/python3/macholib-1.15.2-py2.py3-none-any.whl
Mako==1.2.4
Markdown==3.4.3
markdown2==2.4.8
MarkupSafe==2.1.2
marshmallow==3.19.0
marshmallow-enum==1.5.1
matplotlib==3.7.1
meilisearch==0.28.4
ml-dtypes==0.2.0
monotonic==1.6
more-itertools==9.1.0
mpmath==1.3.0
msg-parser==1.2.0
msgpack==1.0.5
multidict==6.0.4
multiprocess==0.70.15
mypy-extensions==1.0.0
nest-asyncio==1.5.8
networkx==3.1
newspaper3k==0.2.8
nltk==3.8.1
nodeenv==1.7.0
numexpr==2.8.4
numpy==1.26.1
oauthlib==3.2.2
olefile==0.46
omegaconf==2.3.0
onnxruntime==1.14.1
openai==1.3.5
openapi-schema-pydantic==1.2.4
opencv-python==4.7.0.72
openpyxl==3.1.2
opt-einsum==3.3.0
ordered-set==4.1.0
outcome==1.2.0
overrides==7.4.0
packaging==23.2
pandas==1.5.3
pathos==0.3.1
pdf2image==1.16.3
pdfminer.six==20221105
pdfplumber==0.9.0
pexpect==4.8.0
Pillow==9.5.0
pinecone-client==2.2.4
pkginfo==1.9.6
platformdirs==2.6.2
Plim==1.0.0
pluggy==1.3.0
poetry==1.4.2
poetry-core==1.5.2
poetry-plugin-export==1.3.1
poppler-utils==0.1.0
portalocker==2.7.0
posthog==3.0.1
pox==0.3.3
ppft==1.7.6.7
pre-commit==3.2.2
proto-plus==1.22.3
protobuf==4.24.4
ptyprocess==0.7.0
pulsar-client==3.3.0
py==1.11.0
pyarrow==12.0.0
pyasn1==0.5.0
pyasn1-modules==0.3.0
pycocotools==2.0.6
pycparser==2.21
pydantic==2.4.2
pydantic_core==2.10.1
PyExecJS==1.5.1
pyfb==0.6.0
Pygments==2.15.1
PyJWT==2.7.0
pylance==0.8.7
PyMuPDF==1.22.3
pypandoc==1.11
pyparsing==3.0.9
pypdf==3.8.1
PyPDF2==3.0.1
PyPika==0.48.9
pyproject_hooks==1.0.0
pyrsistent==0.19.3
pyScss==1.4.0
PySocks==1.7.1
pytesseract==0.3.10
pytest==7.4.4
python-dateutil==2.8.2
python-docx==0.8.11
python-dotenv==1.0.0
python-Levenshtein==0.23.0
python-magic==0.4.27
python-multipart==0.0.6
python-pptx==0.6.21
pytz==2023.3
PyYAML==6.0
qdrant-client==1.6.4
rank-bm25==0.2.2
rapidfuzz==3.4.0
ratelimiter==1.2.0.post0
readability-lxml==0.8.1
redis==5.0.1
regex==2023.3.23
requests==2.31.0
requests-file==1.5.1
requests-oauthlib==1.3.1
requests-toolbelt==0.10.1
responses==0.18.0
retry==0.9.2
rfc3986==1.5.0
rich==13.0.1
rsa==4.9
s3transfer==0.7.0
safetensors==0.3.1
scikit-learn==1.2.2
scipy==1.10.1
selenium==4.9.1
semver==3.0.2
sentence-transformers==2.2.2
sentencepiece==0.1.98
sgmllib3k==1.0.0
shapely==2.0.2
shellingham==1.5.0.post1
simplejson==3.19.1
six @ file:///AppleInternal/Library/BuildRoots/9dd5efe2-7fad-11ee-b588-aa530c46a9ea/Library/Caches/com.apple.xbs/Sources/python3/six-1.15.0-py2.py3-none-any.whl
snakeviz==2.2.0
sniffio==1.3.0
sortedcontainers==2.4.0
soupsieve==2.5
SQLAlchemy==2.0.16
sqlean.py==0.21.8.4
starlette==0.27.0
stylus==0.1.2
sympy==1.11.1
tenacity==8.2.2
tensorboard==2.14.1
tensorboard-data-server==0.7.2
tensorflow==2.14.0
tensorflow-estimator==2.14.0
tensorflow-io-gcs-filesystem==0.34.0
tensorflow-macos==2.14.0
termcolor==2.3.0
threadpoolctl==3.1.0
tiktoken==0.4.0
timm==0.9.1
tinysegmenter==0.3
tldextract==5.1.1
tokenizers==0.13.3
tomli==2.0.1
tomlkit==0.11.8
torch==2.1.0
torchvision==0.15.1
tornado==6.2
tqdm==4.65.0
transformers==4.28.1
trio==0.22.0
trio-websocket==0.10.3
trove-classifiers==2023.5.2
typer==0.9.0
typing-inspect==0.8.0
typing_extensions==4.8.0
tzdata==2023.3
unstructured==0.6.6
unstructured-inference==0.4.4
urllib3==1.26.15
uvicorn==0.22.0
uvloop==0.17.0
virtualenv==20.21.1
Wand==0.6.11
watchfiles==0.19.0
webencodings==0.5.1
websockets==11.0.2
Werkzeug==2.3.6
wrapt==1.14.1
wsproto==1.2.0
xattr==0.10.1
XlsxWriter==3.1.0
xxhash==3.2.0
yarl==1.9.2
zipp==3.15.0
zstandard==0.21.0
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | query_constructor throws OutputParserException is query is null | https://api.github.com/repos/langchain-ai/langchain/issues/15914/comments | 1 | 2024-01-11T21:54:11Z | 2024-04-18T16:21:30Z | https://github.com/langchain-ai/langchain/issues/15914 | 2,077,661,061 | 15,914 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Make it easy to use `tokenizers` for HF tokenizers instead of `transformers`
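A sketch of what this could enable: driving token-based splitting with the standalone `tokenizers` package (the wiring below is illustrative, not an existing LangChain helper).
```python
from tokenizers import Tokenizer
from langchain.text_splitter import RecursiveCharacterTextSplitter

tok = Tokenizer.from_pretrained("bert-base-uncased")

def count_tokens(text: str) -> int:
    return len(tok.encode(text).ids)

# Any splitter that accepts a length_function can use the fast tokenizer directly.
splitter = RecursiveCharacterTextSplitter(
    chunk_size=512, chunk_overlap=64, length_function=count_tokens
)
```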
### Motivation
`tokenizers` has far fewer dependencies | Add ability to use `tokenizers` instead of `transformers` for HF tokenizers | https://api.github.com/repos/langchain-ai/langchain/issues/15902/comments | 1 | 2024-01-11T18:42:00Z | 2024-04-18T16:30:29Z | https://github.com/langchain-ai/langchain/issues/15902 | 2,077,368,838 | 15,902 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Tool for OpenAI image generation API using openai's v1 sdk https://platform.openai.com/docs/guides/images
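A sketch of the v1 SDK call such a tool would wrap (model and prompt are illustrative):
```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
image = client.images.generate(model="dall-e-3", prompt="a watercolor fox", n=1)
print(image.data[0].url)
```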
### Motivation
Useful for image-gen applications with language interfaces | Integration for OpenAI image gen with v1 sdk | https://api.github.com/repos/langchain-ai/langchain/issues/15901/comments | 3 | 2024-01-11T18:37:41Z | 2024-06-01T00:19:27Z | https://github.com/langchain-ai/langchain/issues/15901 | 2,077,362,764 | 15,901 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Tool for OpenAI speech-to-text (using openai v1) https://platform.openai.com/docs/guides/speech-to-text
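A sketch of the v1 SDK call such a tool would wrap (file name illustrative):
```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
with open("meeting.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)
print(transcript.text)
```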
### Motivation
Useful for building voice interfaces | OpenAI speech-to-text API integration | https://api.github.com/repos/langchain-ai/langchain/issues/15900/comments | 2 | 2024-01-11T18:35:31Z | 2024-06-15T16:06:57Z | https://github.com/langchain-ai/langchain/issues/15900 | 2,077,359,821 | 15,900 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
from langchain_openai import ChatOpenAI
llm = ChatOpenAI()
llm.invoke("how can langsmith help with testing?")
### Description
site-packages\langchain_openai\chat_models\base.py", line 454, in _create_chat_result
response = response.dict()
AttributeError: 'str' object has no attribute 'dict'
### System Info
Python 3.10.12
langchain 0.1.0
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | The sample code of version 0.1.0 of the official website cannot be executed. | https://api.github.com/repos/langchain-ai/langchain/issues/15888/comments | 13 | 2024-01-11T15:14:23Z | 2024-07-14T13:05:49Z | https://github.com/langchain-ai/langchain/issues/15888 | 2,076,928,679 | 15,888 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code:
```python
from arango import ArangoClient
from langchain_community.graphs import ArangoGraph
from langchain.chains import ArangoGraphQAChain
# Initialize the ArangoDB client.
client = ArangoClient(hosts='http://localhost:8529')
# Connect to Database
db = client.db('mydb', username='myuser', password='mypass')
# Instantiate the ArangoDB-LangChain Graph
graph = ArangoGraph(db)
```
Produces the following exception:
Traceback (most recent call last):
File "/Users/vgreen/working_dir/xpm/graph_qa_01.py", line 19, in <module>
graph = ArangoGraph(db)
File "/Users/vgreen/Documents/2023/repos/Trancendence/pipeline/devenv/lib/python3.10/site-packages/langchain_community/graphs/arangodb_graph.py", line 23, in __init__
self.set_db(db)
File "/Users/vgreen/Documents/2023/repos/Trancendence/pipeline/devenv/lib/python3.10/site-packages/langchain_community/graphs/arangodb_graph.py", line 42, in set_db
self.set_schema()
File "/Users/vgreen/Documents/2023/repos/Trancendence/pipeline/devenv/lib/python3.10/site-packages/langchain_community/graphs/arangodb_graph.py", line 49, in set_schema
self.__schema = self.generate_schema() if schema is None else schema
File "/Users/vgreen/Documents/2023/repos/Trancendence/pipeline/devenv/lib/python3.10/site-packages/langchain_community/graphs/arangodb_graph.py", line 96, in generate_schema
for doc in self.__db.aql.execute(aql):
File "/Users/vgreen/Documents/2023/repos/Trancendence/pipeline/devenv/lib/python3.10/site-packages/arango/aql.py", line 453, in execute
return self._execute(request, response_handler)
File "/Users/vgreen/Documents/2023/repos/Trancendence/pipeline/devenv/lib/python3.10/site-packages/arango/api.py", line 74, in _execute
return self._executor.execute(request, response_handler)
File "/Users/vgreen/Documents/2023/repos/Trancendence/pipeline/devenv/lib/python3.10/site-packages/arango/executor.py", line 66, in execute
return response_handler(resp)
File "/Users/vgreen/Documents/2023/repos/Trancendence/pipeline/devenv/lib/python3.10/site-packages/arango/aql.py", line 450, in response_handler
raise AQLQueryExecuteError(resp, request)
arango.exceptions.AQLQueryExecuteError: [HTTP 400][ERR 1501] AQL: syntax error, unexpected FOR declaration near 'for
LIMIT 1
...' at position 2:37 (while parsing)
### Description
I'm trying to use LangChain to connect to an ArangoDB graph database for question answering, and when attempting to instantiate an ArangoGraph object it throws an AQLQueryExecuteError.
### System Info
Langchain version: v0.1.0
Platform: Mac OS Sonoma
Python version: 3.10 (venv)
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Instantiating ArangoGraph(db) produces [HTTP 400][ERR 1501] AQL: syntax error | https://api.github.com/repos/langchain-ai/langchain/issues/15886/comments | 1 | 2024-01-11T14:50:40Z | 2024-04-18T16:21:27Z | https://github.com/langchain-ai/langchain/issues/15886 | 2,076,869,945 | 15,886 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
model_name = "Intel/dynamic_tinybert"
tokenizer = AutoTokenizer.from_pretrained(model_name, padding=True, truncation=True, max_length=512)
question_answerer = pipeline(
"question-answering",
model=model_name,
tokenizer=tokenizer,
return_tensors='pt'
)
llm = HuggingFacePipeline(
pipeline=question_answerer,
model_kwargs={"temperature": 0.7, "max_length": 50},
)
prompt_template = """
As literature critic answer me
question: {question}
context: {context}
"""
prompt = PromptTemplate(template=prompt_template, input_variables=["context", "question"])
chain = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=retriever,
chain_type_kwargs = {"prompt": prompt})
question = "Who is Hamlet ?"
answer = chain.invoke({"query": question}) # issue here <--
print(answer)
```
### Description
I tried to implement a simple RetrievalQA chain with LangChain's FAISS vector search, but I ran into this assertion error:
argument needs to be of type (SquadExample, dict)
The issue occurs here:
answer = chain.invoke({"query": question})
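A possible cause, sketched below: `HuggingFacePipeline` expects a text-generation style pipeline, while a `question-answering` pipeline takes `SquadExample`/dict inputs, which matches the assertion above. Swapping to a text2text-generation model is one way around this (model name illustrative):
```python
from transformers import pipeline
from langchain_community.llms import HuggingFacePipeline

gen_pipeline = pipeline("text2text-generation", model="google/flan-t5-base")
llm = HuggingFacePipeline(pipeline=gen_pipeline)
```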
Thank you in advance.
### System Info
Windows 10, python 3.11, langchain 0.1.0
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | argument needs to be of type (SquadExample, dict) | https://api.github.com/repos/langchain-ai/langchain/issues/15884/comments | 18 | 2024-01-11T14:29:24Z | 2024-06-08T16:09:01Z | https://github.com/langchain-ai/langchain/issues/15884 | 2,076,818,792 | 15,884 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
- With `LLM`:
```py
import os
from typing import Any, List
import requests
from langchain.callbacks.base import Callbacks
from langchain.chains import LLMChain
from langchain.chains.base import Chain
from langchain.prompts import PromptTemplate
from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM
required_envs = ["API_BASE", "API_KEY", "DEPLOYMENT_NAME"]
for env in required_envs:
if env not in os.environ:
raise ValueError(f"Missing required environment variable: {env}")
class CustomLLM(LLM):
@property
def _llm_type(self) -> str:
return "CustomLLM"
def _call(
self,
prompt: str,
stop: List[str] | None = None,
run_manager: CallbackManagerForLLMRun | None = None,
**kwargs: Any
) -> str:
"""Call the API with the given prompt and return the result."""
self._api_endpoint: str = str(os.getenv("API_BASE"))
self._api_key: str = str(os.getenv("API_KEY"))
self._deployment_name: str = str(os.getenv("DEPLOYMENT_NAME"))
result = requests.post(
f"{self._api_endpoint}/llm/deployments/{self._deployment_name}/chat/completions?api-version=2023-05-15",
headers={
"Content-Type": "application/json",
"api-key": self._api_key,
},
json={
"messages": prompt,
"temperature": 0,
"top_p": 0,
"model": "gpt-4-32k",
},
)
if result.status_code != 200:
raise RuntimeError(
f"Failed to call API: {result.status_code} {result.content}"
)
else:
return result.json()["choices"][0]["message"]
def get_chain(prompt: PromptTemplate, callbacks: Callbacks = []) -> Chain:
"""
This function initializes and returns an LLMChain with a given prompt and callbacks.
Args:
prompt (str): The prompt to initialize the LLMChain with.
callbacks (Callbacks): Langchain callbacks fo
Returns:
Chain: An instance of LLMChain.
"""
llm = CustomLLM()
chain = LLMChain(llm=llm, prompt=prompt, callbacks=callbacks)
return chain
if __name__ == "__main__":
prompt_template = """
You are an insurance agent. You are provided with instructions, and you must provide an answer.
Question: {question}
"""
prompt = PromptTemplate(
template=prompt_template,
input_variables=["question"],
)
chain = get_chain(prompt)
result = chain.invoke({"question": "What is the best insurance policy for me?"})
print(result)
```
- With `Runnable`:
```py
import os
from typing import Any, List
import requests
from langchain.callbacks.base import Callbacks
from langchain.chains import LLMChain
from langchain.chains.base import Chain
from langchain.prompts import PromptTemplate
from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain.schema.runnable import Runnable
from langchain.schema.language_model import LanguageModelInput
required_envs = ["API_BASE", "API_KEY", "DEPLOYMENT_NAME"]
for env in required_envs:
if env not in os.environ:
raise ValueError(f"Missing required environment variable: {env}")
class CustomLLM(LLM):
@property
def _llm_type(self) -> str:
return "CustomLLM"
def invoke(
self, input: LanguageModelInput, config: RunnableConfig | None = None
) -> str:
return super().invoke(input)
def _call(
self,
prompt: str,
run_manager: Optional[CallbackManagerForLLMRun] = None,
) -> str:
"""Call the API with the given prompt and return the result."""
self._api_endpoint: str = str(os.getenv("OPENAI_API_BASE"))
self._api_key: str = str(os.getenv("OPENAI_API_BASE"))
self._deployment_name: str = str(os.getenv("DEPLOYMENT_NAME"))
result = requests.post(
f"{self._api_endpoint}/llm/deployments/{self._deployment_name}/chat/completions?api-version=2023-05-15",
headers={
"Content-Type": "application/json",
"api-key": self._api_key,
},
json={
"messages": prompt,
"temperature": 0,
"top_p": 0,
"model": "gpt-4-32k",
},
)
if result.status_code != 200:
raise RuntimeError(
f"Failed to call API: {result.status_code} {result.content}"
)
else:
return result.json()["choices"][0]["message"]
def get_chain(prompt: PromptTemplate, callbacks: Callbacks = []) -> Chain:
"""
This function initializes and returns an LLMChain with a given prompt and callbacks.
Args:
prompt (str): The prompt to initialize the LLMChain with.
callbacks (Callbacks): Langchain callbacks fo
Returns:
Chain: An instance of LLMChain.
"""
llm = CustomLLM()
chain = LLMChain(llm=llm, prompt=prompt, callbacks=callbacks)
return chain
if __name__ == "__main__":
prompt_template = """
You are an insurance agent. You are provided with instructions, and you must provide an answer.
Question: {question}
"""
prompt = PromptTemplate(
template=prompt_template,
input_variables=["question"],
)
chain = get_chain(prompt)
result = chain.invoke({"question": "What is the best insurance policy for me?"})
print(result)
```
### Description
Hi!
I'm not exactly sure whether this is a bug or expected behavior.
I'm in a situation where I cannot use the LLM directly, and instead need to use APIs that interact with the LLM itself.
I've hence decided to create a CustomLLM using the documentation [here](https://python.langchain.com/docs/modules/model_io/llms/custom_llm) to keep leveraging `Chain` features.
Here are the problems I've been facing:
- When using the `LLM` class as the Base class of my `CustomLLM` class, I run into the following error:
```
Traceback (most recent call last):
File "custom_llm.py", line 83, in <module>
chain = get_chain(prompt)
^^^^^^^^^^^^^^^^^
File "custom_llm.py", line 70, in get_chain
chain = LLMChain(llm=llm, prompt=prompt, callbacks=callbacks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain/load/serializable.py", line 97, in __init__
super().__init__(**kwargs)
File ".venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for LLMChain
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
```
- Following this error, I've decided to modify the class, so it extends from `Runnable` (cf second code snippet in the example) but when running the new code I get this:
```
Traceback (most recent call last):
File "utils/custom_llm.py", line 90, in <module>
result = chain.invoke({"question": "What is the best insurance policy for me?"})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain/chains/base.py", line 87, in invoke
return self(
^^^^^
File ".venv/lib/python3.11/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File ".venv/lib/python3.11/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File ".venv/lib/python3.11/site-packages/langchain/chains/llm.py", line 108, in _call
response = self.generate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain/chains/llm.py", line 127, in generate
results = self.llm.bind(stop=stop, **self.llm_kwargs).batch(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain/schema/runnable/base.py", line 2658, in batch
return self.bound.batch(
^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain/schema/runnable/base.py", line 321, in batch
return cast(List[Output], [invoke(inputs[0], configs[0])])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain/schema/runnable/base.py", line 317, in invoke
return self.invoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: CustomLLM.invoke() got an unexpected keyword argument 'stop'
```
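Two hedged observations: the first traceback goes through `langchain==0.0.329` while `langchain-core==0.1.9` is installed, and mixing versions like that can by itself produce the "instance of Runnable expected" error; the second traceback shows the overridden `invoke()` dropping extra keyword arguments such as `stop`. A sketch that avoids both problems by not overriding `invoke()` at all:
```python
from typing import Any, List, Optional

from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM

class CustomLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "CustomLLM"

    # No invoke() override: the base LLM.invoke already accepts stop/config and
    # routes them into _call, so only _call needs to be implemented.
    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        return "dummy response"  # replace with the real API call
```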
### System Info
langchain==0.0.329
langchain-core==0.1.9
Platform: MacOS 13.6.2
Python: 3.11
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | CustomLLM cannot be used to build `Chains` when using `LLM` or `Runnable` | https://api.github.com/repos/langchain-ai/langchain/issues/15880/comments | 5 | 2024-01-11T13:49:44Z | 2024-06-05T07:44:12Z | https://github.com/langchain-ai/langchain/issues/15880 | 2,076,708,819 | 15,880 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
import os
import openai
import sys
import panel as pn  # GUI
pn.extension()

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())  # read local .env file

import datetime
current_date = datetime.datetime.now().date()
if current_date < datetime.date(2023, 9, 2):
    llm_name = "gpt-3.5-turbo-0301"
else:
    llm_name = "gpt-3.5-turbo"
print(llm_name)

from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.document_loaders import PyPDFLoader

docs = PyPDFLoader("sameer_mahajan.pdf").load()

from langchain.text_splitter import TokenTextSplitter
text_splitter = TokenTextSplitter(chunk_size=1, chunk_overlap=0)
splits = text_splitter.split_documents(docs)

embedding = OpenAIEmbeddings(
    deployment="embeddings",
    openai_api_key=os.environ['OPENAI_API_KEY'],
    openai_api_base=os.environ['OPENAI_ENDPOINT'],
    openai_api_version=os.environ['OPENAI_DEPLOYMENT_VERSION'],
    openai_api_type="azure",
    chunk_size=1)

persist_directory = "docs/chroma"  # NOTE: not defined in the original snippet; assumed persistence path

vectordb = Chroma.from_documents(
    documents=splits,
    embedding=embedding,
    persist_directory=persist_directory
)
```
### Description
I expect `vectordb` to be created and persisted for use by my chatbot later on; instead, I get the following exception:
`NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}`
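To narrow this down, I also plan to smoke-test the embedding deployment on its own (my assumption is that the 404 comes from the Azure endpoint / deployment-name / api-version combination rather than from Chroma):
```python
# Quick smoke test of the Azure embedding deployment, independent of Chroma.
# OPENAI_ENDPOINT should look like https://<resource>.openai.azure.com/ and
# "embeddings" must match the deployment name of text-embedding-ada-002 in the Azure portal.
test_vector = embedding.embed_query("hello")
print(len(test_vector))  # expected 1536 for text-embedding-ada-002
```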
### System Info
python 3.10.2
embedding model text-embedding-ada-002
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Resource not found error trying to use chromadb with Azure Open AI | https://api.github.com/repos/langchain-ai/langchain/issues/15878/comments | 7 | 2024-01-11T13:05:25Z | 2024-06-01T00:07:38Z | https://github.com/langchain-ai/langchain/issues/15878 | 2,076,601,164 | 15,878 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain_google_genai import GoogleGenerativeAIEmbeddings

embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
vector = embeddings.embed_query("hello, world!")
```
### Description
Calling `embed_query` fails with: `langchain_google_genai._common.GoogleGenerativeAIError: Error embedding content: Deadline of 60.0s exceeded while calling target function`
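If the failure is only intermittent, a plain client-side retry sometimes works around it. This is just a workaround sketch on my side (using `tenacity`), not a documented fix:
```python
from tenacity import retry, stop_after_attempt, wait_exponential
from langchain_google_genai import GoogleGenerativeAIEmbeddings

embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")

@retry(stop=stop_after_attempt(3), wait=wait_exponential(min=2, max=10))
def embed_with_retry(text: str) -> list:
    # Re-issues the request if the 60 s deadline is exceeded.
    return embeddings.embed_query(text)

vector = embed_with_retry("hello, world!")
```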
### System Info
langchain 0.1.0
langchain-community 0.0.10
langchain-core 0.1.8
langchain-google-genai 0.0.5
langchain-openai 0.0.2
langchainhub 0.1.14
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | langchain_google_genai._common.GoogleGenerativeAIError: Error embedding content: Deadline of 60.0s exceeded while calling target function | https://api.github.com/repos/langchain-ai/langchain/issues/15876/comments | 1 | 2024-01-11T12:44:17Z | 2024-04-18T16:07:30Z | https://github.com/langchain-ai/langchain/issues/15876 | 2,076,546,582 | 15,876 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
Here is my custom parser code:
```
def parse(output):
    # If no function was invoked, return to user
    if "function_call" not in output.additional_kwargs:
        return AgentFinish(return_values={"answer": output.content, "sources": ""}, log=output.content)

    # Parse out the function call
    function_call = output.additional_kwargs["function_call"]
    name = function_call["name"]
    inputs = json.loads(function_call["arguments"])

    # If the Response function was invoked, return to the user with the function inputs
    if name == "Response":
        return AgentFinish(return_values=inputs, log=str(function_call))
    # Otherwise, return an agent action
    else:
        return AgentActionMessageLog(
            tool=name, tool_input=inputs, log="", message_log=[output]
        )
```
Here is my agent code:
```
agent = (
    {
        "input": itemgetter("input"),
        # Format agent scratchpad from intermediate steps
        "agent_scratchpad": lambda x: format_to_openai_functions(
            x["intermediate_steps"]),
        "history": lambda x: x["history"],
    }
    | prompt
    | condense_prompt
    | llm_with_tools
    | parse
)

agent_executor = AgentExecutor(
    tools=[retriever_tool],
    agent=agent,
    memory=st.session_state.agentmemory,
    verbose=True,
    handle_parsing_errors=True,
)
### Description
I get the following error when I call the `agent_executor.invoke` method: `An error occurred: Invalid control character at: line 2 column 129 (char 130)`. This happens only when my retriever returns special characters such as " • " (I assume it is the bullet-point character that triggers it).
I have been using a custom parser. How can I apply the output-parser solution from the link below to my custom parser, or add `strict=False` when parsing the JSON response? (I sketch the `strict=False` variant after the link.)
Or is there any other solution?
https://github.com/langchain-ai/langchain/issues/9460
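For reference, the `strict=False` idea can be applied directly inside my custom parser, because the standard-library `json.loads` accepts it and then tolerates control characters inside strings:
```python
# In parse() above, replace the json.loads call with:
inputs = json.loads(function_call["arguments"], strict=False)
```
Is that the recommended way, or is there a cleaner hook for this?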
### System Info
langchain==0.0.315
pydantic==2.5.2
streamlit==1.29.0
openai==0.28
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | agent_executor.invoke method: An error occurred: Invalid control character at: line y column xxx (char xxx) | https://api.github.com/repos/langchain-ai/langchain/issues/15872/comments | 2 | 2024-01-11T08:29:48Z | 2024-01-11T09:11:37Z | https://github.com/langchain-ai/langchain/issues/15872 | 2,076,039,740 | 15,872 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnableMap
from langchain.schema.messages import HumanMessage, SystemMessage
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.utils.openai_functions import convert_pydantic_to_openai_function
from typing import List
from pydantic import BaseModel, Field


class PopulationSearch(BaseModel):
    """Get the population size based on the given city"""

    city: str = Field(description="city")


population_function = convert_pydantic_to_openai_function(PopulationSearch)

model = ChatOpenAI(
    temperature=0,
    model_name="gpt4-turbo"
)

response = model.invoke("What is the population of Wuhan?", functions=[population_function])
print(response.additional_kwargs)
```
### Description
The result is `{}`. Why is it empty, i.e. why does the model not return a function call?
The screenshots are as follows:

### System Info
Python version is 3.11
LangChain version is 0.0.343
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | No function called | https://api.github.com/repos/langchain-ai/langchain/issues/15871/comments | 2 | 2024-01-11T08:24:30Z | 2024-01-11T18:50:01Z | https://github.com/langchain-ai/langchain/issues/15871 | 2,076,031,634 | 15,871 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="<my confluence link>", username="<my user name>",
    api_key="<my token>"
)
documents = loader.load(space_key="<my space>", include_attachments=True, limit=1, max_pages=1)
```
### Description
I am trying to load all Confluence pages using `ConfluenceLoader`. I expect to get all the pages, but instead I get `AttributeError: 'str' object has no attribute 'get'`.
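To isolate the problem, I am also going to re-run the same call without attachments (my assumption being that the attachment-handling code path is the culprit):
```python
# Same call, but skip attachments; if this succeeds, the failure is in attachment parsing.
documents = loader.load(space_key="<my space>", include_attachments=False, limit=1, max_pages=1)
```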
### System Info
python version 3.10.2
langchain version 0.0.345
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | ConfluenceLoader.load giving AttributeError: 'str' object has no attribute 'get' while reading all documents from space | https://api.github.com/repos/langchain-ai/langchain/issues/15869/comments | 11 | 2024-01-11T06:48:42Z | 2024-07-03T16:05:07Z | https://github.com/langchain-ai/langchain/issues/15869 | 2,075,891,242 | 15,869 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I noticed that the `MongoDBChatMessageHistory` class creates an index on every connection, which makes each request take longer than necessary. Index creation is a one-time process, so there is no need to create the index every time. By default, index creation is enabled. To address this, add an additional parameter, `index_creation`; if `index_creation` is set to `False`, this step should be skipped.
Current Code:
```
def __init__(
    self,
    connection_string: str,
    session_id: str,
    database_name: str = DEFAULT_DBNAME,
    collection_name: str = DEFAULT_COLLECTION_NAME,
):
    from pymongo import MongoClient, errors

    self.connection_string = connection_string
    self.session_id = session_id
    self.database_name = database_name
    self.collection_name = collection_name

    try:
        self.client: MongoClient = MongoClient(connection_string)
    except errors.ConnectionFailure as error:
        logger.error(error)

    self.db = self.client[database_name]
    self.collection = self.db[collection_name]
    self.collection.create_index("SessionId")
```
Proposed modification:
```
def __init__(
    self,
    connection_string: str,
    session_id: str,
    database_name: str = DEFAULT_DBNAME,
    collection_name: str = DEFAULT_COLLECTION_NAME,
    index_creation: bool = True,  # new argument
):
    from pymongo import MongoClient, errors

    self.connection_string = connection_string
    self.session_id = session_id
    self.database_name = database_name
    self.collection_name = collection_name

    try:
        self.client: MongoClient = MongoClient(connection_string)
    except errors.ConnectionFailure as error:
        logger.error(error)

    self.db = self.client[database_name]
    self.collection = self.db[collection_name]
    if index_creation:  # conditional index creation
        self.collection.create_index("SessionId")
```
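With that flag in place, callers that manage their own indexes could simply opt out. Usage sketch, assuming the new argument is added:
```python
history = MongoDBChatMessageHistory(
    connection_string="mongodb://localhost:27017/",
    session_id="session-1",
    index_creation=False,  # skip the per-connection create_index call
)
```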
### Motivation
A developer can make this modification themselves, but if you add this feature, it ships with the package for everyone.
### Your contribution
Yes, I can implement this. The proposed modification is the same as the one shown above; the only behavioural change is the guarded index creation:
```
if index_creation:  # conditional index creation
    self.collection.create_index("SessionId")
``` | Index Creation | https://api.github.com/repos/langchain-ai/langchain/issues/15868/comments | 2 | 2024-01-11T06:35:00Z | 2024-06-01T00:20:58Z | https://github.com/langchain-ai/langchain/issues/15868 | 2,075,869,822 | 15,868 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
```python
import os
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationTokenBufferMemory
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores.qdrant import Qdrant
from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, MessagesPlaceholder, \
    HumanMessagePromptTemplate
from qdrant_client import QdrantClient

os.environ['OPENAI_API_KEY'] = "mykey"

client = QdrantClient(host="192.168.0.313", port=6333)
COLLECTION_NAME = "embed"
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")

vectorstore = Qdrant.from_documents(
    client=client,
    collection_name=COLLECTION_NAME,
    embeddings=embeddings,
    search_params={"metric_type": "cosine"},
)

prompt = ChatPromptTemplate(
    messages=[
        SystemMessagePromptTemplate.from_template(
            "you are robot."
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{question}"),
    ]
)

llm_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_name, temperature=0)
memory_token_limit = 100
retriever = vectorstore.as_retriever()

memory = ConversationTokenBufferMemory(
    llm=llm,
    prompt=prompt,
    max_token_limit=int(memory_token_limit),
    memory_key="chat_history",
    return_messages=True,
)

qa = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    memory=memory,
    verbose=True,
)

chat_history = []
while True:
    memory.load_memory_variables({})
    question = input('ask:')
    result = qa.run({'question': question, 'chat_history': chat_history})
    print(result)
    chat_history.append([f'User: {question}', f'Ai: {result}'])
    print(chat_history)
    st_history = ' '.join(map(str, chat_history))
    res = embeddings.embed_query(st_history)
    print(f'ok: {res[:4]}...')
    if question.lower() == 'bye':
        break
```
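For context, this is roughly what I imagine the documented approach would look like (a sketch based on my own assumptions, reusing the Qdrant collection above): write each finished turn back into the vector store and later retrieve it by cosine similarity.
```python
# After each turn, store the exchange itself in the vector store ...
vectorstore.add_texts(
    [f"User: {question}\nAi: {result}"],
    metadatas=[{"type": "chat_history"}],
)

# ... and later retrieve the most similar past exchanges (cosine distance).
similar_turns = vectorstore.similarity_search_with_score("what did we talk about earlier?", k=3)
```
Is that the recommended pattern, or should the chat history go into a separate collection?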
### Idea or request for content:
embeddings
cosine | How can I store chat history in a database and retrieve results using cosine similarity when querying the database?" | https://api.github.com/repos/langchain-ai/langchain/issues/15866/comments | 1 | 2024-01-11T05:20:34Z | 2024-04-18T16:30:26Z | https://github.com/langchain-ai/langchain/issues/15866 | 2,075,774,096 | 15,866 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
import os
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain import hub
from langchain.agents import create_openai_functions_agent
from langchain.agents import AgentExecutor
os.environ["OPENAI_API_KEY"] = "*************************************"
os.environ["TAVILY_API_KEY"] = "*************************************"
search = TavilySearchResults()
tools = [search]
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-functions-agent")
llm = ChatOpenAI(
model="gpt-3.5-turbo",
temperature=0,
verbose=True,
)
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "what is the weather in SF?"})
```
### Description
Running the example code above will cause an error for both `gpt-3.5-turbo` and `gpt-4-0613`. The error message is:
```
openai.NotFoundError: Error code: 404 - {'error': {'message': 'Unrecognized request argument supplied: functions (request id: XXXXX)', 'type': 'invalid_request_error', 'param': '', 'code': None}}
```
I searched for this error and found a solution, which involves adding the parameter `api-version="2023-07-01-preview"`. However, I couldn't find a place to input this parameter. After reading through some source code, I finally figured out how:
```python
function_obj = agent.middle[1]
if function_obj.kwargs:
    function_obj.kwargs["extra_query"] = {"api-version": "2023-07-01-preview"}
else:
    function_obj.kwargs = {"extra_query": {"api-version": "2023-07-01-preview"}}
```
This led to another error:
```
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid value for 'content': expected a string, got null. (request id: XXXXX", 'type': 'invalid_request_error', 'param': 'messages.[2].content', 'code': None}}
```
After some debugging, I found the reason. Inside `langchain_openai`, there is a function `_convert_message_to_dict`:
```python
elif isinstance(message, AIMessage):
    message_dict = {"role": "assistant", "content": message.content}
    if "function_call" in message.additional_kwargs:
        message_dict["function_call"] = message.additional_kwargs["function_call"]
        # If function call only, content is None not empty string
        if message_dict["content"] == "":
            message_dict["content"] = None
```
This code turns the search result into an `AIMessage` and, for some reason, does not allow the content to be an empty string, so it makes it `None`. However, the OpenAI API does not accept this. To make it work, I had to rewrite the code:
```python
def _convert_message_to_dict(message: BaseMessage) -> dict:
    ...
    elif isinstance(message, AIMessage):
        message_dict = {"role": "assistant", "content": message.content}
        if "function_call" in message.additional_kwargs:
            message_dict["function_call"] = message.additional_kwargs["function_call"]
            # If function call only, content is None not empty string
            # ATTENTION: CHANGE HERE
            # if message_dict["content"] == "":
            #     message_dict["content"] = None
        if "tool_calls" in message.additional_kwargs:
            message_dict["tool_calls"] = message.additional_kwargs["tool_calls"]
            # If tool calls only, content is None not empty string
            if message_dict["content"] == "":
                message_dict["content"] = None
    ...
    return message_dict


def new_create_message_dicts(
    self, messages: List[BaseMessage], stop: Optional[List[str]]
) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:
    params = self._default_params
    if stop is not None:
        if "stop" in params:
            raise ValueError("`stop` found in both the input and default params.")
        params["stop"] = stop
    message_dicts = [_convert_message_to_dict(m) for m in messages]
    return message_dicts, params


llm.Config.extra = Extra.allow
llm._create_message_dicts = partial(new_create_message_dicts, llm)
```
I mean, really? I'm not sure what I did wrong, but it's certainly not easy to make it work. If it's not a bug, I hope to get a simpler and more elegant solution.
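For what it is worth, the least invasive alternative I can think of (an assumption on my part, not something I have verified) is to attach the extra query parameter when constructing the client instead of patching the chain, since `ChatOpenAI` exposes `default_query`:
```python
llm = ChatOpenAI(
    model="gpt-3.5-turbo",
    temperature=0,
    default_query={"api-version": "2023-07-01-preview"},  # sent with every request
)
```
If that works, it would avoid reaching into `agent.middle[1]` entirely.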
### System Info
langchain==0.1.0
langchain-community==0.0.10
langchain-core==0.1.8
langchain-openai==0.0.2
langchainhub==0.1.14
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | the agent example in the quickstart documentation failed to run | https://api.github.com/repos/langchain-ai/langchain/issues/15863/comments | 4 | 2024-01-11T03:54:33Z | 2024-05-07T16:07:53Z | https://github.com/langchain-ai/langchain/issues/15863 | 2,075,684,626 | 15,863 |
[
"hwchase17",
"langchain"
]
I am creating a tool with both `_run` and `_arun` methods in my FastAPI code so that I can use it in an `AgentExecutor`. When I test my agent, I run into this `AttributeError`, which I am unable to resolve even with a debugger. Am I missing anything here?
from fastapi import Request
from langchain.tools import tool, BaseTool
from pydantic import BaseModel, Field
from typing import Type, Optional
from langchain.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)
from settings import app_settings as settings
from ai.tools import run_chain
import httpx


class PlaceHolderSchema(BaseModel):
    dummy: Optional[str]


async def run_chain_tool(request: Request):
    class ChainTool(BaseTool):
        name = "run_chain"
        description = "This tool takes a user question as input and returns the answer using the Cube JSON extraction and Cube API response."
        args_schema: Type[PlaceHolderSchema] = PlaceHolderSchema

        def _run(
            self,
            question: str,
            run_manager: Optional[CallbackManagerForToolRun] = None,
            dummy: Optional[str] = None,
        ) -> str:
            """
            This function "synchronously" runs the tool.
            Args:
                question (str): User question about the Cube data model.
            Returns:
                answer (str): Answer to the user question by running a chain of steps such as generation of Cube JSON, calling Cube API, and generating final answer.
            """
            raise NotImplementedError("run_chain tool does not support synchronous execution")

        async def _arun(
            self,
            question: str,
            run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
            dummy: Optional[str] = None,
        ) -> str:
            """
            This function "asynchronously" runs the tool.
            Args:
                question (str): User question about the Cube data model.
            Returns:
                answer (str): Answer to the user question by running a chain of steps such as generation of Cube JSON, calling Cube API, and generating final answer.
            """
            try:
                answer = await run_chain(question, request)
                print(answer)
                return answer
            except Exception as e:
                print(f"Error: {e}")  # NOTE: the exception path falls through and returns None
return ChainTool() | AttributeError: 'str' object has no attribute 'log' | https://api.github.com/repos/langchain-ai/langchain/issues/15861/comments | 2 | 2024-01-11T03:44:47Z | 2024-04-18T16:33:09Z | https://github.com/langchain-ai/langchain/issues/15861 | 2,075,674,679 | 15,861 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code:
```
import os
from langchain_community.utilities.google_trends import GoogleTrendsAPIWrapper
os.environ['SERPAPI_API_KEY'] = ''
tool = GoogleTrendsAPIWrapper()
tool.run("Something that will yield an error like totally")
```
will yield this:
```
Traceback (most recent call last):
File "/home/ubuntu/Work/luc/langchain-google-trends-issue-1/langchain_google_trends_issue_1/main.py", line 9, in <module>
tool.run("Something that will yield an error like totally")
File "/home/ubuntu/Work/luc/langchain-google-trends-issue-1/.venv/lib/python3.10/site-packages/langchain_community/utilities/google_trends.py", line 68, in run
total_results = client.get_dict()["interest_over_time"]["timeline_data"]
KeyError: 'interest_over_time'
```
### Description
* I'm trying to use the Google Trends tool with some AI agent.
* Now the (not so smart) agent ran a query against Google Trends that did NOT return what the wrapper's implementation expected.
* I would expect the implementation to handle that case more gracefully instead of raising a `KeyError` (see the sketch below).
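Concretely, the kind of logic I would expect inside `GoogleTrendsAPIWrapper.run()` is a guarded lookup rather than direct indexing (just a sketch of the idea):
```python
response = client.get_dict()
timeline = response.get("interest_over_time", {}).get("timeline_data")
if not timeline:
    return "No good Google Trends result was found for that query."
total_results = timeline
```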
### System Info
# pyproject.toml
python = "^3.10"
langchain = "0.0.354"
pytest = "^7.4.4"
google-search-results = "^2.4.2"
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Google Trend utility makes assumptions on keys from response | https://api.github.com/repos/langchain-ai/langchain/issues/15859/comments | 4 | 2024-01-11T03:03:52Z | 2024-04-18T16:21:26Z | https://github.com/langchain-ai/langchain/issues/15859 | 2,075,635,675 | 15,859 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
Metadata
```python
doc.metadata["date_year_short"] = date_obj.strftime("%y") # 23
doc.metadata["date_year_long"] = date_obj.strftime("%Y") # 2023
doc.metadata["date_month"] = date_obj.strftime("%-m") # 12
doc.metadata["date_month_name"] = calendar.month_name[date_obj.month] # December
doc.metadata["date_day"] = date_obj.strftime("%-d") # 31
doc.metadata["date_uploaded"] = calendar.month_name[date_obj.month] + " " + date_obj.strftime("%Y") # January 2023
```
Self-Query Retriever + Pinecone DB Instantiation
```python
llm = ChatOpenAI(temperature=0)
vectorstore = Pinecone.from_existing_index(index_name="test", embedding=get_embedding_function())
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
"Information about when the document was created and where it was grabbed from.",
metadata_field_info,
)
# bancworks_docs[1359]
retriever.vectorstore.similarity_search_with_score(question)
```
### Description
I should be able to see my metadata kept in string form, exactly the way I created it, instead of being converted to datetime.
For example, my date_month_name field is "February 2023". It should not be converted to 2/1/2000.
### System Info
Docker image container, Python v3.11, Langchain v0.1.0
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | SelfQueryRetriever with Pinecone Automatically Converts String Metadata into DateTime | https://api.github.com/repos/langchain-ai/langchain/issues/15856/comments | 5 | 2024-01-11T02:23:09Z | 2024-04-19T16:30:32Z | https://github.com/langchain-ai/langchain/issues/15856 | 2,075,587,256 | 15,856 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
I wrote the code for categorization of prompt.
```python
def prompt_router(input, embeddings, prompt_templates, prompt_embeddings):
    query_embedding = embeddings.embed_query(input["query"])
    similarity = cosine_similarity([query_embedding], prompt_embeddings)[0]
    most_similar = prompt_templates[similarity.argmax()]
    print(most_similar)
    return PromptTemplate.from_template(most_similar)


def main_categorizer(message):
    global memory, entry
    formatted_history = memory.get_history_as_string()

    case1_template = """Description of what case1 does
Chat History:
{chat_history}
Here is a question:
{query}"""

    case2_template = """Description of what case2 does
Chat History:
{chat_history}
Here is a question:
{query}"""

    case3_template = """Description of what case3 does
Chat History:
{chat_history}
Here is a question:
{query}"""

    case4_template = """Description of what case4 does
Chat History:
{chat_history}
Here is a question:
{query}"""

    prompt_templates = [case1_template, case2_template, case3_template, case4_template]
    prompt_embeddings = embeddings.embed_documents(prompt_templates)

    chain = (
        {"query": RunnablePassthrough()}
        | RunnableLambda(prompt_router)
        | llm
        | StrOutputParser()
    )
```
### Description
Based on the document in https://python.langchain.com/docs/expression_language/cookbook/embedding_router,
I've tried to implement embedding router.
What I would like to do is add the conversation history to the case prompts, so that the router can also take the historical conversation into account when deciding which category the user prompt belongs to.
The part I am stuck on is where to feed the {chat_history} value into the chain, in the same way the query is inserted with `"query": RunnablePassthrough()` (see the sketch below).
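What I have been trying (my own sketch, so the exact wiring may well be wrong) is to pass both keys through the chain and bind the extra `prompt_router` arguments with `functools.partial`, so that the router's returned template is filled from the same input dict:
```python
from functools import partial
from operator import itemgetter

router = partial(
    prompt_router,
    embeddings=embeddings,
    prompt_templates=prompt_templates,
    prompt_embeddings=prompt_embeddings,
)

chain = (
    {"query": itemgetter("query"), "chat_history": itemgetter("chat_history")}
    | RunnableLambda(router)
    | llm
    | StrOutputParser()
)

chain.invoke({"query": message, "chat_history": formatted_history})
```
Is this the intended way, or is there a cleaner mechanism for feeding memory into an embedding router?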
### System Info
langchain==0.0.352
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | embedding router with conversation history | https://api.github.com/repos/langchain-ai/langchain/issues/15854/comments | 6 | 2024-01-11T01:33:54Z | 2024-01-11T02:34:53Z | https://github.com/langchain-ai/langchain/issues/15854 | 2,075,537,332 | 15,854 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Description
`AmadeusClosestAirport` contains a "hardcoded" call to `ChatOpenAI` (see [here](https://github.com/langchain-ai/langchain/blob/a06db53c37344b5a9906fbf656173c3421109398/libs/community/langchain_community/tools/amadeus/closest_airport.py#L50)), while it would make sense to use the same `llm` passed to the chain/agent when initialized.
In addition, this implies that `AmadeusToolkit` implicitly depends on `openai`, which should not be the case.
Example (source code from the [docs](https://python.langchain.com/docs/integrations/toolkits/amadeus))
```py
from langchain_community.agent_toolkits.amadeus.toolkit import AmadeusToolkit
# Set environmental variables here
import os
os.environ["AMADEUS_CLIENT_ID"] = "CLIENT_ID"
os.environ["AMADEUS_CLIENT_SECRET"] = "CLIENT_SECRET"
os.environ["OPENAI_API_KEY"] = "API_KEY"
# os.environ["AMADEUS_HOSTNAME"] = "production" or "test"
toolkit = AmadeusToolkit()
tools = toolkit.get_tools()
llm = OpenAI(temperature=0) # this can be any `BaseLLM`
agent = initialize_agent(
tools=tools,
llm=llm,
verbose=False,
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
)
# ==> agent calls `ChatOpenAI` regardless of `llm` <===
agent.run("What is the name of the airport in Cali, Colombia?")
```
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | AmadeusClosestAirport tool should accept any LLM | https://api.github.com/repos/langchain-ai/langchain/issues/15847/comments | 3 | 2024-01-10T22:22:03Z | 2024-01-12T12:00:49Z | https://github.com/langchain-ai/langchain/issues/15847 | 2,075,315,430 | 15,847 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Running the `WebBaseLoader` requires `bs4` (Beautiful Soup) to be installed, which is not mentioned in the docs.
https://github.com/langchain-ai/langchain/blob/21a153894917e530cbe82a778be6f9cf10c9ae5f/docs/docs/get_started/quickstart.mdx#L185C1-L194C1
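For reference, the missing step is just installing Beautiful Soup before running that snippet, e.g.:
```shell
pip install beautifulsoup4
```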
### Idea or request for content:
I think it should be mentioned just like `faiss` a few lines below. | DOC: Missing dependency when going through the Quickstart section | https://api.github.com/repos/langchain-ai/langchain/issues/15845/comments | 1 | 2024-01-10T21:47:23Z | 2024-01-11T03:32:56Z | https://github.com/langchain-ai/langchain/issues/15845 | 2,075,269,126 | 15,845 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Clicking on any of the `agent_types` [here](https://python.langchain.com/docs/modules/agents/agent_types) leads to faulty links with the following message:
Example from this link: https://python.langchain.com/docs/modules/agents/openai_tools
<img width="873" alt="image" src="https://github.com/langchain-ai/langchain/assets/8833114/d6d5268c-397a-4eda-9a07-e5ec4b4b2d13">
### Idea or request for content:
Update table to point to correct hyperlink i.e. https://python.langchain.com/docs/modules/agents/agent_types/openai_tools for the example above. | DOC: Page Not Found when clicking on different agent types in table | https://api.github.com/repos/langchain-ai/langchain/issues/15837/comments | 2 | 2024-01-10T18:42:33Z | 2024-01-24T20:11:42Z | https://github.com/langchain-ai/langchain/issues/15837 | 2,074,963,053 | 15,837 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation, with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field
from langchain.memory import ConversationBufferMemory
from langchain.prompts.chat import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder
from langchain.chains import LLMChain


class TennisPlayer(BaseModel):
    age: int = Field(description="Age of the player")
    nb_victories: int = Field(description="Nb of victories in major tournaments")


llm = None  # Instantiate the LLM here
parser = PydanticOutputParser(pydantic_object=TennisPlayer)

prompt = "You'll be asked information about a tennis player.\n" \
         "You'll answer with the following format:\n" \
         "{format_instructions}"

memory = ConversationBufferMemory(memory_key="chat_history", input_key="query", return_messages=True)
chat_prompt = ChatPromptTemplate.from_messages([SystemMessagePromptTemplate.from_template(prompt),
                                                MessagesPlaceholder(variable_name="chat_history"),
                                                HumanMessagePromptTemplate.from_template("{query}")])

chain = LLMChain(llm=llm, prompt=chat_prompt, memory=memory, output_parser=parser)
chain.invoke(input={"query": "Rafael Nadal", "format_instructions": parser.get_format_instructions()})
```
### Description
The previous code triggers an error while converting the output from the LLM to an AIMessage to place in the ConversationBufferMemory object. The problem is that it passes the constructed object (the output of PydanticOutputParser.parse) instead of the output message as a plain string.
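The workaround I am using for now (I am not sure it is the intended pattern) is to drop `output_parser` from the `LLMChain`, so the memory stores the raw string, and to parse afterwards:
```python
chain = LLMChain(llm=llm, prompt=chat_prompt, memory=memory)  # no output_parser here
raw = chain.invoke(input={"query": "Rafael Nadal",
                          "format_instructions": parser.get_format_instructions()})
player = parser.parse(raw["text"])  # parse the raw LLM output into a TennisPlayer
```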
### System Info
langchain 0.1.0
python 3.10.13
Windows 10
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Cannot combine an output parser and a conversation buffer memory | https://api.github.com/repos/langchain-ai/langchain/issues/15835/comments | 3 | 2024-01-10T18:08:47Z | 2024-04-18T16:33:06Z | https://github.com/langchain-ai/langchain/issues/15835 | 2,074,909,445 | 15,835 |
[
"hwchase17",
"langchain"
]
| ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation, with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
# Chunking the sentence with fixed size
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=20)
all_splits = text_splitter.split_documents(documents)
```
```
# Creating embeddings of the sentences and storing them in the graph DB
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
from langchain.vectorstores import Neo4jVector
model_name = "BAAI/bge-small-en"
model_kwargs = {"device": "cuda"}
embeddings = HuggingFaceBgeEmbeddings(model_name=model_name, model_kwargs=model_kwargs)
# storing embeddings in the vector store
vectorstore = Neo4jVector.from_documents(all_splits, embeddings)
```
```
# Instantiate Neo4j vector from documents
neo4j_new_index = Neo4jVector.from_documents(
documents,
HuggingFaceBgeEmbeddings(),
url=os.environ["NEO4J_URI"],
username=os.environ["NEO4J_USERNAME"],
password=os.environ["NEO4J_PASSWORD"]
)
```
```
ERROR:neo4j.io:Failed to write data to connection ResolvedIPv4Address(('34.126.171.25', 7687)) (ResolvedIPv4Address(('34.126.171.25', 7687)))
ERROR:neo4j.io:Failed to write data to connection IPv4Address(('07e87ccd.databases.neo4j.io', 7687)) (ResolvedIPv4Address(('34.126.171.25', 7687)))
ERROR:neo4j.io:Failed to write data to connection ResolvedIPv4Address(('34.126.171.25', 7687)) (ResolvedIPv4Address(('34.126.171.25', 7687)))
ERROR:neo4j.io:Failed to write data to connection IPv4Address(('07e87ccd.databases.neo4j.io', 7687)) (ResolvedIPv4Address(('34.126.171.25', 7687)))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-25-7220f62f84c8>](https://localhost:8080/#) in <cell line: 2>()
1 # Instantiate Neo4j vector from documents
----> 2 neo4j_new_index = Neo4jVector.from_documents(
3 documents,
4 HuggingFaceBgeEmbeddings(),
5 url=os.environ["NEO4J_URI"],
2 frames
[/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/neo4j_vector.py](https://localhost:8080/#) in __from(cls, texts, embeddings, embedding, metadatas, ids, create_id_index, search_type, **kwargs)
445 # If the index already exists, check if embedding dimensions match
446 elif not store.embedding_dimension == embedding_dimension:
--> 447 raise ValueError(
448 f"Index with name {store.index_name} already exists."
449 "The provided embedding function and vector index "
ValueError: Index with name vector already exists.The provided embedding function and vector index dimensions do not match.
Embedding function dimension: 1024
Vector index dimension: 384
```
### Description
The second `Neo4jVector.from_documents(...)` call fails with the same traceback shown above: the existing index named `vector` was created with 384-dimensional embeddings (BAAI/bge-small-en), while `HuggingFaceBgeEmbeddings()` with its default model produces 1024-dimensional vectors, so the dimensions no longer match.
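Two possible ways around this that I am considering (assumptions on my side, not verified): reuse the same 384-dimensional model for the second call, or write the new embeddings to a separate index via `index_name`.
```python
# Option 1: keep the dimensions consistent with the existing "vector" index (384 dims).
neo4j_new_index = Neo4jVector.from_documents(
    documents,
    HuggingFaceBgeEmbeddings(model_name="BAAI/bge-small-en", model_kwargs={"device": "cuda"}),
    url=os.environ["NEO4J_URI"],
    username=os.environ["NEO4J_USERNAME"],
    password=os.environ["NEO4J_PASSWORD"],
)

# Option 2: keep the default (1024-dim) model but target a brand-new index.
neo4j_new_index = Neo4jVector.from_documents(
    documents,
    HuggingFaceBgeEmbeddings(),
    url=os.environ["NEO4J_URI"],
    username=os.environ["NEO4J_USERNAME"],
    password=os.environ["NEO4J_PASSWORD"],
    index_name="vector_bge_large",  # avoid the dimension clash with the existing "vector" index
)
```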
### System Info
Windows: `11`
pip == `23.3.1`
python == `3.10.10`
langchain ==` 0.1.0`
transformers == `4.36.2`
sentence_transformers == `2.2.2`
Neo4j == `5`
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | ValueError: Index with name vector already exists.The provided embedding function and vector index dimensions do not match. | https://api.github.com/repos/langchain-ai/langchain/issues/15834/comments | 5 | 2024-01-10T18:02:46Z | 2024-01-12T12:40:02Z | https://github.com/langchain-ai/langchain/issues/15834 | 2,074,900,437 | 15,834 |