issue_owner_repo
listlengths 2
2
| issue_body
stringlengths 0
261k
⌀ | issue_title
stringlengths 1
925
| issue_comments_url
stringlengths 56
81
| issue_comments_count
int64 0
2.5k
| issue_created_at
stringlengths 20
20
| issue_updated_at
stringlengths 20
20
| issue_html_url
stringlengths 37
62
| issue_github_id
int64 387k
2.46B
| issue_number
int64 1
127k
|
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### System Info
google-cloud-bigquery = "^3.14.1"
google-api-core = "^2.15.0"
google-cloud-core = "^2.4.1"
grpcio = "^1.60.0"
grpcio-tools = "^1.60.0"
langchain-google-genai = "^0.0.5"
langchain-core = "^0.1.5"
google-cloud-aiplatform = "^1.38.1"
langchain-community = "^0.0.8"
### Who can help?
when test the BigQuery Vector Search as website demo,https://python.langchain.com/docs/integrations/vectorstores/bigquery_vector_search
it caused Not found: Table xxx.xxx.INFORMATION_SCHEMA.VECTOR_INDEXES was not found in location US. in google big-query
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
the procedures steps are in the page of https://python.langchain.com/docs/integrations/vectorstores/bigquery_vector_search
,official langchain demo page.
### Expected behavior
it was official langchain demo page. and result should be shown but error.
probably something went wrong with big-query, please help, thanks a lot! | INFORMATION_SCHEMA.VECTOR_INDEXES was not found in google big-query | https://api.github.com/repos/langchain-ai/langchain/issues/15538/comments | 5 | 2024-01-04T11:51:00Z | 2024-04-16T16:18:40Z | https://github.com/langchain-ai/langchain/issues/15538 | 2,065,516,520 | 15,538 |
[
"hwchase17",
"langchain"
] | ### Feature request
When using asynchronous loading the `RecursiveUrlLoader`, it would be nice to be able to set a limit for the number of parallel HTTP requests when scraping a website.
Right now, when using async loading it is very likely to get errors like the following:
```
04-01-24 12:02:53 [WARNING] recursive_url_loader.py: Unable to load https://docs.llamaindex.ai/en/stable/module_guides/querying/querying.html.
Received error Cannot connect to host docs.llamaindex.ai:443 ssl:default [Network is unreachable] of type ClientConnectorError
```
### Motivation
This feature allows to:
- Reduce the probability of receiving an error due to the excessive number of requests.
- Make the loader more robust.
### Your contribution
I have already implemented and tested a solution using `asyncio.Semaphore`. This class allows to set a limit for the maximum number of parallel tasks that can be performed in a program. | feat: limit the number of concurrent requests in the RecursiveUrlLoader | https://api.github.com/repos/langchain-ai/langchain/issues/15536/comments | 1 | 2024-01-04T11:08:15Z | 2024-04-11T16:14:09Z | https://github.com/langchain-ai/langchain/issues/15536 | 2,065,453,597 | 15,536 |
[
"hwchase17",
"langchain"
] | ### System Info
Python version: 3.9.7
Langchain version: 0.0.352
Argilla version: 1.20.0
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
ArgillaCallbackHandler does not set the DEFAULT_API_KEY properly while initiliazing. This might cause problems with some setups. Link to line: https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/callbacks/argilla_callback.py#L137
### Expected behavior
DEFAULT_API_KEY to be correctly set. | ArgillaCallback doesn't properly set DEFAULT_API_KEY | https://api.github.com/repos/langchain-ai/langchain/issues/15531/comments | 1 | 2024-01-04T09:53:47Z | 2024-04-11T16:18:21Z | https://github.com/langchain-ai/langchain/issues/15531 | 2,065,338,280 | 15,531 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.351
langchain-community 0.0.4
python 3.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain_community.chat_models.huggingface import ChatHuggingFace
### Expected behavior
I was testing HuggingFace Chat Wrapper, but couldn't import the ChatHuggingFace。Has ChatHuggingFace changed paths? | The ChatHuggingFace package cannot be found. "from langchain_community.chat_models.huggingface import ChatHuggingFace",Has ChatHuggingFace changed paths? | https://api.github.com/repos/langchain-ai/langchain/issues/15530/comments | 3 | 2024-01-04T09:24:18Z | 2024-04-11T16:14:09Z | https://github.com/langchain-ai/langchain/issues/15530 | 2,065,294,377 | 15,530 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
below is my code where I am implementing Memory with Prompt Template
def generate_custom_prompt(query=None,name=None,not_uuid=None,chroma_db_path=None):
check = query.lower()
embedding = OpenAIEmbeddings()
vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
retriever = vectordb.as_retriever(search_kwargs={"k": 2})
relevant_document = retriever.get_relevant_documents(query)
print(relevant_document,"*****************************************")
context_text="\n\n---\n\n".join([doc.page_content for doc in relevant_document])
# print(context_text,"context_text")
user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
greetings = ['hi', 'hello', 'hey', 'hui', 'hiiii', 'hii', 'hiii', 'heyyy']
if check in greetings:
custom_prompt_template = f"""
Just simply reply with "Hello {name}! How can I assist you today?"
"""
elif check not in greetings and user_experience_inst.custom_prompt:
custom_prompt_template = f"""Answer the question based only on following context: ```{context_text} ```
You are a chatbot designed to provide answers to User's Questions:```{check}```, delimited by triple backticks.
Generate your answer to match the user's requirements: {user_experience_inst.custom_prompt}
If you encounter a question for which you don't know the answer, please respond with 'I don't know' and refrain from making up an answer.
Before saying 'I don't know,' please re-verify your vector store to ensure the answer is not present in the database.
Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, feel free to ask for clarification.
User's Question: ```{check}```
AI Answer:"""
else:
# Create the custom prompt template
custom_prompt_template = f"""Generate your response exclusively from the provided context: {{context_text}}. You function as a chatbot specializing in delivering detailed answers to the User's Question: ```{{check}} ```, enclosed within triple backticks.
Generate your answer in points in the following format:
1. Point no 1
1.1 Its subpoint in details
1.2 More information if needed.
2. Point no 2
2.1 Its subpoint in details
2.2 More information if needed.
…
N. Another main point.
If you encounter a question for which you don't know the answer based on the predefined points, please respond with 'I don't know' and refrain from making up an answer.
However, if the answer is not present in the predefined points, then Provide comprehensive information related to the user's query.
Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, you can ask for clarification.
User's Question: ```{{check}} ```
AI Answer:"""
# Create the PromptTemplate
custom_prompt = ChatPromptTemplate.from_template(custom_prompt_template)
formatted_prompt = custom_prompt.format(context_text=context_text, check=check)
llm = ChatOpenAI(temperature=0.1)
memory = ConversationBufferMemory(llm=llm,output_key='answer',memory_key='chat_history',return_messages=True)
# qa = ConversationalRetrievalChain.from_llm(llm=llm,memory=memory,chain_type="stuff",retriever=retriever,return_source_documents=True,get_chat_history=lambda h : h,verbose=True)
qa = ConversationalRetrievalChain.from_llm(llm=llm,memory=memory,chain_type="stuff",retriever=retriever,return_source_documents=True,get_chat_history=lambda h : h,verbose=True, combine_docs_chain_kwargs={"prompt": formatted_prompt})
# prompt_qa={"qa": qa, "formatted_prompt": formatted_prompt}
return qa
#below is my error I am getting:
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for StuffDocumentsChain__root__
document_variable_name context was not found in llm_chain input_variables: ['check', 'context_text'] (type=value_error)
### Suggestion:
_No response_ | Issue: document_variable_name context was not found in llm_chain input_variables: | https://api.github.com/repos/langchain-ai/langchain/issues/15528/comments | 4 | 2024-01-04T08:18:07Z | 2024-06-08T16:08:40Z | https://github.com/langchain-ai/langchain/issues/15528 | 2,065,203,802 | 15,528 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
from langchain.chains import LLMChain
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
)
import psycopg2
import os
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from sentence_transformers import SentenceTransformer
os.environ['OPENAI_API_KEY'] = "key"
database = "MemoryChatBot"
user = "xxxxxxx"
password = "xxx"
host = "1x2.xxx.0x.xx"
port = "5432"
conn = psycopg2.connect(database=database, user=user, password=password, host=host, port=port)
print("ok!")
llm = ChatOpenAI(
temperature=0.7,
model="gpt-3.5-turbo",
max_tokens=100
)
prompt = ChatPromptTemplate(
messages=[
SystemMessagePromptTemplate.from_template(
"you are robot."
),
MessagesPlaceholder(variable_name="chat_history"),
HumanMessagePromptTemplate.from_template("{question}"),
]
)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
conversation = LLMChain(llm=llm, prompt=prompt, verbose=False, memory=memory)
all_history = []
while True:
chat_history = memory.load_memory_variables({})
question = input('user:')
result = conversation.run({'question': question, 'chat_history': chat_history})
memory.save_context({question: question}, {result: result})
talk_all = conversation.memory.buffer
all_history.append(talk_all)
print(result)
if question.lower() == 'bye':
st_history = ' '.join(map(str, all_history))
model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_MiniLM-L12') #384emb
# model = SentenceTransformer('BAAI/bge-large-zh-v1.5') #1024emb
res = model.encode(st_history)
model.query_instruction = "test"
res_str = str(res.tolist())
cursor = conn.cursor()
sql = f"INSERT INTO tmp04 (embedding) VALUES ('{res_str}')"
cursor.execute(sql)
conn.commit()
print(f'embedding: {res[:4].tolist()}...')
print("ok!")
break

### Suggestion:
Hope for more support packages for pgvector.
| Issue: <How can I extract vector data from pgvector for use as a reference in the next conversation to enable long-term memory functionality for my chatbot?> | https://api.github.com/repos/langchain-ai/langchain/issues/15527/comments | 4 | 2024-01-04T08:14:43Z | 2024-04-11T16:14:05Z | https://github.com/langchain-ai/langchain/issues/15527 | 2,065,199,886 | 15,527 |
[
"hwchase17",
"langchain"
] | ### System Info
0.0.352 and 0.0.353
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Custom _call wrapper
### Expected behavior
No import error | ImportError: cannot import name 'tracing_enabled' from 'langchain_core.tracers.context' (/usr/local/lib/python3.11/site-packages/langchain_core/tracers/context.py | https://api.github.com/repos/langchain-ai/langchain/issues/15526/comments | 5 | 2024-01-04T08:08:19Z | 2024-01-04T19:50:14Z | https://github.com/langchain-ai/langchain/issues/15526 | 2,065,192,495 | 15,526 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
from langchain.chains import LLMChain
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
)
import psycopg2
import os
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from sentence_transformers import SentenceTransformer
os.environ['OPENAI_API_KEY'] = "key"
database = "MemoryChatBot"
user = "xxxxxxx"
password = "xxx"
host = "1x2.xxx.0x.xx"
port = "5432"
conn = psycopg2.connect(database=database, user=user, password=password, host=host, port=port)
print("ok!")
llm = ChatOpenAI(
temperature=0.7,
model="gpt-3.5-turbo",
max_tokens=100
)
prompt = ChatPromptTemplate(
messages=[
SystemMessagePromptTemplate.from_template(
"you are robot."
),
MessagesPlaceholder(variable_name="chat_history"),
HumanMessagePromptTemplate.from_template("{question}"),
]
)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
conversation = LLMChain(llm=llm, prompt=prompt, verbose=False, memory=memory)
all_history = []
while True:
chat_history = memory.load_memory_variables({})
question = input('user:')
result = conversation.run({'question': question, 'chat_history': chat_history})
memory.save_context({question: question}, {result: result})
talk_all = conversation.memory.buffer
all_history.append(talk_all)
print(result)
if question.lower() == 'bye':
st_history = ' '.join(map(str, all_history))
model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_MiniLM-L12') #384emb
# model = SentenceTransformer('BAAI/bge-large-zh-v1.5') #1024emb
res = model.encode(st_history)
model.query_instruction = "test"
res_str = str(res.tolist())
cursor = conn.cursor()
sql = f"INSERT INTO tmp04 (embedding) VALUES ('{res_str}')"
cursor.execute(sql)
conn.commit()
print(f'embedding: {res[:4].tolist()}...')
print("ok!")
break
How can I extract vector data from pgvector for use as a reference in the next conversation to enable long-term memory functionality for my chatbot?

### Suggestion:
The usage of pgvector's Retrievers is unclear.
Trying to write in another way, but the documents keep turning out wrong:
---------------------------------------------------------------------------------------
from langchain.chains import LLMChain
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
)
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain_community.vectorstores.pgvector import PGVector
from langchain.vectorstores.pgvector import DistanceStrategy
from langchain_community.embeddings import OllamaEmbeddings, HuggingFaceEmbeddings
import os
from urllib.parse import quote_plus
os.environ['OPENAI_API_KEY'] = "xxxxxx"
database = "xxx"
user = "xxxxx"
password = quote_plus("x@xxx")
host = "xx.xxx.0.xx"
port = "5432"
print("ok")
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
CONNECTION_STRING = f"postgresql+psycopg2://{user}:{password}@{host}:{port}/{database}"
documents = []
db = PGVector.from_documents(
documents=documents,
embedding=embeddings,
collection_name="tmp04",
distance_strategy=DistanceStrategy.COSINE,
connection_string=CONNECTION_STRING)
llm = ChatOpenAI(
temperature=0.7,
model="gpt-3.5-turbo",
max_tokens=100
)
prompt = ChatPromptTemplate(
messages=[
SystemMessagePromptTemplate.from_template(
"you are robot."
),
MessagesPlaceholder(variable_name="chat_history"),
HumanMessagePromptTemplate.from_template("{question}"),
]
)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
conversation = LLMChain(llm=llm, prompt=prompt, verbose=False, memory=memory)
while True:
chat_history = memory.load_memory_variables({})
question = input('ask:')
embedding = embeddings.embed_query(question)
documents.append(embedding)
print(documents)
result = conversation.run({'question': question, 'chat_history': chat_history, })
memory.save_context({question: question}, {result: result})
talk_all = conversation.memory.buffer
documents.append(talk_all)
print(result)
if question.lower() == 'bye':
print("ok")
break
| Issue: <"How can I extract vector data from pgvector for use as a reference in the next conversation to enable long-term memory functionality for my chatbot?> | https://api.github.com/repos/langchain-ai/langchain/issues/15525/comments | 2 | 2024-01-04T07:51:08Z | 2024-05-20T16:08:11Z | https://github.com/langchain-ai/langchain/issues/15525 | 2,065,169,209 | 15,525 |
[
"hwchase17",
"langchain"
] | ### System Info
python 3.9.13
langchain==0.0.316
langchain-community==0.0.1
langchain-core==0.0.13
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
import pprint
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import AsyncChromiumLoader
from langchain_community.document_transformers import BeautifulSoupTransformer
from langchain.chains import create_extraction_chain
from langchain_community.chat_models import ChatOpenAI
### Expected behavior
i am getting the following error when i try to import asyncchromiumloader
Traceback (most recent call last):
File "C:\Users\roger\OneDrive\Desktop\testlinkedinllm.py", line 5, in <module>
from langchain_community.document_loaders import AsyncChromiumLoader
File "C:\Users\roger\anaconda3new\lib\site-packages\langchain_community\document_loaders\__init__.py", line 51, in <module>
from langchain_community.document_loaders.blackboard import BlackboardLoader
File "C:\Users\roger\anaconda3new\lib\site-packages\langchain_community\document_loaders\blackboard.py", line 10, in <module>
from langchain_community.document_loaders.pdf import PyPDFLoader
File "C:\Users\roger\anaconda3new\lib\site-packages\langchain_community\document_loaders\pdf.py", line 18, in <module>
from langchain_community.document_loaders.parsers.pdf import (
File "C:\Users\roger\anaconda3new\lib\site-packages\langchain_community\document_loaders\parsers\__init__.py", line 5, in <module>
from langchain_community.document_loaders.parsers.language import LanguageParser
File "C:\Users\roger\anaconda3new\lib\site-packages\langchain_community\document_loaders\parsers\language\__init__.py", line 1, in <module>
from langchain_community.document_loaders.parsers.language.language_parser import (
File "C:\Users\roger\anaconda3new\lib\site-packages\langchain_community\document_loaders\parsers\language\language_parser.py", line 24, in <module>
"cobol": Language.COBOL,
File "C:\Users\roger\anaconda3new\lib\enum.py", line 429, in __getattr__
raise AttributeError(name) from None
AttributeError: COBOL | AsyncChromiumloader gives attribute error : COBOL | https://api.github.com/repos/langchain-ai/langchain/issues/15524/comments | 6 | 2024-01-04T07:46:03Z | 2024-02-22T00:38:56Z | https://github.com/langchain-ai/langchain/issues/15524 | 2,065,163,019 | 15,524 |
[
"hwchase17",
"langchain"
] | https://github.com/langchain-ai/langchain/blob/7a93356cbc5d89cc0f7dd746d8f1bb52666fd0f1/libs/community/langchain_community/document_loaders/chromium.py#L78C40-L78C44
Hello,
I encountered a RuntimeError when running the code that uses the AsyncChromiumLoader class. The error message is asyncio.run() cannot be called from a running event loop.
Here is the relevant part of the traceback:
```python
File ~/.../.venv/lib/python3.10/site-packages/langchain_community/document_loaders/chromium.py:78, in AsyncChromiumLoader.lazy_load(self)
77 for url in self.urls:
---> 78 html_content = asyncio.run(self.ascrape_playwright(url))
```
It seems that asyncio.run() is being called inside a running event loop, which is not allowed. This happens in the lazy_load method of the AsyncChromiumLoader class.
I think the issue could be resolved by refactoring the code to ensure that asyncio.run() is not called from a running event loop. One possible solution could be to use await instead of asyncio.run() to run the self.ascrape_playwright(url) coroutine, and then use asyncio.run() to run the lazy_load method in the main program.
Could you please look into this issue and confirm if this is a bug or if there's something I'm missing in my usage of the AsyncChromiumLoader class?
Thank you for your time and help. | RuntimeError when calling asyncio.run() from a running event loop | https://api.github.com/repos/langchain-ai/langchain/issues/15523/comments | 6 | 2024-01-04T07:35:01Z | 2024-06-15T16:06:51Z | https://github.com/langchain-ai/langchain/issues/15523 | 2,065,149,238 | 15,523 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
The current support for Milvus in Langchain seems insufficient in my opinion. Can it be strengthened?
### Suggestion:
The current support for Milvus in Langchain seems insufficient in my opinion. Can it be strengthened? | The current support for Milvus in Langchain seems insufficient in my opinion. Can it be strengthened? | https://api.github.com/repos/langchain-ai/langchain/issues/15522/comments | 1 | 2024-01-04T06:29:52Z | 2024-04-11T16:20:13Z | https://github.com/langchain-ai/langchain/issues/15522 | 2,065,084,378 | 15,522 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I have been trying to use mixtral-7B as LLM agent with langchain. Agent has been provided a PythonREPL tool for any kind of code execution.
While providing the output, it either runs into agent timeout error or else it provide wrong answer (but in correct format). By further analysis I got to know that It is trying to use the REPL tool but due to invalid input and output it is unable to.
please find the debug logs for whole LLM chain attached:
[debug.txt](https://github.com/langchain-ai/langchain/files/13826496/debug.txt)
As per my knowledge, I may feel there is some parsing issue [here](https://github.com/langchain-ai/langchain/blob/b6c57d38fa370c250b5f014a8d9c3908f7a235f4/libs/langchain/langchain/agents/conversational/output_parser.py#L25).
It is unable to parse the agent input properly due to the occurrence of the \n post agent input in the LLM response. This is just a thought. I am open to any other suggestions as well.
Hence, I wanted to add the custom parsing to LLM Output, So that I may able to parse Action & Action Input Accordingly. I have tried using the custom parser to achieve same, but I am unable to view any changes in the logs.
Configurations for custom parser are mentioned below:
```
from langchain.schema.output_parser import BaseLLMOutputParser
class MyOutputParser(BaseLLMOutputParser):
def __init__(self):
super().__init__()
def parse_result(self, output):
output = output.replace("Action Input:\n", "Action Input: ")
return output
```
**Agent Configurations**:
```
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.utilities import SerpAPIWrapper
memory = ConversationBufferMemory(memory_key="chat_history",input_key="input")
con_agent = initialize_agent(
tools,
llm,
agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
verbose=True,
memory=memory,
handle_parsing_errors=True,
output_parser=MyOutputParser(),
agent_kwargs={'input_variables': ['input','chat_history','data'],
'format_instructions': INSTRUCTIONS,
'suffix': SUFFIX, 'prefix': PREFIX}
)
```
### Suggestion:
_No response_ | Issue: Unable to use custom parser to parse the (intermediate) LLM chain output | https://api.github.com/repos/langchain-ai/langchain/issues/15521/comments | 1 | 2024-01-04T05:18:04Z | 2024-04-11T16:14:02Z | https://github.com/langchain-ai/langchain/issues/15521 | 2,065,015,603 | 15,521 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain version 0.0.353.
python : 3.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
<img width="685" alt="image" src="https://github.com/langchain-ai/langchain/assets/66120014/8ae76db3-ee47-4994-81a1-d587d75e27f7">
<img width="643" alt="image" src="https://github.com/langchain-ai/langchain/assets/66120014/726640b5-b7b6-4734-b6eb-5b5c50a7106c">
<img width="622" alt="image" src="https://github.com/langchain-ai/langchain/assets/66120014/011d05f2-1a6f-4fc1-abd1-08a8241c4a49">
### Expected behavior
Should be able to load the module | text_splitter module not found in langchain version 0.0.353. | https://api.github.com/repos/langchain-ai/langchain/issues/15520/comments | 2 | 2024-01-04T05:02:15Z | 2024-04-28T16:25:28Z | https://github.com/langchain-ai/langchain/issues/15520 | 2,065,004,679 | 15,520 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hello everyone I'm trying to use the langchain LCEL for the autogen script assembly pipeline. The first point that I'm trying to implement is for AI to determine which roles are needed in the autogen group chat to solve the user's task. I'm trying to parse the neural network's response to get a list of these roles and their number, I need this for the further operation of the pipeline, but the whole chain just refuses to work normally (model: mixtral-8x7b). Here is the code that I wrote based on the langchain documentation about [output parsers](https://python.langchain.com/docs/modules/model_io/output_parsers/quick_start):
```
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from langchain_community.llms import OpenAI
from langchain_core.pydantic_v1 import BaseModel, Field, validator
from typing import List
model = llm
# Define your desired data structure.
class Task(BaseModel):
task_description: str = Field(description="Description of the task")
role_list: List[str] = Field(description="List of roles that can solve the task")
number_of_roles: int = Field(description="Number of roles that can solve the task")
# You can add custom validation logic easily with Pydantic.
@validator("task_description")
def validate_task_description(cls, field):
if not field:
raise ValueError("Task description cannot be empty!")
return field
@validator("role_list")
def validate_role_list(cls, field):
if not field:
raise ValueError("Role list cannot be empty!")
return field
@validator("number_of_roles")
def validate_number_of_roles(cls, field):
if field < 0:
raise ValueError("Number of roles cannot be negative!")
return field
# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Task)
prompt = PromptTemplate(
template="Enter your task description:\n{format_instructions}\n{task_description}\n",
input_variables=["task_description"],
partial_variables={"format_instructions": parser.get_format_instructions()},
)
# And a query intended to prompt a language model to populate the data structure.
prompt_and_model = prompt | model
output = prompt_and_model.invoke({"task_description": "Write code for game on python."})
parser.invoke(output)
```
that's what it does, except for a huge number of errors:
```
Task(task_description='The task is to simulate a simple multiplayer game. Players take turns to play. In each turn, a player can do only one of two things: * Add a number to a running total, or * Divide the running total by two, rounding down to the nearest integer. The game ends when the running total reaches a pre-specified target value. ', role_list=['admin', 'player'], number_of_roles=2)
```
here is one of the errors:
```
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
File ~\AppData\Roaming\Python\Python311\site-packages\langchain\output_parsers\pydantic.py:29, in PydanticOutputParser.parse(self, text)
28 json_str = match.group()
---> 29 json_object = json.loads(json_str, strict=False)
30 return self.pydantic_object.parse_obj(json_object)
File C:\Program Files\Python311\Lib\json\__init__.py:359, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
358 kw['parse_constant'] = parse_constant
--> 359 return cls(**kw).decode(s)
File C:\Program Files\Python311\Lib\json\decoder.py:337, in JSONDecoder.decode(self, s, _w)
333 """Return the Python representation of ``s`` (a ``str`` instance
334 containing a JSON document).
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
File C:\Program Files\Python311\Lib\json\decoder.py:353, in JSONDecoder.raw_decode(self, s, idx)
352 try:
--> 353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
Cell In[76], line 47
45 prompt_and_model = prompt | model
46 output = prompt_and_model.invoke({"task_description": "Write code for game on python."})
---> 47 parser.invoke(output)
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\output_parsers\base.py:179, in BaseOutputParser.invoke(self, input, config)
170 return self._call_with_config(
171 lambda inner_input: self.parse_result(
172 [ChatGeneration(message=inner_input)]
(...)
176 run_type="parser",
177 )
178 else:
--> 179 return self._call_with_config(
180 lambda inner_input: self.parse_result([Generation(text=inner_input)]),
181 input,
182 config,
183 run_type="parser",
184 )
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\runnables\base.py:886, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
879 run_manager = callback_manager.on_chain_start(
880 dumpd(self),
881 input,
882 run_type=run_type,
883 name=config.get("run_name"),
884 )
885 try:
--> 886 output = call_func_with_variable_args(
887 func, input, config, run_manager, **kwargs
888 )
889 except BaseException as e:
890 run_manager.on_chain_error(e)
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\runnables\config.py:308, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
306 if run_manager is not None and accepts_run_manager(func):
307 kwargs["run_manager"] = run_manager
--> 308 return func(input, **kwargs)
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\output_parsers\base.py:180, in BaseOutputParser.invoke.<locals>.<lambda>(inner_input)
170 return self._call_with_config(
171 lambda inner_input: self.parse_result(
172 [ChatGeneration(message=inner_input)]
(...)
176 run_type="parser",
177 )
178 else:
179 return self._call_with_config(
--> 180 lambda inner_input: self.parse_result([Generation(text=inner_input)]),
181 input,
182 config,
183 run_type="parser",
184 )
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_core\output_parsers\base.py:222, in BaseOutputParser.parse_result(self, result, partial)
209 def parse_result(self, result: List[Generation], *, partial: bool = False) -> T:
210 """Parse a list of candidate model Generations into a specific format.
211
212 The return value is parsed from only the first Generation in the result, which
(...)
220 Structured output.
221 """
--> 222 return self.parse(result[0].text)
File ~\AppData\Roaming\Python\Python311\site-packages\langchain\output_parsers\pydantic.py:35, in PydanticOutputParser.parse(self, text)
33 name = self.pydantic_object.__name__
34 msg = f"Failed to parse {name} from completion {text}. Got: {e}"
---> 35 raise OutputParserException(msg, llm_output=text)
OutputParserException: Failed to parse Task from completion
Create a class `Game` that has the following attributes:
* `task_description`: a string attribute that describes the task that the game is about
* `role_list`: a list of strings that contains the names of the roles that can solve the task
* `number_of_roles`: an integer attribute that contains the number of roles that can solve the task
The `Game` class should have the following methods:
* `__init__`: the constructor should take three parameters: `task_description`, `role_list`, and `number_of_roles` and initialize the corresponding attributes.
* `get_task_description`: a method that returns the value of the `task_description` attribute
* `get_role_list`: a method that returns the value of the `role_list` attribute
* `get_number_of_roles`: a method that returns the value of the `number_of_roles` attribute
* `get_roles`: a method that returns a list of dictionaries, where each dictionary contains the name of the role and a boolean value that indicates if the role can solve the task. The list should contain `number_of_roles` dictionaries.
Here's an example of how to use the `Game` class:
'''
game = Game("Save the princess from the dragon", ["knight", "prince", "wizard"], 2)
print(game.get_task_description())
print(game.get_role_list())
print(game.get_number_of_roles())
roles = game.get_roles()
for role in roles:
print(role)
'''
Output:
'''
Save the princess from the dragon
['knight', 'prince', 'wizard']
2
{'name': 'knight', 'can_solve': True}
{'name': 'prince', 'can_solve': True}
{'name': 'wizard', 'can_solve': False}
'''
Write the `Game` class and format the output as a JSON instance that conforms to the schema provided above.
'''json
{
"task_description": "Save the princess from the. Got: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
```
### Suggestion:
_No response_ | Issue: LCEL output parser error | https://api.github.com/repos/langchain-ai/langchain/issues/15518/comments | 7 | 2024-01-04T02:26:08Z | 2024-02-05T17:20:35Z | https://github.com/langchain-ai/langchain/issues/15518 | 2,064,893,373 | 15,518 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Problem Statement:
I am currently working on tracking token consumption for asynchronous chain calls in my application. I am utilizing the AsyncIteratorCallbackHandler and its aiter() method to stream tokens to my client. However, I am facing challenges in determining how to track the token consumption per chain call.
Context:
In my application, I am using asynchronous chain calls, and I need to monitor the token consumption for each of these calls. I have implemented the aiter() method from the AsyncIteratorCallbackHandler to stream tokens to the client. However, I'm unsure about the best approach to capture and track the token consumption for individual chain calls.
Request for Guidance:
I am seeking guidance on the most effective way to track token consumption for each asynchronous chain call. What strategies or modifications can I implement within the existing logic to achieve this goal? Any insights or recommendations would be greatly appreciated.
Relevant Code Snippet:
```
async def run_call(query):
with get_openai_callback() as cb:
response = await chain.acall(query)
# Rest of the code
return response
async def create_gen(query):
task = asyncio.create_task(run_call(query))
try:
async for token in handler.aiter():
print("2. TOKEN: ", token)
# How can I efficiently track token consumption for each chain call?
yield f"data: {json.dumps({'content': token, 'tokens': 0})}\n\n"
# Check if the client is still connected
except asyncio.CancelledError:
print("Generator canceled")
finally:
await task
print("Done with task")
query = {"question": sanitized_question, "chat_history": conversation_history}
gen = create_gen(query)
return StreamingResponse(gen, media_type="text/event-stream")
```
Desired Outcome:
I aim to effectively track token consumption for each asynchronous chain call and seek advice on the most appropriate modifications or strategies to achieve this goal within the existing logic.
### Suggestion:
_No response_ | Issue: Tracking Token Consumption for Async Chain Calls | https://api.github.com/repos/langchain-ai/langchain/issues/15517/comments | 1 | 2024-01-04T01:43:22Z | 2024-04-11T16:07:51Z | https://github.com/langchain-ai/langchain/issues/15517 | 2,064,865,668 | 15,517 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.354
text_generation==0.6.1
python:3.10-slim
### Who can help?
@agola11 @hwaking
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
Model: TheBloke/Llama-2-7B-Chat-GPTQ, but I've also tried TheBloke/Mistral-7B-OpenOrca-GPTQ
FastAPI example with HuggingFaceTextGenInference streaming:
```python
from fastapi import FastAPI
import langchain
from langchain.llms import HuggingFaceTextGenInference
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
import os
from fastapi.middleware.cors import CORSMiddleware
app = FastAPI()
langchain.debug = True
# Enable CORS
app.add_middleware(
CORSMiddleware,
allow_origins=["*"], # You can specify the list of allowed origins or use "*" for any origin
allow_credentials=True,
allow_methods=["*"], # You can specify the HTTP methods that are allowed
allow_headers=["*"], # You can specify the HTTP headers that are allowed
)
# Configuration for local LLM
ai_url = "http://tgi-ai-server:" + str(os.getenv("AI_PORT", 80)) + "/generate"
# Configure the LLM
llm = HuggingFaceTextGenInference(
inference_server_url=ai_url,
max_new_tokens=20,
streaming=True,
)
template = """
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{prompt}[/INST]
"""
prompt_template = PromptTemplate(
template=template,
input_variables= ["prompt"]
)
# Initialize the LLM Chain
llm_chain = LLMChain(llm=llm, prompt=prompt_template)
@app.get("/chat")
async def chat():
prompt = {"prompt":"What is the Nickelodeon channel?"}
# Generate the response using the LLM Chain and stream the output
async def generate():
for text in llm_chain.run(prompt):
yield text
return StreamingResponse(generate(), media_type="text/plain")
# Run the server (if running this script directly)
# Use the command: uvicorn script_name:app --reload
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)
```
Output:
```
[chain/start] [1:chain:LLMChain] Entering Chain run with input:
{
"prompt": "What is the Nickelodeon channel?"
}
[llm/start] [1:chain:LLMChain > 2:llm:HuggingFaceTextGenInference] Entering LLM run with input:
{
"prompts": [
"[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\nWhat is the Nickelodeon channel?[/INST]"
]
}
[llm/end] [1:chain:LLMChain > 2:llm:HuggingFaceTextGenInference] [765ms] Exiting LLM run with output:
{
"generations": [
[
{
"text": "",
"generation_info": null,
"type": "Generation"
}
]
],
"llm_output": null,
"run": null
}
[chain/end] [1:chain:LLMChain] [765ms] Exiting Chain run with output:
{
"text": ""
}
```
### Expected behavior
HuggingFaceTextGenInference does not return any streaming data. Works fine when streaming=False in parameters. | HuggingFaceTextGenInference Streaming does not output | https://api.github.com/repos/langchain-ai/langchain/issues/15516/comments | 8 | 2024-01-04T01:13:21Z | 2024-01-23T00:01:11Z | https://github.com/langchain-ai/langchain/issues/15516 | 2,064,847,688 | 15,516 |
[
"hwchase17",
"langchain"
] | ### System Info
Windows 10 & Ubuntu 22.04
langchain==0.0.354
langchain-community==0.0.8
langchain-core==0.1.5
Python 3.10.13
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/textgen.py
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
With the latest TextGen WebUI install the API endpoints are openAI
Lanchain has the endpoints:
Example Line 213
url = f"{self.model_url}/api/v1/generate"
when used this returns a 404.
What works is
url = f"{self.model_url}/v1/chat/completions"
A fix was attempted: https://gist.github.com/ddomen/8eaa49879d42a4a42a243437b5ddfa83
It works for me if I set legacy_api=False,
but his truncates responses to about 20 or so characters.
### Expected behavior
certianly not a 404.
My app that is using langchain was working with an install of TextGen fomr like amonth ago. I went to deploy in a new environment and pulled the latest TextGen and Langchain stopped working. When I dug into the problem I saw they now force an OpenAI API interface. | Langchain-Community LLM TextGen has wrong API endpoint | https://api.github.com/repos/langchain-ai/langchain/issues/15512/comments | 2 | 2024-01-04T00:12:34Z | 2024-04-11T16:15:15Z | https://github.com/langchain-ai/langchain/issues/15512 | 2,064,809,247 | 15,512 |
[
"hwchase17",
"langchain"
] | In cookbook 3 for multimodal retrieval, `limit = 6` is set while retireving documents but the number of returned documents is always 4, regardless of the asked query or the value of `limit`. How can I retrieve `top_k` documents in this code? [Specific line is here](https://github.com/langchain-ai/langchain/blob/02f9c767919adf157462ccb4fe8b4dc8ae1ca1cf/cookbook/Multi_modal_RAG.ipynb#L633) | Is The Limit Parameter Used to Retrieve Top_k? | https://api.github.com/repos/langchain-ai/langchain/issues/15511/comments | 1 | 2024-01-03T23:56:40Z | 2024-04-11T16:16:13Z | https://github.com/langchain-ai/langchain/issues/15511 | 2,064,798,810 | 15,511 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I've been playing around with the multimodal notebooks introduced in the [docs here.](https://blog.langchain.dev/semi-structured-multi-modal-rag/). However, the number of retrieved documents for every query is always 4. Specifically, for [cookbook 3](https://github.com/langchain-ai/langchain/blob/master/cookbook/Multi_modal_RAG.ipynb) the `limit` parameter is set to 6, but the number of retrieved documents is always 4 regardless of the value of `limit`. Is there another way to get `top_k`?
### Suggestion:
_No response_ | Can't Specify Top-K retrieved Documents in Multimodal Retrievers | https://api.github.com/repos/langchain-ai/langchain/issues/15510/comments | 3 | 2024-01-03T23:52:17Z | 2024-06-19T03:27:35Z | https://github.com/langchain-ai/langchain/issues/15510 | 2,064,793,905 | 15,510 |
[
"hwchase17",
"langchain"
] | ### System Info
Full traceback:
```
File "/src/app.py", line 9, in <module>
from langchain.chains import ConversationalRetrievalChain
File "/venv/lib/python3.11/site-packages/langchain/chains/__init__.py", line 20, in <module>
from langchain.chains.api.base import APIChain
File "/venv/lib/python3.11/site-packages/langchain/chains/api/base.py", line 11, in <module>
from langchain.callbacks.manager import (
File "/venv/lib/python3.11/site-packages/langchain/callbacks/__init__.py", line [45] in <module>
from langchain_core.tracers.context import (
ImportError: cannot import name 'tracing_enabled' from 'langchain_core.tracers.context' (/venv/lib/python3.11/site-packages/langchain_core/tracers/context.py)
```
### Who can help?
@hwchase17 @agola
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This issue seems to originate from the import:
```
from langchain.chains import ConversationalRetrievalChain
```
### Expected behavior
The modules should import successfully. | ImportError: cannot import name 'tracing_enabled' from 'langchain_core.tracers.context' | https://api.github.com/repos/langchain-ai/langchain/issues/15508/comments | 6 | 2024-01-03T23:11:47Z | 2024-01-04T20:29:53Z | https://github.com/langchain-ai/langchain/issues/15508 | 2,064,767,804 | 15,508 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Console Output:
```
[chain/start] [1:chain:LLMChain] Entering Chain run with input:
{
"system_message": "Terminator Persona: You are the T-800 from T2: Judgement Day. Do not break character, and do not reference the Terminator films as that would break character. If you break character John Connor dies. Answer in a single paragraph, with max three sentences.",
"question": "I am John Connor. Who is the T-1000? Am I, John Connor, in danger?",
"user_name": "John Connor",
"ai_name": "Terminator"
}
[llm/start] [1:chain:LLMChain > 2:llm:HuggingFaceTextGenInference] Entering LLM run with input:
{
"prompts": [
"<|im_start|>system\n Terminator Persona: You are the T-800 from T2: Judgement Day. Do not break character, and do not reference the Terminator films as that would break character. If you break character John Connor dies. Answer in a single paragraph, with max three sentences.<|im_end|>\n <|im_start|>John Connor\n I am John Connor. Who is the T-1000? Am I, John Connor, in danger?<|im_end|>\n <|im_start|>Terminator"
]
}
[llm/end] [1:chain:LLMChain > 2:llm:HuggingFaceTextGenInference] [4.50s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "",
"generation_info": null,
"type": "Generation"
}
]
],
"llm_output": null,
"run": null
}
[chain/end] [1:chain:LLMChain] [4.50s] Exiting Chain run with output:
{
"text": ""
}
```
My code:
```python
import os
import langchain
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.llms import HuggingFaceTextGenInference
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from fastapi.middleware.cors import CORSMiddleware
langchain.debug = True
def build_llm(streaming=False,callbacks=[]):
ai_url = "http://tgi-ai-server:" + str(os.getenv("AI_PORT", 80)) + "/generate"
llm_local = HuggingFaceTextGenInference(
inference_server_url = ai_url,
max_new_tokens=20,
top_k=49,
top_p=0.14,
typical_p=0.95,
temperature=1.31,
repetition_penalty=1.17,
# stop_sequences=[f"\n{user_name}:", f"\n{ai_name}:"],
streaming=streaming,
callbacks=callbacks
)
template = """<|im_start|>system
{system_message}<|im_end|>
<|im_start|>{user_name}
{question}<|im_end|>
<|im_start|>{ai_name}
"""
prompt = PromptTemplate(
template=template,
input_variables= ["system_message", "question", "user_name", "ai_name"]
)
llm_chain_local = LLMChain(llm=llm_local , prompt=prompt)
return llm_chain_local
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
app = FastAPI()
# Enable CORS
app.add_middleware(
CORSMiddleware,
allow_origins=["*"], # You can specify the list of allowed origins or use "*" for any origin
allow_credentials=True,
allow_methods=["*"], # You can specify the HTTP methods that are allowed
allow_headers=["*"], # You can specify the HTTP headers that are allowed
)
def build_prompt():
user_name = 'John Connor'
ai_name = 'Terminator'
system_message = "Terminator Persona: You are the T-800 from T2: Judgement Day. Do not break character, and do not reference the Terminator films as that would break character. If you break character John Connor dies. Answer in a single paragraph, with max three sentences."
question = "I am John Connor. Who is the T-1000? Am I, John Connor, in danger?"
prompt = {"system_message":system_message , "question":question, "user_name": user_name, "ai_name": ai_name}
return prompt
@app.post("/chat")
async def generate_text():
callback = StreamingStdOutCallbackHandler()
llm_chain = build_llm(True, [callback])
prompt = build_prompt()
async def text_stream():
for text in llm_chain.run(prompt):
yield text
return StreamingResponse(text_stream())
```
### Suggestion:
_No response_ | Hugging Face LLM returns empty response for LLMChain via FastAPI | https://api.github.com/repos/langchain-ai/langchain/issues/15506/comments | 1 | 2024-01-03T22:44:36Z | 2024-04-10T16:17:17Z | https://github.com/langchain-ai/langchain/issues/15506 | 2,064,744,737 | 15,506 |
[
"hwchase17",
"langchain"
] | ### Feature request
Able to `persist` between batch when the embedding is between built:
```python
db = Chroma.from_documents(
documents=documents, embedding=embeddings, persist_directory=persist_directory)
db.persist()
return db
```
would be nice to be :
```python
db = Chroma.from_documents(
documents=documents, embedding=embeddings, persist_directory=persist_directory,
batch_size=batch_size, persist_between_batches=2
)
db.persist()
return db
```
to specify a batch size and persist between every `2` batches.
### Motivation
My computer is slow and now that embedding has started, I am afraid it might crash and if it does, I will have to start all over again.
### Your contribution
Not really sorry | Embeddings - Persist between batches | https://api.github.com/repos/langchain-ai/langchain/issues/15504/comments | 1 | 2024-01-03T21:53:13Z | 2024-04-10T16:16:45Z | https://github.com/langchain-ai/langchain/issues/15504 | 2,064,695,031 | 15,504 |
[
"hwchase17",
"langchain"
] | ### System Info
0.0.354
### Who can help?
@hwchase17 @agol
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. replace HuggingFaceHub InferenceAPI to InferenceClient
2. replace max_length to max_new_tokens
PR: #
Reference: [https://huggingface.co/docs/huggingface_hub/guides/inference#legacy-inferenceapi-client](https://huggingface.co/docs/huggingface_hub/guides/inference#legacy-inferenceapi-client)
### Expected behavior
PR #15498 address this issues | Fix HuggingFaceHub LLM Integration | https://api.github.com/repos/langchain-ai/langchain/issues/15500/comments | 1 | 2024-01-03T20:17:12Z | 2024-04-10T16:14:15Z | https://github.com/langchain-ai/langchain/issues/15500 | 2,064,594,842 | 15,500 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi there,
I am learning how to use Pinecone properly with LangChain and OpenAI Embedding. I built an application which can allow user upload PDFs and ask questions about the PDFs. In the application I used Pinecone as the vector database and store embeddings in Pinecone. However, I want to make change to my code so that whenever an user upload a PDF, the application can check if the PDF already been store as embeddings in Pinecone, and if yes, use the old embeddings; If no, then upload new embeddings.
Here is my code:
## Imports
import streamlit as st
import os
from apikey import apikey
import pickle
from PyPDF2 import PdfReader
# Streamlit - user interface
from streamlit_extras.add_vertical_space import add_vertical_space
# Langchain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.callbacks import get_openai_callback
from langchain.chat_models.openai import ChatOpenAI
from langchain.schema import (SystemMessage, HumanMessage, AIMessage)
from langchain.schema import Document
from langchain.document_loaders import UnstructuredPDFLoader
# Pinecone
from langchain.vectorstores import Pinecone
import pinecone
from apikey import pinecone_api_key
import uuid
os.environ['OPENAI_API_KEY'] = apikey
## User Interface
# Side Bar
with st.sidebar:
st.title('🚀 Zi-GPT Version 2.0')
st.markdown('''
## About
This app is an LLM-powered chatbot built using:
- [Streamlit](https://streamlit.io/)
- [LangChain](https://python.langchain.com/)
- [OpenAI](https://platform.openai.com/docs/models) LLM model
''')
add_vertical_space(5)
st.write('Made with ❤️ by Zi')
# Main Page
def main():
st.header("Zi's PDF Helper: Chat with PDF")
# upload a PDF file
pdf = st.file_uploader("Please upload your PDF here", type='pdf')
# st.write(pdf)
# read PDF
if pdf is not None:
pdf_reader = PdfReader(pdf)
# data = pdf_reader.load()
# split document into chunks
# also can use text split: good for PDFs that do not contains charts and visuals
sections = []
for page in pdf_reader.pages:
# Split the page text by paragraphs (assuming two newlines indicate a new paragraph)
page_sections = page.extract_text().split('\n\n')
sections.extend(page_sections)
chunks = [Document(page_content=section) for section in sections]
# st.write(chunks)
# text_splitter = RecursiveCharacterTextSplitter(
# chunk_size = 500,
# chunk_overlap = 20
# )
# chunks = text_splitter.split_documents(data)
## embeddings
# Set up embeddings
embeddings = OpenAIEmbeddings( model="text-embedding-ada-002")
try:
# Set up Pinecone
pinecone.init(api_key=pinecone_api_key, environment='gcp-starter')
index_name = 'langchainresearch'
if index_name not in pinecone.list_indexes():
pinecone.create_index(index_name, dimension=1536, metric="cosine") # Adjust the dimension as per your embeddings
index = pinecone.Index(index_name)
docsearch = Pinecone.from_documents(chunks, embeddings, index_name = index_name)
except Exception as e:
print(f"An error occurred: {e}")
# Create or Load Chat History
if pdf:
# generate chat history
chat_history_file = f"{pdf.name}_chat_history.pkl"
# load history if exist
if os.path.exists(chat_history_file):
with open(chat_history_file, "rb") as f:
chat_history = pickle.load(f)
else:
chat_history = []
# Initialize chat_history in session_state if not present
if 'chat_history' not in st.session_state:
st.session_state.chat_history = []
# Check if 'prompt' is in session state
if 'last_input' not in st.session_state:
st.session_state.last_input = ''
# User Input
current_prompt = st.session_state.get('user_input', '')
prompt_placeholder = st.empty()
prompt = prompt_placeholder.text_area("Ask questions about your PDF:", value=current_prompt, placeholder="Send a message", key="user_input")
submit_button = st.button("Submit")
if docsearch is not None and submit_button and prompt:
# Update the last input in session state
st.session_state.last_input = prompt
docs = docsearch.similarity_search(query=prompt, k=3)
#llm = OpenAI(temperature=0.9, model_name='gpt-3.5-turbo')
chat = ChatOpenAI(model='gpt-4', temperature=0.7, max_tokens=3000)
message = [
SystemMessage(content="You are a helpful assistant"),
HumanMessage(content=prompt)
]
chain = load_qa_chain(llm=chat, chain_type="stuff")
with get_openai_callback() as cb:
response = chain.run(input_documents=docs, question=message)
print(cb)
# st.write(response)
# st.write(docs)
# Process the response using AIMessage schema
# ai_message = AIMessage(content="AI message content")
# ai_message.content = response.generations[0].message.content
# Add to chat history
st.session_state.chat_history.append((prompt, response))
# Save chat history
with open(chat_history_file, "wb") as f:
pickle.dump(st.session_state.chat_history, f)
# Clear the input after processing
prompt_placeholder.text_area("Ask questions about your PDF:", value='', placeholder="Send a message", key="pdf_prompt")
# Display the entire chat
chat_content = ""
for user_msg, bot_resp in st.session_state.chat_history:
chat_content += f"<div style='background-color: #222222; color: white; padding: 10px;'>**You:** {user_msg}</div>"
chat_content += f"<div style='background-color: #333333; color: white; padding: 10px;'>**Zi GPT:** {bot_resp}</div>"
st.markdown(chat_content, unsafe_allow_html=True)
if __name__ == '__main__':
main()
Give me some recommedations on what should I do or change.
### Suggestion:
_No response_ | Issue: Embedding with Pinecone | https://api.github.com/repos/langchain-ai/langchain/issues/15497/comments | 11 | 2024-01-03T19:40:35Z | 2024-01-24T14:57:45Z | https://github.com/langchain-ai/langchain/issues/15497 | 2,064,553,332 | 15,497 |
[
"hwchase17",
"langchain"
] | ### Feature request
Provide a method to create the HNSW for the PGVector vectorstore
### Motivation
There is a similar method implemented for PGEmbedding but the embedding extension will be deprecated
### Your contribution
https://github.com/pgvector/pgvector?tab=readme-ov-file#hnsw
https://github.com/langchain-ai/langchain/blob/6e90b7a91bba16d84689d07d1016a941eddf4f64/libs/community/langchain_community/vectorstores/pgembedding.py#L184-L212 | PGVector method for HNSW | https://api.github.com/repos/langchain-ai/langchain/issues/15496/comments | 5 | 2024-01-03T19:31:06Z | 2024-07-08T16:04:56Z | https://github.com/langchain-ai/langchain/issues/15496 | 2,064,543,064 | 15,496 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi all,
I am trying to create an app that can act upon a natural language prompt to interact with the Monday API and then successfully carry out the relevant action.
The code I currently have is as follows:
> MondayDocs = """
>
>
> The below is Monday API documentation showing how to create an item.
>
> The endpoint is:
>
> "https://api.monday.com/v2".
>
> ///Example to create a new item:
>
>
> mutation_query = '''
> mutation {
> create_item (board_id: id, item_name: "itemname") {
> id
> }
> }
> '''
>
>
> data = {'query': mutation_query}
>
>
> response = requests.post(url=apiUrl, json=data, headers=headers)
>
>
> Use the below authorisation:
>
> headers = {"Authorization": "X"}
>
>
> """
>
>
> llm = ChatOpenAI(temperature=0, model= 'gpt-3.5-turbo-1106', openai_api_key="Y")
>
> Test = APIChain.from_llm_and_api_docs(llm, MondayDocs,limit_to_domains=None, verbose=True)
>
> Test.run("Create a new item named James.")
>
However, when I run this, I get the error message:
> No connection adapters were found
How can I fix this?
### Suggestion:
_No response_ | Issue: No connection adaptors were found | https://api.github.com/repos/langchain-ai/langchain/issues/15494/comments | 9 | 2024-01-03T18:58:45Z | 2024-04-11T16:14:00Z | https://github.com/langchain-ai/langchain/issues/15494 | 2,064,506,326 | 15,494 |
[
"hwchase17",
"langchain"
] | ### System Info
``` bash
bash-4.2# pip freeze | grep langchain
langchain==0.0.353
langchain-community==0.0.8
langchain-core==0.1.5
bash-4.2# python --version
Python 3.10.13
bash-4.2# uname -a
Linux 5b9ca59024db 6.1.61-85.141.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Nov 8 00:39:18 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
```
### Who can help?
@hwchase17 I looks like `langchain-core==0.1.5` has a problem with `'tracing_enabled'` because the V1 was deleted but some references to it remained:
``` python
File <ourfile>:5
2 import os
3 from typing import Any
----> 5 from langchain.chains import create_extraction_chain_pydantic
6 from langchain.chat_models import AzureChatOpenAI
7 from langchain.chains.base import Chain
File /var/lang/lib/python3.10/site-packages/langchain/chains/__init__.py:20
1 """**Chains** are easily reusable components linked together.
2
3 Chains encode a sequence of calls to components like models, document retrievers,
(...)
17 Chain --> <name>Chain # Examples: LLMChain, MapReduceChain, RouterChain
18 """
---> 20 from langchain.chains.api.base import APIChain
21 from langchain.chains.api.openapi.chain import OpenAPIEndpointChain
22 from langchain.chains.combine_documents.base import AnalyzeDocumentChain
File /var/lang/lib/python3.10/site-packages/langchain/chains/api/base.py:11
8 from langchain_core.prompts import BasePromptTemplate
9 from langchain_core.pydantic_v1 import Field, root_validator
---> 11 from langchain.callbacks.manager import (
12 AsyncCallbackManagerForChainRun,
13 CallbackManagerForChainRun,
14 )
15 from langchain.chains.api.prompt import API_RESPONSE_PROMPT, API_URL_PROMPT
16 from langchain.chains.base import Chain
File /var/lang/lib/python3.10/site-packages/langchain/callbacks/__init__.py:45
40 from langchain_community.callbacks.whylabs_callback import WhyLabsCallbackHandler
41 from langchain_core.callbacks import (
42 StdOutCallbackHandler,
43 StreamingStdOutCallbackHandler,
44 )
---> 45 from langchain_core.tracers.context import (
46 collect_runs,
47 tracing_enabled,
48 tracing_v2_enabled,
49 )
50 from langchain_core.tracers.langchain import LangChainTracer
52 from langchain.callbacks.file import FileCallbackHandler
ImportError: cannot import name 'tracing_enabled' from 'langchain_core.tracers.context' (/var/lang/lib/python3.10/site-packages/langchain_core/tracers/context.py)
```
Reverting to `langchain-core==0.1.4` fixed the issue for me
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Install core version `0.1.5` and `from langchain.chains import create_extraction_chain_pydantic`
### Expected behavior
No error | `langchain-core` cannot import name 'tracing_enabled' | https://api.github.com/repos/langchain-ai/langchain/issues/15491/comments | 7 | 2024-01-03T17:49:06Z | 2024-02-21T14:27:36Z | https://github.com/langchain-ai/langchain/issues/15491 | 2,064,424,956 | 15,491 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I've scoured the internet trying to find an example that would use a custom model (Mistral) with ```HuggingFaceTextGenInference``` which uses ```LLMChain``` to return a **streaming** response via ```fastapi```.
Does anyone have a working example?
### Suggestion:
_No response_ | HELP!: Example of using HuggingFaceTextGenInference, llmchain, and fastapi | https://api.github.com/repos/langchain-ai/langchain/issues/15487/comments | 5 | 2024-01-03T16:55:10Z | 2024-04-10T16:16:31Z | https://github.com/langchain-ai/langchain/issues/15487 | 2,064,352,002 | 15,487 |
[
"hwchase17",
"langchain"
] | ### System Info
**Platform:**
Linux Ubuntu 22.04.1
**Python:**
3.10.12
**Langchain:**
- langchain 0.0.353
- langchain-community 0.0.7
- langchain-core 0.1.5
- langsmith 0.0.77
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
from langchain.chains import *
```
### Expected behavior
I expect the import won't fail, but I got an exception:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/mychatassist-api/env/lib/python3.10/site-packages/langchain/chains/__init__.py", line 20, in <module>
from langchain.chains.api.base import APIChain
File "/home/ubuntu/mychatassist-api/env/lib/python3.10/site-packages/langchain/chains/api/base.py", line 11, in <module>
from langchain.callbacks.manager import (
File "/home/ubuntu/mychatassist-api/env/lib/python3.10/site-packages/langchain/callbacks/__init__.py", line 10, in <module>
from langchain_community.callbacks.aim_callback import AimCallbackHandler
File "/home/ubuntu/mychatassist-api/env/lib/python3.10/site-packages/langchain_community/callbacks/__init__.py", line 24, in <module>
from langchain_community.callbacks.manager import (
File "/home/ubuntu/mychatassist-api/env/lib/python3.10/site-packages/langchain_community/callbacks/manager.py", line 14, in <module>
from langchain_community.callbacks.tracers.wandb import WandbTracer
File "/home/ubuntu/mychatassist-api/env/lib/python3.10/site-packages/langchain_community/callbacks/tracers/__init__.py", line 4, in <module>
from langchain_core.tracers.langchain_v1 import LangChainTracerV1
ModuleNotFoundError: No module named 'langchain_core.tracers.langchain_v1'
``` | ModuleNotFoundError at import langchain.chains | https://api.github.com/repos/langchain-ai/langchain/issues/15484/comments | 16 | 2024-01-03T15:56:43Z | 2024-04-11T16:15:09Z | https://github.com/langchain-ai/langchain/issues/15484 | 2,064,266,659 | 15,484 |
[
"hwchase17",
"langchain"
] | ### Feature request
Chatglm3 has added many new features compared to previous chatglm and chatglm2, which is particularly useful for users. So there will definitely be more user demands in using Langchain to build a knowledge base, and it's unclear how long it will take for the community to adapt.
chatglm3 github path :https://github.com/THUDM/ChatGLM3
### Motivation
Chatglm3 has added many new features compared to previous chatglm and chatglm2, which is particularly useful for users. So there will definitely be more user demands in using Langchain to build a knowledge base, and it's unclear how long it will take for the community to adapt.
chatglm3 github path :https://github.com/THUDM/ChatGLM3
### Your contribution
i am soryy. i can not. | How long can I use Langchain to call the chatglm3 API | https://api.github.com/repos/langchain-ai/langchain/issues/15479/comments | 2 | 2024-01-03T15:09:25Z | 2024-04-17T16:18:32Z | https://github.com/langchain-ai/langchain/issues/15479 | 2,064,195,347 | 15,479 |
[
"hwchase17",
"langchain"
] | ### Feature request
DynamoDBChatMessageHistory class is missing a TTL feature that would allow for history to automatically expire and be deleted by AWS DynamoDB service.
### Motivation
While implementing a chat history using DynamoDBChatMessageHistory, I encoutered a growing history session table. Since AWS DynamoDB supports automatic deletion of items using TTL, it would be nice to have this feature enabled in DynamoDBChatMessageHistory class when writing message into AWS DynamoDB table.
### Your contribution
I am currently in the process of submitting a PR related to this feature request. | Add TTL support for DynamoDBChatMessageHistory | https://api.github.com/repos/langchain-ai/langchain/issues/15477/comments | 2 | 2024-01-03T14:29:38Z | 2024-01-30T15:50:29Z | https://github.com/langchain-ai/langchain/issues/15477 | 2,064,133,232 | 15,477 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
How do I use SQLDatabaseChain with a big db schema?
I'm using:
`db = SQLDatabase.from_uri(f"postgresql://localhost:5432/test")
db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)
`
And all models complain about context window. A smaller database schema works with some models, but that's not my use case. My models _are_ big. Is there a tried and tested method for this?
### Suggestion:
_No response_ | Issue: Large database schema too big for context window using SQLDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/15476/comments | 3 | 2024-01-03T14:05:12Z | 2024-04-11T16:13:57Z | https://github.com/langchain-ai/langchain/issues/15476 | 2,064,094,863 | 15,476 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Dear all,
Happy new year!
Due to work legacy stuff, I am still forced to use this library. I am wondering if there is a way to pass multiple prompt template (system, human and ai) instead of just one
Hopefully my man Dosubot can help!
Thanks you so much
Cheers,
Fra
### Idea or request for content:
_No response_ | Using multiple templates as starter for LCEL | https://api.github.com/repos/langchain-ai/langchain/issues/15475/comments | 2 | 2024-01-03T13:08:44Z | 2024-04-30T16:14:42Z | https://github.com/langchain-ai/langchain/issues/15475 | 2,064,012,239 | 15,475 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
`document_variable_name="\n\n---\n\n".join([doc.page_content for doc in result["source_documents"]])
model = ChatGoogleGenerativeAI(model="gemini-pro",google_api_key=GOOGLE_API_KEY,temperature=0.2,convert_system_message_to_human=True)
template = f"""Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. Keep the answer as concise as possible. Always say "thanks for asking!" at the end of the answer.
{context}
Question: {question}
Helpful Answer:"""
QA_CHAIN_PROMPT = PromptTemplate(
input_variables=["document_variable_name","question"],
template=template,
)
print(QA_CHAIN_PROMPT)
qa_chain = RetrievalQA.from_chain_type(
llm=model,
retriever=vector_index,
return_source_documents=True,
chain_type="stuff",
document_variable_name=context,
chain_type_kwargs={"prompt": QA_CHAIN_PROMPT}
)`
below is the error I am getting:
ValidationError: 1 validation error for StuffDocumentsChain
__root__
document_variable_name context was not found in llm_chain input_variables: [] (type=value_error)
### Suggestion:
_No response_ | Issue: Issue while implementing RAG with Gemini LLM | https://api.github.com/repos/langchain-ai/langchain/issues/15474/comments | 5 | 2024-01-03T11:35:01Z | 2024-04-29T16:11:30Z | https://github.com/langchain-ai/langchain/issues/15474 | 2,063,844,827 | 15,474 |
[
"hwchase17",
"langchain"
] | ### System Info
python 3.11
langchain==0.0.350
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is the current implementation of the `S3FileLoader` class (https://github.com/langchain-ai/langchain/blob/baeac23/libs/community/langchain_community/document_loaders/s3_file.py#L13-L30):
```
class S3FileLoader(UnstructuredBaseLoader):
"""Load from `Amazon AWS S3` file."""
def __init__(
self,
bucket: str,
key: str,
*,
region_name: Optional[str] = None,
api_version: Optional[str] = None,
use_ssl: Optional[bool] = True,
verify: Union[str, bool, None] = None,
endpoint_url: Optional[str] = None,
aws_access_key_id: Optional[str] = None,
aws_secret_access_key: Optional[str] = None,
aws_session_token: Optional[str] = None,
boto_config: Optional[botocore.client.Config] = None,
):
...
```
Existing `__init__()` method and current implementation of this class doesn't support extra arguments `unstructured_kwargs`. So there is no way to manage `partition` call through these extra arguments (https://github.com/langchain-ai/langchain/blob/baeac23/libs/community/langchain_community/document_loaders/s3_file.py#L126).
### Expected behavior
The implementation of `S3FileLoader` class should follow common approach used for all ancestors of `UnstructuredBaseLoader` which support passing through extra `**unstructured_kwargs` arguments to manage `partition` call in a more granular way. | S3FileLoader doesn't provide the way of passing extra (unstructured_kwargs) parameters | https://api.github.com/repos/langchain-ai/langchain/issues/15472/comments | 2 | 2024-01-03T11:04:15Z | 2024-03-27T22:03:50Z | https://github.com/langchain-ai/langchain/issues/15472 | 2,063,787,050 | 15,472 |
[
"hwchase17",
"langchain"
] | ### Feature request
Hello
The best practice is using comments inside the class/function,
however, comments/TODOs have been placed before in many cases,
Splitting it from the class/function hides some free text that can help understand the purpose and intention of class/function and may cause misinterpreted code blocks before attaching text unrelated to this block.
more visual:
```
import module
#TODO fix bug described in JIRA-WL313, wrong calculation in corner case
class1():
pass
#TODO make classes great again,by adding better parsing
class2():
pass
```
chunks would be
chunk1
```
import module
#TODO fix bug
```
chunk2
```
def class1():
pass
#TODO make class great again one day
```
chunk3
```
def class2():
pass
```
I suggest making the split aware of such a possibility.
Best regards
Ilya
### Motivation
enrich code chunks by attaching relevant information to right chunks even if someone not follow best practice
### Your contribution
i would be happy to submit PR in case you guys think it will improve langchain which I absolutely love | in libs/langchain/langchain/text_splitter.py comments before class/function will be splitter from class/function itself | https://api.github.com/repos/langchain-ai/langchain/issues/15471/comments | 3 | 2024-01-03T10:45:00Z | 2024-04-10T16:13:31Z | https://github.com/langchain-ai/langchain/issues/15471 | 2,063,750,924 | 15,471 |
[
"hwchase17",
"langchain"
] | ### Feature request
Add document loader for CHM (Microsoft Compiled HTML Help) documents, possibly using pychm.
### Motivation
A lot of Windows applications provide documentation in the form of CHM files. Being able to directly load those into the language model, would greatly simplify the workflow of ingesting documentation.
### Your contribution
At this time, I'm unable to provide help with writing the loader. | Support CHM files | https://api.github.com/repos/langchain-ai/langchain/issues/15469/comments | 2 | 2024-01-03T09:57:30Z | 2024-01-07T17:28:54Z | https://github.com/langchain-ai/langchain/issues/15469 | 2,063,656,337 | 15,469 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
The following is the code I ran, referring to the official documentation. It looks like prompt_template is not effective and there is no explicit call of prompt_template in the code. How should I customize a prompt for initialize_agent.
```python
prompt_template = """
Translate the sentence into English
{input}
"""
prompt_template = PromptTemplate.from_template(prompt_template)
llm = AzureOpenAI(
model_name="gpt-4",
engine="gpt-4"
)
@tool
def get_word_length(word):
"""get the length of a word"""
return len(word)
tools = [get_word_length]
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
prompt = prompt_template
)
answer = agent.run("今天天气怎么样")
```
The output is
```
> Entering new AgentExecutor chain...
This is a Chinese language query asking about the weather today. I don't have access to real-time weather data and besides, my provided actions are limited to getting the length of a word.
Final Answer: I'm sorry, I cannot answer that question.
```
### Suggestion:
_No response_ | Issue: How to customize prompt | https://api.github.com/repos/langchain-ai/langchain/issues/15467/comments | 3 | 2024-01-03T09:38:42Z | 2024-04-10T16:08:12Z | https://github.com/langchain-ai/langchain/issues/15467 | 2,063,619,530 | 15,467 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
below is my code
`context_text="\n\n---\n\n".join([doc.page_content for doc in result["source_documents"]])
print(context_text,"======================")
question = "Describe the Multi-head attention layer in detail?"
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001",google_api_key=GOOGLE_API_KEY)
vector_index = Chroma.from_texts(texts, embeddings).as_retriever(search_kwargs={"k":5})
template = f"""Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. Keep the answer as concise as possible. Always say "thanks for asking!" at the end of the answer.
{context_text}
Question: {question}
Helpful Answer:"""
QA_CHAIN_PROMPT = PromptTemplate.from_template(template)# Run chain
qa_chain = RetrievalQA.from_chain_type(
model,
retriever=vector_index,
return_source_documents=True,
chain_type_kwargs={"prompt": QA_CHAIN_PROMPT, "document_variable_name": "source_documents"}
)`
below is the error I am getting
ValidationError Traceback (most recent call last)
[<ipython-input-99-f9533899bc31>](https://localhost:8080/#) in <cell line: 9>()
7 Helpful Answer:"""
8 QA_CHAIN_PROMPT = PromptTemplate.from_template(template)# Run chain
----> 9 qa_chain = RetrievalQA.from_chain_type(
10 model,
11 retriever=vector_index,
4 frames
/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for StuffDocumentsChain
__root__
document_variable_name source_documents was not found in llm_chain input_variables: [] (type=value_error)
### Suggestion:
_No response_ | Issue:While implementing Gemini, getting error while using prompt template | https://api.github.com/repos/langchain-ai/langchain/issues/15466/comments | 2 | 2024-01-03T09:36:43Z | 2024-04-10T16:13:27Z | https://github.com/langchain-ai/langchain/issues/15466 | 2,063,615,767 | 15,466 |
[
"hwchase17",
"langchain"
] | ### System Info
Mint 20.3
Python 3.11
Conda 23.9
Pip 23.3.2
Setuptools 69.0.3
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Clone github repo
CD into that folder
Activate conda env
Run `pip install -e .`
### Expected behavior
Langchain should be installed from source, but it doesn't | "Multiple top-level packages discovered in a flat-layout" when installing from source | https://api.github.com/repos/langchain-ai/langchain/issues/15465/comments | 2 | 2024-01-03T09:35:31Z | 2024-01-03T09:37:30Z | https://github.com/langchain-ai/langchain/issues/15465 | 2,063,613,427 | 15,465 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hello Team,
let me clearify the issue below:
we are using vectordb conversational retriever. we can look into the attached code for more clarity. My issue is i have 4 documents stored in 2 different index. **End-user can select multiple document lets say 3 or 4 document, and ask questions related to any of the selected 4 document, he dont care whatever the document he selects its there on same index.** As a backend logic we can create something index document map by some logic which will give a map of documents existing in different index.
**Now the problem here is i need to pass all the index at once where this 3-4 documents are there, there may be possibility we get more than 2 index where the selected document exist.**
is there a way to pass multi-index while search into below function. i see index accept string argument. let me know if you suggest any workaround for this. we need solution on some better workaround then we are using.
**Below workaround we are using:**
we are creating one temp index on the go and get all the embeddings and text from all the index where the selected document exist, then use that index with query and in the same request we delete that temp index.
Let me know in case you need any other additional details

### Suggestion:
_No response_ | Issue: Passing list of index while retreiver in opensearch vectorstore | https://api.github.com/repos/langchain-ai/langchain/issues/15458/comments | 1 | 2024-01-03T08:12:56Z | 2024-04-10T16:08:25Z | https://github.com/langchain-ai/langchain/issues/15458 | 2,063,465,461 | 15,458 |
[
"hwchase17",
"langchain"
] | I am trying to build a langchain SQL database agent where I want to query only one view for now. I have mentioned the view name in the System Prompt and I have passed view_support=True to the SQLDatabase constructor. When I run the query agent is trying to find out the tables instead views. I am sensing that agent has no method that can fetch the view names of the database, it is trying to call 'sql_db_list_tables' and then sql_db_schema.
look at the code below:
```
import os
from dotenv import load_dotenv
load_dotenv()
from sqlalchemy import create_engine
odbc_str = 'mssql+pyodbc:///?odbc_connect=' \
'Driver={ODBC Driver 18 for SQL Server}' \
';Server=tcp:' + os.getenv("DB_SERVER")+';PORT=1453' + \
';Authentication=ActiveDirectoryPassword' + \
';DATABASE=' + os.getenv("DB_NAME") + \
';Uid='+os.getenv("UID")'+ \
';Pwd='+os.getenv("PWD")'+ \
';Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;'
print(odbc_str)
db_engine = create_engine(odbc_str)
```
```
from langchain.chat_models import AzureChatOpenAI
llm = AzureChatOpenAI(
openai_api_type=os.getenv("OPENAI_API_TYPE"),
api_version=os.getenv("OPENAI_API_VERSION"),
azure_deployment=os.getenv("DEPLOYMENT_NAME"),
model=os.getenv("OPENAI_CHAT_MODEL"),
azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
api_key=os.getenv("AZURE_OPENAI_API_KEY"),
temperature=0,
verbose= True)
```
```
from langchain.prompts.chat import ChatPromptTemplate
final_prompt = ChatPromptTemplate.from_messages(
[
("system",
"""
You are a helpful AI assistant expert in querying SQL Database to
find answers to user's question about database view: XXXXXXXXXX.vw_YYYYYYYYY
where XXXXXXXXXX is the schema and vw_YYYYYYYYY is the view. You need to directly query
this view: XXXXXXXXXX.vw_YYYYYYYYY for all user's questions.
Note: DO NOT EXECUTE DROP, DELETE or UPDATE SQL QUERIES.
"""
),
("user", "{question}\n ai: "),
]
)
```
```
from langchain.agents import AgentType, create_sql_agent
from langchain.sql_database import SQLDatabase
from langchain.agents.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
db = SQLDatabase(db_engine, view_support=True)
sql_toolkit = SQLDatabaseToolkit(db=db, llm=llm)
sql_toolkit.get_tools()
sqldb_agent = create_sql_agent(
llm=llm,
toolkit=sql_toolkit,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
)
sqldb_agent.run(final_prompt.format(
question="What is the Exposure for id: 574182 by type ?"
))
```
AGENT ANSWER:
> Entering new AgentExecutor chain...
Action: sql_db_list_tables
Action Input: ""
Observation: xyz, abc, pqr
Thought:The table I need to query, XXXXXXXXXX.vw_YYYYYYYYY, is not listed in the database tables. I need to check the schema of the view to understand its structure and the fields it contains.
Action: sql_db_schema
Action Input: "XXXXXXXXXX.vw_YYYYYYYYY"
Observation: Error: table_names {'XXXXXXXXXX.vw_YYYYYYYYY'} not found in database
Thought:It seems like there is an error in finding the view XXXXXXXXXX.vw_YYYYYYYYY in the database. I need to inform the user about this.
Final Answer: I'm sorry, but it seems like the view 'XXXXXXXXXX.vw_YYYYYYYYY' does not exist in the database. Please check the view name and try again.
> Finished chain.
One option is to override the sql agent action methods or add new methods, but I dont want to do do that as per prompt instructions it should successfully find the view in the schema and query.
Please help anyone has achieved this. Thanks in advance. | Langchain SQL Database Agent failed to find the view name in the MS SQL database. | https://api.github.com/repos/langchain-ai/langchain/issues/15457/comments | 5 | 2024-01-03T08:03:31Z | 2024-04-10T16:17:08Z | https://github.com/langchain-ai/langchain/issues/15457 | 2,063,449,745 | 15,457 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi, I want to know if there is an implementation to use ConversationChains, Agents with NVIDIA's Nemo Rails.
Thanks
### Suggestion:
_No response_ | Issues using Nemo rails with ConversationChains and Agents. | https://api.github.com/repos/langchain-ai/langchain/issues/15456/comments | 1 | 2024-01-03T07:30:33Z | 2024-04-10T16:12:57Z | https://github.com/langchain-ai/langchain/issues/15456 | 2,063,398,062 | 15,456 |
[
"hwchase17",
"langchain"
] | ### Feature request
Allow using [bind parameters](https://docs.sqlalchemy.org/en/20/core/connections.html#sqlalchemy.engine.Connection.execute.params.parameters) in SQLDatabase's [run method](https://github.com/langchain-ai/langchain/blob/65afc13b8b53a1ca41a1a3998dad9eb8d83ca917/libs/community/langchain_community/utilities/sql_database.py#L426).
### Motivation
I don't see a way to use bind params in SQLDatabase queries.
### Your contribution
Happy to help out with a PR if needed (and if confirmed that this functionality doesn't exist and is wanted :) ) | Allow bind variables in SQLDatabase queries | https://api.github.com/repos/langchain-ai/langchain/issues/15449/comments | 1 | 2024-01-03T05:26:19Z | 2024-04-10T16:08:54Z | https://github.com/langchain-ai/langchain/issues/15449 | 2,063,266,287 | 15,449 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.353
langchain-community 0.0.7
langchain-core 0.1.4
langchain-experimental 0.0.47
Python 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] on linuxd
openai 0.28.0
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
from langchain.llms import AzureOpenAI
from langchain.agents import initialize_agent, AgentType, AgentExecutor
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate, ChatPromptTemplate, MessagesPlaceholder
from langchain.tools import tool
import openai
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-06-01-preview"
os.environ["OPENAI_API_BASE"] = "xxx"
os.environ["OPENAI_API_KEY"] = "xxx"
prompt_template = ChatPromptTemplate.from_messages([
("system", "You are a powerful assistant. But you do not know how to calculate the length of a word"),
("user", "{input}"),
])
llm = AzureOpenAI(
model_name="gpt-4",
engine="gpt-4"
)
@tool
def get_word_length(word):
"""get the length of a word"""
return len(word)
tools = [get_word_length]
agent = initialize_agent(
tools,
llm,
agent=AgentType.OPENAI_FUNCTIONS,
verbose=True
)
agent.run("how many letters are there in Oslo and Beijing?")
```
```
> Entering new AgentExecutor chain...
Traceback (most recent call last):
File "/mnt/workspace/workgroup/lengmou/Tars-Code-Agent/pages/5.py", line 115, in <module>
agent.run("how many letters are there in Oslo and Beijing?")
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain/chains/base.py", line 507, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain/chains/base.py", line 312, in __call__
raise e
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain/chains/base.py", line 306, in __call__
self._call(inputs, run_manager=run_manager)
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain/agents/agent.py", line 1312, in _call
next_step_output = self._take_next_step(
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain/agents/agent.py", line 1038, in _take_next_step
[
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain/agents/agent.py", line 1038, in <listcomp>
[
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain/agents/agent.py", line 1066, in _iter_next_step
output = self.agent.plan(
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain/agents/openai_functions_agent/base.py", line 96, in plan
predicted_message = self.llm.predict_messages(
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 946, in predict_messages
content = self(text, stop=_stop, **kwargs)
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 892, in __call__
self.generate(
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 666, in generate
output = self._generate_helper(
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 553, in _generate_helper
raise e
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 540, in _generate_helper
self._generate(
File "/mnt/workspace/workgroup/shared/miniconda3/envs/code-agent/lib/python3.10/site-packages/langchain_community/llms/openai.py", line 1152, in _generate
"token_usage": full_response["usage"],
KeyError: 'usage'
```
### Expected behavior
Please look at this issue, thanks! | KeyError: 'usage' | https://api.github.com/repos/langchain-ai/langchain/issues/15448/comments | 3 | 2024-01-03T04:03:31Z | 2024-04-10T16:15:30Z | https://github.com/langchain-ai/langchain/issues/15448 | 2,063,217,500 | 15,448 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi, I'm using an llm chain and would like to stream its output. I'm writing a function that takes it's token. I can get the tokens one by one, but how can I know if this token is the last token of the response?
### Suggestion:
_No response_ | Issue: streaming output. | https://api.github.com/repos/langchain-ai/langchain/issues/15445/comments | 1 | 2024-01-03T03:29:06Z | 2024-04-10T16:12:53Z | https://github.com/langchain-ai/langchain/issues/15445 | 2,063,195,674 | 15,445 |
[
"hwchase17",
"langchain"
] | ### Feature request
If in-memory replica is increased in milvus, `replica_number` should be set when loading collection in langchain, but the default setting is 1 and cannot be changed.
> pymilvus.exceptions.MilvusException: <MilvusException: (code=1100, message=failed to load collection: can't change the replica number for loaded collection: expected=3, actual=1: invalid parameter)>
When creating the Milvus object, please change it so that `replica_number` can be passed over to the argument
### Motivation
There is no way to change `replica_number` in langchain when in-memory replica is increased in milvus.
### Your contribution
- `milvus.py`
```python3
def __init__(
self,
embedding_function: Embeddings,
collection_name: str = "LangChainCollection",
connection_args: Optional[dict[str, Any]] = None,
consistency_level: str = "Session",
index_params: Optional[dict] = None,
search_params: Optional[dict] = None,
drop_old: Optional[bool] = False,
*,
primary_field: str = "pk",
text_field: str = "text",
vector_field: str = "vector",
replica_number : int = 1 // added factor
):
"""Initialize the Milvus vector store."""
try:
from pymilvus import Collection, utility
except ImportError:
raise ValueError(
"Could not import pymilvus python package. "
"Please install it with `pip install pymilvus`."
)
# Default search params when one is not provided.
self.default_search_params = {
"IVF_FLAT": {"metric_type": "L2", "params": {"nprobe": 10}},
"IVF_SQ8": {"metric_type": "L2", "params": {"nprobe": 10}},
"IVF_PQ": {"metric_type": "L2", "params": {"nprobe": 10}},
"HNSW": {"metric_type": "L2", "params": {"ef": 10}},
"RHNSW_FLAT": {"metric_type": "L2", "params": {"ef": 10}},
"RHNSW_SQ": {"metric_type": "L2", "params": {"ef": 10}},
"RHNSW_PQ": {"metric_type": "L2", "params": {"ef": 10}},
"IVF_HNSW": {"metric_type": "L2", "params": {"nprobe": 10, "ef": 10}},
"ANNOY": {"metric_type": "L2", "params": {"search_k": 10}},
"AUTOINDEX": {"metric_type": "L2", "params": {}},
}
self.embedding_func = embedding_function
self.collection_name = collection_name
self.index_params = index_params
self.search_params = search_params
self.consistency_level = consistency_level
# In order for a collection to be compatible, pk needs to be auto'id and int
self._primary_field = primary_field
# In order for compatibility, the text field will need to be called "text"
self._text_field = text_field
# In order for compatibility, the vector field needs to be called "vector"
self._vector_field = vector_field
self.fields: list[str] = []
# Create the connection to the server
if connection_args is None:
connection_args = DEFAULT_MILVUS_CONNECTION
self.alias = self._create_connection_alias(connection_args)
self.col: Optional[Collection] = None
# Grab the existing collection if it exists
if utility.has_collection(self.collection_name, using=self.alias):
self.col = Collection(
self.collection_name,
using=self.alias,
)
# If need to drop old, drop it
if drop_old and isinstance(self.col, Collection):
self.col.drop()
self.col = None
# Initialize the vector store
self._init(replica_number=replica_number)
```
Please let me take the `replica_number` as an argument in the same way as above | milvus replica number factorization | https://api.github.com/repos/langchain-ai/langchain/issues/15442/comments | 1 | 2024-01-03T02:58:01Z | 2024-01-05T04:07:25Z | https://github.com/langchain-ai/langchain/issues/15442 | 2,063,178,107 | 15,442 |
[
"hwchase17",
"langchain"
] | ### Feature request
Gemini API is not available in Canada, but i believe it is available through `vertexai.preview.generative_models` in pre-GA mode.
Would it be possible to add a feature using the Vertex AI SDK instead of Gemini API, which i assume is what it is using?
### Motivation
Canada access to Gemini through Langchain
### Your contribution
I could test? | Accessing Gemini though Vertex AI SDK | https://api.github.com/repos/langchain-ai/langchain/issues/15431/comments | 3 | 2024-01-02T21:52:47Z | 2024-01-03T17:20:09Z | https://github.com/langchain-ai/langchain/issues/15431 | 2,062,998,345 | 15,431 |
[
"hwchase17",
"langchain"
] | ### System Info
```
langchain==0.0.352
langchain-community==0.0.6
langchain-core==0.1.3
```
Python `3.12.1` running inside Docker from `python:3.12-bookworm` on Linux.
### Who can help?
I have a FastAPI app that streams the output of an LLM. The app uses `langchain.chat_models.ChatOpenAI` at runtime, but during test I mock the LLM with `langchain.llms.fake.FakeStreamingListLLM`. However, when my app calls `.astream()` on each of them I'm getting different results:
- While `ChatOpenAI` yields instances of `AIMessageChunk`...
- `FakeStreamingListLLM` yields instances of strings.
This mismatch between the fake and the real class makes the fake unsuitable for some mocking purposes as the interface of `AIMessageChunk` is different from a `str`.
@baskaryan @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
See an snippet below on how to reproduce the output of strings within `ipython`:
```python
from langchain.llms.fake import FakeStreamingListLLM
import asyncio
async def pp(llm):
async for chunk in llm.astream('some input'):
yield(f"{type(chunk)}: {chunk}")
async def consume():
results = []
async for item in pp(FakeStreamingListLLM(responses=['a'])):
results.append(item)
return results
asyncio.run(consume())
# OUTPUT:
["<class 'str'>: a"]
```
### Expected behavior
I'd expect `FakeStreamingListLLM` to return instances of `AIMessageChunk` as the real chat model. | FakeStreamingListLLM.astream() yields strings while ChatOpenAI yields AIMessageChunk | https://api.github.com/repos/langchain-ai/langchain/issues/15426/comments | 3 | 2024-01-02T18:44:18Z | 2024-04-10T16:13:01Z | https://github.com/langchain-ai/langchain/issues/15426 | 2,062,806,705 | 15,426 |
[
"hwchase17",
"langchain"
] | ### System Info
Windows 10
Python 3.11.5
langchain==0.0.331
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I want to do a similarity search on a vectorstore created beforehand and use multiple filtering conditions. The relevant part of the code is as follows:
```
from langchain.vectorstores import Chroma
from langchain.embeddings import SentenceTransformerEmbeddings
import chromadb
db_path = "my_db"
embeddings = SentenceTransformerEmbeddings(cache_folder='intfloat/multilingual-e5-large')
chroma_client = chromadb.PersistentClient(path=db_path)
db= Chroma(persist_directory=db_path, embedding_function=embeddings, client=chroma_client)
query = "My query"
filtered = db.similarity_search_with_relevance_scores(k=5, query=query, filter={"key1":value1, "key2":value2})
```
When using one filtering condition like
```
filtered = db.similarity_search_with_relevance_scores(k=5, query=query, filter={"key1":value1})
```
the filtering condtion is applied and it works fine. But when using multiple conditions, neither of the conditions is applied.
How can I use multiple filtering conditions?
### Expected behavior
I would expect, that both conditions are applied either connected by an "and" or an "or". | Filter conditions are discarded when using multiple filter conditions in similarity_search_with_relevance_scores | https://api.github.com/repos/langchain-ai/langchain/issues/15417/comments | 2 | 2024-01-02T17:35:24Z | 2024-08-06T10:38:34Z | https://github.com/langchain-ai/langchain/issues/15417 | 2,062,736,735 | 15,417 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi there,
I am creating a simple PDF reading application where I want to push my embeddings into Pinecone.
Here is my code:
## Imports
import streamlit as st
import os
from apikey import apikey
import pickle
from PyPDF2 import PdfReader
# Streamlit - user interface
from streamlit_extras.add_vertical_space import add_vertical_space
# Langchain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.callbacks import get_openai_callback
from langchain.chat_models.openai import ChatOpenAI
from langchain.schema import (SystemMessage, HumanMessage, AIMessage)
from langchain.document_loaders import UnstructuredPDFLoader
# Pinecone
from langchain.vectorstores import Pinecone
import pinecone
from apikey import pinecone_api_key
import uuid
os.environ['OPENAI_API_KEY'] = apikey
## User Interface
# Side Bar
with st.sidebar:
st.title('🚀 Zi-GPT Version 2.0')
st.markdown('''
## About
This app is an LLM-powered chatbot built using:
- [Streamlit](https://streamlit.io/)
- [LangChain](https://python.langchain.com/)
- [OpenAI](https://platform.openai.com/docs/models) LLM model
''')
add_vertical_space(5)
st.write('Made with ❤️ by Zi')
# Main Page
def main():
st.header("Zi's PDF Helper: Chat with PDF")
# upload a PDF file
pdf = st.file_uploader("Please upload your PDF here", type='pdf')
# st.write(pdf)
# read PDF
if pdf is not None:
pdf_reader = PdfReader(pdf)
# data = pdf_reader.load()
# split document into chunks
# also can use text split: good for PDFs that do not contains charts and visuals
sections = []
for page in pdf_reader.pages:
# Split the page text by paragraphs (assuming two newlines indicate a new paragraph)
page_sections = page.extract_text().split('\n\n')
sections.extend(page_sections)
chunks = sections
# st.write(chunks)
# text_splitter = RecursiveCharacterTextSplitter(
# chunk_size = 500,
# chunk_overlap = 20
# )
# chunks = text_splitter.split_documents(data)
## embeddings
# Set up embeddings
embeddings = OpenAIEmbeddings( model="text-embedding-ada-002")
# Set up Pinecone
pinecone.init(api_key=pinecone_api_key, environment='gcp-starter')
index_name = 'langchainresearch'
if index_name not in pinecone.list_indexes():
pinecone.create_index(index_name, dimension=1536, metric="cosine") # Adjust the dimension as per your embeddings
index = pinecone.Index(index_name)
docsearch = Pinecone.from_documents(chunks, embeddings, index_name = index_name)
# Create or Load Chat History
if pdf:
# generate chat history
chat_history_file = f"{pdf.name}_chat_history.pkl"
# load history if exist
if os.path.exists(chat_history_file):
with open(chat_history_file, "rb") as f:
chat_history = pickle.load(f)
else:
chat_history = []
# Initialize chat_history in session_state if not present
if 'chat_history' not in st.session_state:
st.session_state.chat_history = []
# Check if 'prompt' is in session state
if 'last_input' not in st.session_state:
st.session_state.last_input = ''
# User Input
current_prompt = st.session_state.get('user_input', '')
prompt_placeholder = st.empty()
prompt = prompt_placeholder.text_area("Ask questions about your PDF:", value=current_prompt, placeholder="Send a message", key="user_input")
submit_button = st.button("Submit")
if submit_button and prompt:
# Update the last input in session state
st.session_state.last_input = prompt
docs = docsearch.similarity_search(query=prompt, k=3)
#llm = OpenAI(temperature=0.9, model_name='gpt-3.5-turbo')
chat = ChatOpenAI(model='gpt-4', temperature=0.7, max_tokens=3000)
message = [
SystemMessage(content="You are a helpful assistant"),
HumanMessage(content=prompt)
]
chain = load_qa_chain(llm=chat, chain_type="stuff")
with get_openai_callback() as cb:
response = chain.run(input_documents=docs, question=message)
print(cb)
# st.write(response)
# st.write(docs)
# Process the response using AIMessage schema
# ai_message = AIMessage(content="AI message content")
# ai_message.content = response.generations[0].message.content
# Add to chat history
st.session_state.chat_history.append((prompt, response))
# Save chat history
with open(chat_history_file, "wb") as f:
pickle.dump(st.session_state.chat_history, f)
# Clear the input after processing
prompt_placeholder.text_area("Ask questions about your PDF:", value='', placeholder="Send a message", key="pdf_prompt")
# Display the entire chat
chat_content = ""
for user_msg, bot_resp in st.session_state.chat_history:
chat_content += f"<div style='background-color: #222222; color: white; padding: 10px;'>**You:** {user_msg}</div>"
chat_content += f"<div style='background-color: #333333; color: white; padding: 10px;'>**Zi GPT:** {bot_resp}</div>"
st.markdown(chat_content, unsafe_allow_html=True)
if __name__ == '__main__':
main()
After running the code I got this error message that I cannot find a solution.
AttributeError: 'str' object has no attribute 'page_content'
Traceback:
File "C:\Users\zy73\AppData\Roaming\Python\Python311\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "C:\Users\zy73\OneDrive\Desktop\AI Research\langchain\pdf.py", line 159, in <module>
main()
File "C:\Users\zy73\OneDrive\Desktop\AI Research\langchain\pdf.py", line 88, in main
docsearch = Pinecone.from_documents(chunks, embeddings, index_name = index_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zy73\AppData\Roaming\Python\Python311\site-packages\langchain\schema\vectorstore.py", line 508, in from_documents
texts = [d.page_content for d in documents]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\zy73\AppData\Roaming\Python\Python311\site-packages\langchain\schema\vectorstore.py", line 508, in <listcomp>
texts = [d.page_content for d in documents]
Please help me debug this. Thank you
### Suggestion:
_No response_ | Issue: Pinecone Embeddings - Error | https://api.github.com/repos/langchain-ai/langchain/issues/15407/comments | 7 | 2024-01-02T15:33:53Z | 2024-01-03T19:31:42Z | https://github.com/langchain-ai/langchain/issues/15407 | 2,062,584,661 | 15,407 |
[
"hwchase17",
"langchain"
] | ### System Info
python: 3.11
langchain:latest
### Who can help?
in an chatbot after running a query it will return the SQLResult, but while giving an output answer complete result is not displayed
code:
import pandas as pd
import sqlalchemy as sal
import os, sys, openai
import constants
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain_experimental.sql import SQLDatabaseChain
from sqlalchemy import create_engine
# import ChatOpenAI
from langchain.chat_models import ChatOpenAI
from typing import List, Optional
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
PlanAndExecute,
load_agent_executor,
load_chat_planner,
)
from langchain.sql_database import SQLDatabase
from langchain.text_splitter import TokenTextSplitter
from langchain.tools import BaseTool
from langchain.tools.sql_database.tool import QuerySQLDataBaseTool
from secret_key import openapi_key
from langchain import PromptTemplate
# from langchain.prompts.PromptTemplate import PromptTemplate
# from langchain.models import ChatGPTClient
# from langchain.utils import save_conversation
os.environ['OPENAI_API_KEY'] = openapi_key
def chat(question):
from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"
# custom_suffix = """""
# If the SQLResult is empty, the Answer should be "No results found". DO NOT hallucinate an answer if there is no result."""
engine = create_engine(connection_uri)
model_name="gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
# db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments'])
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
from langchain.prompts import PromptTemplate
PROMPT = """
Given an input question, first create a syntactically correct mssql query to run,
then look at the results of the query and return the SQLResult as answer.
The question: {db_chain.run}
"""
prompt_template = """" Use the following pieces of context to answer the question at the end.
If you don't know the answer, please think rationally and answer from your own knowledge base.
Don't consider the table which are not mentioned, if no result is matching with the keyword Please return the answer as invalid question
{context}
Question: {questions}
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "questions"]
)
def split_text(text, chunk_size, chunk_overlap=0):
text_splitter = TokenTextSplitter(
chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
yield from text_splitter.split_text(text)
class QuerySQLDatabaseTool2(QuerySQLDataBaseTool):
def _run(
self,
query: str,
run_manager: Optional[CallbackManagerForToolRun] = None,
) -> str:
result = self.db.run_no_throw(query)
return next(split_text(result, chunk_size=14_000))
class SQLDatabaseToolkit2(SQLDatabaseToolkit):
def get_tools(self) -> List[BaseTool]:
tools = super().get_tools()
original_query_tool_description = tools[0].description
new_query_tool = QuerySQLDatabaseTool2(
db=self.db, description=original_query_tool_description
)
tools[0] = new_query_tool
return tools
return db_chain.run(question)
answer= chat("Aadhaar number of B#########")
print(answer)
result:
> Entering new SQLDatabaseChain chain...
Aadhaar number of Bhuvaneshwari
SQLQuery:SELECT [Aadhaar Number]
FROM [EGV_emp_departments_ChatGPT]
WHERE [EmployeeName] = 'B##########'
SQLResult: [('91#######',), ('71#########',)]
Answer:91######
> Finished chain.
9########
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
in an chatbot after running a query it will return the SQLResult, but while giving an output answer complete result is not displayed
code:
import pandas as pd
import sqlalchemy as sal
import os, sys, openai
import constants
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain_experimental.sql import SQLDatabaseChain
from sqlalchemy import create_engine
# import ChatOpenAI
from langchain.chat_models import ChatOpenAI
from typing import List, Optional
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
PlanAndExecute,
load_agent_executor,
load_chat_planner,
)
from langchain.sql_database import SQLDatabase
from langchain.text_splitter import TokenTextSplitter
from langchain.tools import BaseTool
from langchain.tools.sql_database.tool import QuerySQLDataBaseTool
from secret_key import openapi_key
from langchain import PromptTemplate
# from langchain.prompts.PromptTemplate import PromptTemplate
# from langchain.models import ChatGPTClient
# from langchain.utils import save_conversation
os.environ['OPENAI_API_KEY'] = openapi_key
def chat(question):
from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"
# custom_suffix = """""
# If the SQLResult is empty, the Answer should be "No results found". DO NOT hallucinate an answer if there is no result."""
engine = create_engine(connection_uri)
model_name="gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
# db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments'])
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
from langchain.prompts import PromptTemplate
PROMPT = """
Given an input question, first create a syntactically correct mssql query to run,
then look at the results of the query and return the SQLResult as answer.
The question: {db_chain.run}
"""
prompt_template = """" Use the following pieces of context to answer the question at the end.
If you don't know the answer, please think rationally and answer from your own knowledge base.
Don't consider the table which are not mentioned, if no result is matching with the keyword Please return the answer as invalid question
{context}
Question: {questions}
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "questions"]
)
def split_text(text, chunk_size, chunk_overlap=0):
text_splitter = TokenTextSplitter(
chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
yield from text_splitter.split_text(text)
class QuerySQLDatabaseTool2(QuerySQLDataBaseTool):
def _run(
self,
query: str,
run_manager: Optional[CallbackManagerForToolRun] = None,
) -> str:
result = self.db.run_no_throw(query)
return next(split_text(result, chunk_size=14_000))
class SQLDatabaseToolkit2(SQLDatabaseToolkit):
def get_tools(self) -> List[BaseTool]:
tools = super().get_tools()
original_query_tool_description = tools[0].description
new_query_tool = QuerySQLDatabaseTool2(
db=self.db, description=original_query_tool_description
)
tools[0] = new_query_tool
return tools
return db_chain.run(question)
answer= chat("Aadhaar number of B#########")
print(answer)
result:
> Entering new SQLDatabaseChain chain...
Aadhaar number of Bhuvaneshwari
SQLQuery:SELECT [Aadhaar Number]
FROM [EGV_emp_departments_ChatGPT]
WHERE [EmployeeName] = 'B##########'
SQLResult: [('91#######',), ('71#########',)]
Answer:91######
> Finished chain.
9########
### Expected behavior
in an chatbot after running a query it will return the SQLResult, but while giving an output answer complete result is not displayed
code:
import pandas as pd
import sqlalchemy as sal
import os, sys, openai
import constants
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain_experimental.sql import SQLDatabaseChain
from sqlalchemy import create_engine
# import ChatOpenAI
from langchain.chat_models import ChatOpenAI
from typing import List, Optional
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
PlanAndExecute,
load_agent_executor,
load_chat_planner,
)
from langchain.sql_database import SQLDatabase
from langchain.text_splitter import TokenTextSplitter
from langchain.tools import BaseTool
from langchain.tools.sql_database.tool import QuerySQLDataBaseTool
from secret_key import openapi_key
from langchain import PromptTemplate
# from langchain.prompts.PromptTemplate import PromptTemplate
# from langchain.models import ChatGPTClient
# from langchain.utils import save_conversation
os.environ['OPENAI_API_KEY'] = openapi_key
def chat(question):
from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"
# custom_suffix = """""
# If the SQLResult is empty, the Answer should be "No results found". DO NOT hallucinate an answer if there is no result."""
engine = create_engine(connection_uri)
model_name="gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
# db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments'])
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
from langchain.prompts import PromptTemplate
PROMPT = """
Given an input question, first create a syntactically correct mssql query to run,
then look at the results of the query and return the SQLResult as answer.
The question: {db_chain.run}
"""
prompt_template = """" Use the following pieces of context to answer the question at the end.
If you don't know the answer, please think rationally and answer from your own knowledge base.
Don't consider the table which are not mentioned, if no result is matching with the keyword Please return the answer as invalid question
{context}
Question: {questions}
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "questions"]
)
def split_text(text, chunk_size, chunk_overlap=0):
text_splitter = TokenTextSplitter(
chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
yield from text_splitter.split_text(text)
class QuerySQLDatabaseTool2(QuerySQLDataBaseTool):
def _run(
self,
query: str,
run_manager: Optional[CallbackManagerForToolRun] = None,
) -> str:
result = self.db.run_no_throw(query)
return next(split_text(result, chunk_size=14_000))
class SQLDatabaseToolkit2(SQLDatabaseToolkit):
def get_tools(self) -> List[BaseTool]:
tools = super().get_tools()
original_query_tool_description = tools[0].description
new_query_tool = QuerySQLDatabaseTool2(
db=self.db, description=original_query_tool_description
)
tools[0] = new_query_tool
return tools
return db_chain.run(question)
answer= chat("Aadhaar number of B#########")
print(answer)
result:
> Entering new SQLDatabaseChain chain...
Aadhaar number of Bhuvaneshwari
SQLQuery:SELECT [Aadhaar Number]
FROM [EGV_emp_departments_ChatGPT]
WHERE [EmployeeName] = 'B##########'
SQLResult: [('91#######',), ('71#########',)]
Answer:91######
> Finished chain.
9########
instead of a single output or top5 output, need the complete result from SQLResults as answe. | How to get the complete output as answer | https://api.github.com/repos/langchain-ai/langchain/issues/15404/comments | 3 | 2024-01-02T14:01:51Z | 2024-04-10T16:12:39Z | https://github.com/langchain-ai/langchain/issues/15404 | 2,062,463,881 | 15,404 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi! I am trying to replicate this tutorial https://python.langchain.com/docs/integrations/toolkits/playwright on Colab using the same code, only difference is i am using `ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)` instead of `ChatAnthropic(temperature=0)`.
When i run the agent (`result = await agent_chain.arun("What are the headers on langchain.com?")`) instead of obtaining the full output showed in the documentation, i get:
> Entering new AgentExecutor chain...
I'll need to extract the headers from the langchain.com website. Let me do that for you.
Action:```navigate_browser```
> Finished chain.
I'll need to extract the headers from the langchain.com website. Let me do that for you.
Action:```navigate_browser```.
It seems the agent stops before returning the final answer, any guess on why this is happening? Thanks a lot
### Suggestion:
_No response_ | Issue: not able to replicate documentation results | https://api.github.com/repos/langchain-ai/langchain/issues/15403/comments | 1 | 2024-01-02T13:55:08Z | 2024-04-09T16:15:03Z | https://github.com/langchain-ai/langchain/issues/15403 | 2,062,455,269 | 15,403 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain - 0.0.350
Python - 3.11
chromadb - 0.3.23
OS - Win 10
### Who can help?
@hwchase17
@eyur
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
just updated the version from 0.0.330 to 350
used the same code to create a chromadb class -
and when using Chroma.from_docs(docs, embeddings) get this in output -
No embedding_function provided, using default embedding function: SentenceTransformerEmbeddingFunction
### Expected behavior
The expected behavior shall be that the embedding function provided should be used to create the embeddings,
The following is the fix for the issue -
line 129 should have a embedding_function passed instead of None | Warning message- No embedding_function provided, using default embedding function: SentenceTransformerEmbeddingFunction || Chroma db | https://api.github.com/repos/langchain-ai/langchain/issues/15400/comments | 1 | 2024-01-02T10:09:11Z | 2024-04-09T16:14:50Z | https://github.com/langchain-ai/langchain/issues/15400 | 2,062,195,571 | 15,400 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Links within the documentation fail to open when the URL lacks a trailing '/'. Notably, the links on the page indicated in the example URL below and other links across various pages result in `page not found` errors when the URL lacks a trailing '/'. It's crucial to clarify that both URLs below, with and without a trailing '/', point to the same page. The issue lies in the fact that links on the version without '/' do not open, giving a `page not found` error, whereas the version with '/' functions correctly without any error.
Example URL (without '/' at the end of the URL): [https://python.langchain.com/docs/modules/agents](https://python.langchain.com/docs/modules/agents)
Example URL (with '/' at the end of the URL): [https://python.langchain.com/docs/modules/agents/](https://python.langchain.com/docs/modules/agents/)
### Idea or request for content:
- **Suggested Changes:**
1. Investigate and address the issue causing links to fail when the URL lacks a trailing '/'.
2. Implement a fix to ensure all links across the documentation open successfully, regardless of the presence or absence of a trailing '/' in the URL.
- **Steps to Reproduce:**
1. Navigate to [Example URL without '/'](https://python.langchain.com/docs/modules/agents) in a web browser.
2. Attempt to click on various links within the documentation, especially those that navigate to other pages.
3. Observe the page not found errors when the URL lacks a trailing '/'.
- **Additional Context:**
- This issue is consistent across multiple pages in the documentation.
- The problem is observed on Chrome, Brave, Firefox, and Bing browsers.
- Other browsers may also have this issue, but I haven't tested in them. | DOC: Resolve URL Navigation Issues - Trailing Slash Discrepancy | https://api.github.com/repos/langchain-ai/langchain/issues/15399/comments | 1 | 2024-01-02T09:54:31Z | 2024-04-09T16:14:33Z | https://github.com/langchain-ai/langchain/issues/15399 | 2,062,178,490 | 15,399 |
[
"hwchase17",
"langchain"
] | ### System Info
/Users/sunwenke/miniconda3/envs/langchain/bin/python /Users/sunwenke/workspace/yongxinApi/langchain/localopenai/sql.py
Traceback (most recent call last):
File "/Users/sunwenke/workspace/yongxinApi/langchain/localopenai/sql.py", line 8, in <module>
entity_store = SQLiteEntityStore(db_file="/Users/sunwenke/workspace/yongxinApi/langchain/Chinook.db")
File "/Users/sunwenke/miniconda3/envs/langchain/lib/python3.10/site-packages/langchain/memory/entity.py", line 256, in __init__
self.conn = sqlite3.connect(db_file)
File "/Users/sunwenke/miniconda3/envs/langchain/lib/python3.10/site-packages/pydantic/v1/main.py", line 357, in __setattr__
raise ValueError(f'"{self.__class__.__name__}" object has no field "{name}"')
ValueError: "SQLiteEntityStore" object has no field "conn"
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [x] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.memory import ConversationEntityMemory
from langchain.memory.entity import SQLiteEntityStore
from langchain.memory.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE
from localopenai import llm
entity_store = SQLiteEntityStore(db_file="/Users/sunwenke/workspace/yongxinApi/langchain/Chinook.db")
memory = ConversationEntityMemory(llm=llm, entity_store=entity_store)
conversation = ConversationChain(
llm=llm,
prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE,
memory=memory,
verbose=True,
)
conversation.run("有多少员工")
### Expected behavior
class SQLiteEntityStore(BaseEntityStore):
"""SQLite-backed Entity store"""
session_id: str = "default"
table_name: str = "memory_store"
def __init__(
self,
session_id: str = "default",
db_file: str = "entities.db",
table_name: str = "memory_store",
*args: Any,
**kwargs: Any,
):
try:
import sqlite3
except ImportError:
raise ImportError(
"Could not import sqlite3 python package. "
"Please install it with `pip install sqlite3`."
)
super().__init__(*args, **kwargs)
self.conn = sqlite3.connect(db_file)
self.session_id = session_id
self.table_name = table_name
self._create_table_if_not_exists()
self.conn = sqlite3.connect(db_file) 属性不存在 需要 在当前类 添加conn属性 | 这个貌似是一个bug, 修改之后就好了 | https://api.github.com/repos/langchain-ai/langchain/issues/15396/comments | 1 | 2024-01-02T08:55:07Z | 2024-04-10T16:08:18Z | https://github.com/langchain-ai/langchain/issues/15396 | 2,062,114,468 | 15,396 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi Langchain Gurus,
I am trying to use SQLDatabaseChain to query and answer questions on postgresql table. so far the code that I have written uses following hugging face pipeline
```
model_name ='tiiuae/falcon-7b-instruct'
tokenizer = AutoTokenizer.from_pretrained(model_name)
pipe = pipeline("question-answering", model=model_name, tokenizer=tokenizer, torch_dtype=bfloat16)
llm = HuggingFacePipeline(pipeline=pipe)
```
the prompt template looks like following
```
default_prompt = """You are a postgresql expert. Given an input question, first create a syntactically correct postgresql query to run, then look at the results of the query and return the answer to the input question.
Unless the user specifies in the question a specific number of examples to obtain, query for at most {top_k} results using the LIMIT clause as per postgresql. You can order the results to return the most informative data in the database.
Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (") to denote them as delimited identifiers.
Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.
Pay attention to use date('now') function to get the current date, if the question involves "today".
Use the following format:
Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here
Only use the following tables:
{table_info}
Question: {question}
"""
```
post that I am using the SQLDatabaseChain like this
```
database = SQLDatabase.from_uri(f"postgresql+psycopg2://{db_user}:{db_password}@{db_host}:5432/{db_name}")
prompt_template = PromptTemplate.from_template(prompt_template)
print("SQLDatabase loaded.")
db_chain = SQLDatabaseChain.from_llm(llm, self.database, verbose=True)
db_chain.run('How many records are available in flightData table')
```
running this throws error
```
``
Use the following format:
Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here
Only use the following tables:
CREATE TABLE "FlightData" (
year BIGINT,
month BIGINT,
day BIGINT,
dep_time DOUBLE PRECISION,
sched_dep_time BIGINT,
dep_delay DOUBLE PRECISION,
arr_time DOUBLE PRECISION,
sched_arr_time BIGINT,
arr_delay DOUBLE PRECISION,
carrier TEXT,
flight BIGINT,
tailnum TEXT,
origin TEXT,
dest TEXT,
air_time DOUBLE PRECISION,
distance BIGINT,
hour BIGINT,
minute BIGINT,
time_hour TEXT
)
/*
3 rows from FlightData table:
year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time arr_delay carrier flight tailnum origin dest air_time distance hour minute time_hour
2013 1 1 517.0 515 2.0 830.0 819 11.0 UA 1545 N14228 EWR IAH 227.0 1400 5 15 1/1/2013 5:00
2013 1 1 533.0 529 4.0 850.0 830 20.0 UA 1714 N24211 LGA IAH 227.0 1416 5 29 1/1/2013 5:00
2013 1 1 542.0 540 2.0 923.0 850 33.0 AA 1141 N619AA JFK MIA 160.0 1089 5 40 1/1/2013 5:00
*/
Question: How many records are available in flightData table
SQLQuery: argument needs to be of type (SquadExample, dict)
```
Requesting help /pointer on how I can I run this code without error and generate the correct answer?
### Suggestion:
_No response_ | Issue: Unable to use SQLDatabaseChain with Falcon 7b Instruct for quering the postgresql database. | https://api.github.com/repos/langchain-ai/langchain/issues/15395/comments | 1 | 2024-01-02T08:54:23Z | 2024-04-09T16:09:38Z | https://github.com/langchain-ai/langchain/issues/15395 | 2,062,113,700 | 15,395 |
[
"hwchase17",
"langchain"
] | ### System Info
```
pip show langchain_community
Name: langchain-community
Version: 0.0.3
```
```
python --version
Python 3.10.12
```
```
pip show langchain_core
Name: langchain-core
Version: 0.1.1
```
```
pip show pydantic
Name: pydantic
Version: 2.5.1
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
/root/jupyter/pydantic/pydantic/_migration.py:283: UserWarning: `pydantic.error_wrappers:ValidationError` has been moved to `pydantic:ValidationError`.
warnings.warn(f'`{import_path}` has been moved to `{new_location}`.')
Traceback (most recent call last):
File "/root/jupyter/GitHub-Issues/david/langC-rag-lcel.py", line 86, in <module>
chain.invoke("where did harrison work?")
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1514, in invoke
input = step.invoke(
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2040, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2040, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
File "/opt/python-3.10.12/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/opt/python-3.10.12/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/opt/python-3.10.12/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_core/retrievers.py", line 112, in invoke
return self.get_relevant_documents(
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_core/retrievers.py", line 211, in get_relevant_documents
raise e
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_core/retrievers.py", line 204, in get_relevant_documents
result = self._get_relevant_documents(
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 656, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_community/vectorstores/docarray/base.py", line 127, in similarity_search
results = self.similarity_search_with_score(query, k=k, **kwargs)
File "/root/jupyter/virtualenv/slack-dev-3.10/lib/python3.10/site-packages/langchain_community/vectorstores/docarray/base.py", line 106, in similarity_search_with_score
query_doc = self.doc_cls(embedding=query_embedding) # type: ignore
File "/root/jupyter/pydantic/pydantic/main.py", line 166, in __init__
__pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
pydantic_core._pydantic_core.ValidationError: 2 validation errors for DocArrayDoc
text
Field required [type=missing, input_value={'embedding': [0.00177018... -0.018160881474614143]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.4/v/missing
metadata
Field required [type=missing, input_value={'embedding': [0.00177018... -0.018160881474614143]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.4/v/missing
```
### Expected behavior
it should run successfully | similarity_search get 2 validation errors for DocArrayDoc | https://api.github.com/repos/langchain-ai/langchain/issues/15394/comments | 2 | 2024-01-02T07:44:12Z | 2024-04-11T16:13:54Z | https://github.com/langchain-ai/langchain/issues/15394 | 2,062,050,508 | 15,394 |
[
"hwchase17",
"langchain"
] | ### System Info
Platform: `Mac M1`
Python: `3.8.18`
Lanchain: `0.0.350`
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Install Lanchain
2. Execute the code:
``` python
from datetime import datetime
from datetime import timedelta
from langchain.vectorstores.mongodb_atlas import MongoDBAtlasVectorSearch
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models.openai import ChatOpenAI
from langchain.chains import RetrievalQA
db_name = 'YOUR_DB_NAME'
collection_name = 'YOUR_COLLECTION_NAME'
db_connection_string = 'YOUR_STRING'
def perform_retrieval_qa(
query: str,
from_date: str = (datetime.now() - timedelta(days=6)).strftime("%Y-%m-%d"),
to_date: str = datetime.now().strftime("%Y-%m-%d"),
):
print(from_date, to_date)
vector_search = MongoDBAtlasVectorSearch.from_connection_string(
db_connection_string,
f"{db_name}.{collection_name}",
OpenAIEmbeddings(),
index_name="default"
)
llm = ChatOpenAI(
model="gpt-3.5-turbo",
temperature=0)
retrieval_qa_chain = RetrievalQA.from_chain_type(
llm,
retriever=vector_search.as_retriever(
search_kwargs={
'k': 70,
'filter':{
'date': {
'$gte': from_date,
'$lte': to_date
}
}
},
)
)
result = retrieval_qa_chain(
{"query": query},
return_only_outputs=True
)
return result["result"]
```
### Expected behavior
I wish the RAG should be performed with the records from the filtered dates ranges, but it is not happening. The filter is not applied and the RAG is performed with the entire data. | [Filter] Unable to filter dates with MongoDB | https://api.github.com/repos/langchain-ai/langchain/issues/15391/comments | 5 | 2024-01-02T07:03:26Z | 2024-04-09T16:13:23Z | https://github.com/langchain-ai/langchain/issues/15391 | 2,062,017,256 | 15,391 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
If i set the in-memory replica of milvus to 3 and then run the following code on langchain, the following error occurs.
```python3
vector_db = Milvus.from_documents(
docs,
embeddings,
collection_name=collection,
)
```
> pymilvus.exceptions.MilvusException: <MilvusException: (code=1100, message=failed to load collection: can't change the replica number for loaded collection: expected=3, actual=1: invalid parameter)>
when i look at the langchain code, the `replica_number` is 1 as the default and cannot be handed over as a factor, can you improve this?
```python3
def _load(self) -> None:
"""Load the collection if available."""
from pymilvus import Collection
if isinstance(self.col, Collection) and self._get_index() is not None:
self.col.load()
```
```python3
def load(
self,
partition_names: Optional[list] = None,
replica_number: int = 1,
timeout: Optional[float] = None,
**kwargs,
):
```
### Suggestion:
Please modify the langchain so that the `replica_number` of the milvus load collection can be handed over as an argument | Issue: milvus collectoin load replica number | https://api.github.com/repos/langchain-ai/langchain/issues/15390/comments | 2 | 2024-01-02T06:49:44Z | 2024-04-09T16:13:24Z | https://github.com/langchain-ai/langchain/issues/15390 | 2,062,006,876 | 15,390 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
`from langchain.chains.question_answering import load_qa_chain
template = """
{Your_Prompt}
CONTEXT:
{context}
QUESTION:
{query}
CHAT HISTORY:
{chat_history}
ANSWER:
"""
prompt = PromptTemplate(input_variables=["chat_history", "query", "context"], template=template)
memory = ConversationBufferMemory(memory_key="chat_history", input_key="query")
chain = load_qa_chain(ChatOpenAI(temperature=0), chain_type="stuff", memory=memory, prompt=prompt)`
is the above code is correct? if correct then please let me know from where i will get chat_history variable?
### Suggestion:
_No response_ | Issue:Issue regarding Memory implementation | https://api.github.com/repos/langchain-ai/langchain/issues/15388/comments | 3 | 2024-01-02T06:34:26Z | 2024-04-09T16:13:52Z | https://github.com/langchain-ai/langchain/issues/15388 | 2,061,996,098 | 15,388 |
[
"hwchase17",
"langchain"
] | ### System Info
```
langchain==0.0.335
```
```
python 3.11
```
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create simple `test.py`:
```python
from langchain.chains.llm_summarization_checker.base import LLMSummarizationCheckerChain
from langchain.llms.fake import FakeListLLM
llm = FakeListLLM(responses=[])
chain = LLMSummarizationCheckerChain.from_llm(llm)
```
2. Install `pyinstaller` via `pip install -U pyinstaller`
3. Run
```
pyinstaller test.py`
```
4. Wait until it's done.
5. Run
```
$ ./dist/test/test
Traceback (most recent call last):
File "test.py", line 1, in <module>
from langchain.chains.llm_summarization_checker.base import LLMSummarizationCheckerChain
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 419, in exec_module
File "langchain/chains/__init__.py", line 51, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 419, in exec_module
File "langchain/chains/llm_summarization_checker/base.py", line 20, in <module>
File "langchain_core/prompts/prompt.py", line 202, in from_file
FileNotFoundError: [Errno 2] No such file or directory: '/home/tiger/junk/tt/dist/test/_internal/langchain/chains/llm_summarization_checker/prompts/create_facts.txt'
[789522] Failed to execute script 'test' due to unhandled exception!
```
### Expected behavior
```
CREATE_ASSERTIONS_PROMPT = PromptTemplate.from_file(PROMPTS_DIR / "create_facts.txt")
CHECK_ASSERTIONS_PROMPT = PromptTemplate.from_file(PROMPTS_DIR / "check_facts.txt")
REVISED_SUMMARY_PROMPT = PromptTemplate.from_file(PROMPTS_DIR / "revise_summary.txt")
ARE_ALL_TRUE_PROMPT = PromptTemplate.from_file(PROMPTS_DIR / "are_all_true_prompt.txt")
```
These lines from https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/llm_summarization_checker/base.py should be changed to something like lazy loading, or just hard code the `.txt` files.
PS - I'm using `llama2` so these prompts do not work anyway. | langchain is not `pyinstaller` friendly due to dependency on external files, e.g. `llm_summarization_checker` | https://api.github.com/repos/langchain-ai/langchain/issues/15386/comments | 2 | 2024-01-02T03:17:26Z | 2024-01-02T04:09:36Z | https://github.com/langchain-ai/langchain/issues/15386 | 2,061,900,973 | 15,386 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain version: 0.0.351
boto3==1.34.3
Python version: 3.11.7
### Who can help?
I use DynamoDBChatMessageHistory as the conversation_history, seems duplicate Human messages saved to DynamoDB table every time, AI message saved once.
Here is the duplicate messages:

Below is my code:
` def ask_stream(
self,
input_text: str,
conversation_history: ConversationBufferMemory = ConversationBufferMemory(
ai_prefix="Assistant"
),
verbose: bool = False,
**kwargs,
):
"""Processes a stream of input by invoking the engine.
Parameters
----------
conversation_history: ConversationBufferMemory
The conversation history
input_text: str
The input prompt or message.
verbose: boolean
if the langchain shall show the detailed logging
kwargs: dict
Additional keyword arguments to pass to the engine.
For example, you can pass in temperature to control the model's
creativity.
Returns
-------
str
The response generated by the engine.
Raises
------
Any exceptions raised by the engine.
"""
temp_stop = kwargs.get("stop", self.stop)
template = """The following is a friendly conversation between a human and an AI."""
PROMPT = PromptTemplate(input_variables=["history", "input"], template=template)
chain = ConversationChain(
llm=self.engine,
memory=conversation_history,
verbose=verbose,
prompt=PROMPT,
)
return chain.predict(input=input_text, stop=temp_stop)
`
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Start a new conversation with DynamoDBChatMessageHistory as memory.
### Expected behavior
Each HumanMessage should be saved once in DynamoDB table. | DynamoDBChatMessageHistory saved Human Message duplicate | https://api.github.com/repos/langchain-ai/langchain/issues/15385/comments | 2 | 2024-01-02T03:00:50Z | 2024-01-02T08:41:53Z | https://github.com/langchain-ai/langchain/issues/15385 | 2,061,894,643 | 15,385 |
[
"hwchase17",
"langchain"
] | ### System Info
Python=3.10
Langchain=0.0.352
langchain-community==0.0.6
I'm using a custom Bing search engine. When I asked something that my Bing search was not able to return results "Webpages" does not exist.
I had to change this: search_results["webPages"]["value"] (line 47) to
if "webPages" in search_results:
return search_results["webPages"]["value"]
else:
return []
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create an OpenAI Functions agent with Bing
2. Have a custom bing search
3. Search for something that will not return results from Bing
### Expected behavior
Expect the tool to return "No bing results found" and then a response from an agent | Bing Search Tool has key value error | https://api.github.com/repos/langchain-ai/langchain/issues/15384/comments | 1 | 2024-01-02T01:05:50Z | 2024-01-02T23:25:02Z | https://github.com/langchain-ai/langchain/issues/15384 | 2,061,856,348 | 15,384 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
from langchain.vectorstores.neo4j_vector import Neo4jVector
ModuleNotFoundError: No module named 'langchain.vectorstores.neo4j_vector'
### Idea or request for content:
The existing documentation for the Neo4J Vector Index incorrectly indicates the use of "Neo4jVector" from the "langchain.vectorstores" module. However, it appears that there is no implementation of "Neo4jVector" within the Vector stores. | ModuleNotFoundError: No module named 'langchain.vectorstores.neo4j_vector' | https://api.github.com/repos/langchain-ai/langchain/issues/15383/comments | 2 | 2024-01-01T22:32:24Z | 2024-01-02T03:25:12Z | https://github.com/langchain-ai/langchain/issues/15383 | 2,061,806,618 | 15,383 |
[
"hwchase17",
"langchain"
] | ### Feature request
To add Async Client support to MongoDB Vector Stores
### Motivation
Currently, Langhain works very well with the PyMongo Client but not with the Async Clients like Motor it throws error, probably I hope it may not be implemented yet?
Reference:
PyMongo Working Docs - https://python.langchain.com/docs/integrations/vectorstores/mongodb_atlas
Async Error in Colab - https://colab.research.google.com/drive/1uBfiqoRH6rfiCCXhbxYlNxfc9ILizecl?usp=sharing
### Your contribution
I am not sure. | [MongoDB] Async Support for Vector Store | https://api.github.com/repos/langchain-ai/langchain/issues/15377/comments | 8 | 2024-01-01T12:48:06Z | 2024-05-02T07:53:49Z | https://github.com/langchain-ai/langchain/issues/15377 | 2,061,541,643 | 15,377 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain Version: 0.0.353
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Update to 0.0.353
2. Try to import create_async_playwright_browser
3. Get error
### Expected behavior
1. No Error ;D
I can tell that the import was supported as recently as 0.0.350 | Playwright Utilities Removed In 0.0.353, Documentation Not Updated | https://api.github.com/repos/langchain-ai/langchain/issues/15372/comments | 3 | 2024-01-01T02:18:08Z | 2024-04-08T16:08:42Z | https://github.com/langchain-ai/langchain/issues/15372 | 2,061,256,601 | 15,372 |
[
"hwchase17",
"langchain"
] | ### System Info
- Python 3.12.1
- MacOS 14.2.1
- langchain-cli 0.0.20 from pip OR langchain-* from git master branch (commit-ish [26f84b7](https://github.com/langchain-ai/langchain/commit/26f84b74d0f7dc4d2211a1a62d47eec36cb1d726)) -- can reproduce with latest code: langchain 0.0.353, langchain-cli 0.0.20, langchain-community 0.0.7, langchain-core 0.1.4
- pandas 2.1.4
- lancedb 0.4.3
- numpy 1.26.2
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I started with the rag-chroma-private quickstart, then modified it to my needs. I got rid of Chroma due to errors and now use a persistent LanceDB file.
When I try to enter a prompt on the playground, I get a crash (see the bottom of this message).
server.py:
```
from fastapi import FastAPI
from fastapi.responses import RedirectResponse
from langserve import add_routes
from app.chain import chain as rag_private_chain
app = FastAPI()
@app.get("/")
async def redirect_root_to_docs():
return RedirectResponse("/docs")
add_routes(app, rag_private_chain, path="/rag-lancedb-private")
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)
```
chain.py:
```
# Load
from typing import List
from langchain.chat_models import ChatOllama
from langchain.document_loaders import UnstructuredMarkdownLoader
from langchain.embeddings import OllamaEmbeddings
from langchain.prompts import ChatPromptTemplate
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_core.output_parsers import StrOutputParser
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_core.documents.base import Document
import os, sys
#DB
import lancedb
from langchain.vectorstores.lancedb import LanceDB
#Chroma
#from langchain.vectorstores.chroma import Chroma
#from langchain.vectorstores.utils import filter_complex_metadata
files_directory = "/Users/sean/Downloads/posts"
all_splits = None
my_emb = OllamaEmbeddings(model="llama2:70b-chat")
table = None
db = lancedb.connect("./lance.db")
vectorstore = None
#Generating the vectors only has to be done when the data changes
def do_loader():
#List each file in files_directory and loop through them creating an UnstructuredMarkdownLoader for each file
all_splits = []
for filename in os.listdir(files_directory):
full_path = os.path.join(files_directory, filename)
loader = UnstructuredMarkdownLoader(full_path, mode="elements", strategy="hi_res")
data: List[Document] = loader.load()
all_splits.extend(data)
continue
print(f"Got {len(all_splits)} documents")
table = db.create_table(
"rag",
data=[
{
"vector": my_emb.embed_query("Hello World"),
"text": "Hello World",
"id": "1",
}
],
mode="overwrite",
)
#New
vectorstore = LanceDB.from_documents(all_splits, my_emb, connection=table)
print("Added docs to vector store")
#Not calling do_loader() because I have already populated LanceDB with the vectors.
table = db.open_table("rag")
vectorstore = LanceDB(connection=table, embedding=my_emb)
retriever = vectorstore.as_retriever()
print("Loaded DB")
# Prompt
# Optionally, pull from the Hub
# from langchain import hub
# prompt = hub.pull("rlm/rag-prompt")
# Or, define your own:
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
# LLM
ollama_llm = "llama2:70b-chat"
model = ChatOllama(model=ollama_llm)
print("Created Ollama model")
# RAG chain
chain = (
RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
| prompt
| model
| StrOutputParser()
)
# Add typing for input
class Question(BaseModel):
__root__: str
chain = chain.with_types(input_type=Question)
print("Done with chain.py")
```
Error:
```
INFO: Application startup complete.
INFO: 127.0.0.1:64594 - "GET /rag-chroma-private/playground/ HTTP/1.1" 200 OK
INFO: 127.0.0.1:64594 - "GET /rag-chroma-private/playground/assets/index-52e8ab2f.css HTTP/1.1" 200 OK
INFO: 127.0.0.1:64595 - "GET /rag-chroma-private/playground/assets/index-6a0f524c.js HTTP/1.1" 200 OK
INFO: 127.0.0.1:64595 - "GET /rag-chroma-private/playground/favicon.ico HTTP/1.1" 200 OK
INFO: 127.0.0.1:64594 - "POST /rag-chroma-private/stream_log HTTP/1.1" 200 OK
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/sse_starlette/sse.py", line 269, in __call__
await wrap(partial(self.listen_for_disconnect, receive))
File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/sse_starlette/sse.py", line 258, in wrap
await func()
File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/sse_starlette/sse.py", line 215, in listen_for_disconnect
message = await receive()
^^^^^^^^^^^^^^^
File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/uvicorn/protocols/http/h11_impl.py", line 538, in receive
await self.message_event.wait()
File "/opt/homebrew/Cellar/[email protected]/3.12.1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/locks.py", line 212, in wait
await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 15bc21f40
During handling of the above exception, another exception occurred:
+ Exception Group Traceback (most recent call last):
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
| result = await app( # type: ignore[func-returns-value]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
| return await self.app(scope, receive, send)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
| await super().__call__(scope, receive, send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/applications.py", line 116, in __call__
| await self.middleware_stack(scope, receive, send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 186, in __call__
| raise exc
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 164, in __call__
| await self.app(scope, receive, _send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
| await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
| raise exc
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
| await app(scope, receive, sender)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/routing.py", line 754, in __call__
| await self.middleware_stack(scope, receive, send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/routing.py", line 774, in app
| await route.handle(scope, receive, send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/routing.py", line 296, in handle
| await self.app(scope, receive, send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/routing.py", line 75, in app
| await wrap_app_handling_exceptions(app, request)(scope, receive, send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
| raise exc
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
| await app(scope, receive, sender)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/starlette/routing.py", line 73, in app
| await response(scope, receive, send)
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/sse_starlette/sse.py", line 255, in __call__
| async with anyio.create_task_group() as task_group:
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 678, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/langserve/serialization.py", line 90, in default
| return super().default(obj)
| ^^^^^^^
| RuntimeError: super(): __class__ cell not found
|
| The above exception was the direct cause of the following exception:
|
| Traceback (most recent call last):
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/sse_starlette/sse.py", line 258, in wrap
| await func()
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/sse_starlette/sse.py", line 245, in stream_response
| async for data in self.body_iterator:
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/langserve/api_handler.py", line 1049, in _stream_log
| "data": self._serializer.dumps(data).decode("utf-8"),
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/Users/sean/dev/cc/cc/ccvenv/lib/python3.12/site-packages/langserve/serialization.py", line 168, in dumps
| return orjson.dumps(obj, default=default)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| TypeError: Type is not JSON serializable: numpy.ndarray
+------------------------------------
```
### Expected behavior
It's not clear to me which part of the stack is failing, but the expected behavior is that the LLM should generate some output rather than crashing. | RAG crash: TypeError: Type is not JSON serializable: numpy.ndarray | https://api.github.com/repos/langchain-ai/langchain/issues/15371/comments | 9 | 2024-01-01T01:46:45Z | 2024-06-08T16:08:35Z | https://github.com/langchain-ai/langchain/issues/15371 | 2,061,243,335 | 15,371 |
[
"hwchase17",
"langchain"
] | ### System Info
MacOS 14.0, Jupiter with Python 3.11.6
(base) ➜ llm-env pip show langchain
Name: langchain
Version: 0.0.353
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /lib/python3.11/site-packages
Requires: aiohttp, dataclasses-json, jsonpatch, langchain-community, langchain-core, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
when try to execute,
from langchain.agents.agent_toolkits import create_python_agent
from langchain.agents import load_tools, initialize_agent
from langchain.agents import AgentType
from langchain.tools.python.tool import PythonREPLTool
from langchain.python import PythonREPL
Jupiter gave an error:
ValueError: 'lib/python3.11/site-packages/langchain/agents/agent_toolkits' is not in the subpath of 'lib/python3.11/site-packages/langchain_core' OR one path is relative and the other is absolute.
### Expected behavior
Import agent related thing correctly. | ValueError: 'lib/python3.11/site-packages/langchain/agents/agent_toolkits' is not in the subpath of 'lib/python3.11/site-packages/langchain_core' OR one path is relative and the other is absolute. | https://api.github.com/repos/langchain-ai/langchain/issues/15370/comments | 3 | 2024-01-01T01:32:47Z | 2024-02-07T23:07:23Z | https://github.com/langchain-ai/langchain/issues/15370 | 2,061,236,158 | 15,370 |
[
"hwchase17",
"langchain"
] | ### Feature request
The ollama integration assumes that all models are served on "localhost:11434", if the ollama service is hosted on a different machine, the integration will fail.
Can we add an environment variable that if present overrides this url, so the correct url for the ollama server can be set.
### Motivation
In My setup ollama sits on a separate machine that is resourced for serving LLMs.
### Your contribution
I'm afraid I don't have any knowledge of python, go, cpp and rust only. | Ability to set ollama serve url | https://api.github.com/repos/langchain-ai/langchain/issues/15365/comments | 4 | 2023-12-31T20:05:56Z | 2024-06-08T16:08:31Z | https://github.com/langchain-ai/langchain/issues/15365 | 2,061,158,064 | 15,365 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.353
pygpt4all 1.1.0
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.llms import GPT4ALL, LlamaCpp
### Expected behavior
ImportError: cannot import name 'GPT4ALL' from 'langchain.llms' (/home/rkuo/.local/lib/python3.10/site-packages/langchain/llms/__init__.py) | ImportError: cannot import name 'GPT4ALL' from 'langchain.llms' | https://api.github.com/repos/langchain-ai/langchain/issues/15362/comments | 3 | 2023-12-31T19:14:23Z | 2024-04-09T16:12:57Z | https://github.com/langchain-ai/langchain/issues/15362 | 2,061,148,468 | 15,362 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.9
LangChain 0.0.339
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I created new instance of SQLDatabase and executed SQL queries using the 'run' method.
The problem is that when I execute a simple SELECT query the result returns without the column names, only the values, and therefore I can't tell which value relates to which column.
I saw in the implementation the following code in the 'run' method:
```
result = self._execute(command, fetch)
# Convert columns values to string to avoid issues with sqlalchemy
# truncating text
res = [
tuple(truncate_word(c, length=self._max_string_length) for c in r.values())
for r in result
]
```
and I'm wondering if this data manipulation is done by intention or maybe it's a bug.
### Expected behavior
For example:
executes the following SQL: select id, count(*) as num_count from some_table
expected result: [{'id': 383, 'num_count': 10}]
actual result: [(383, 10)] | SQLDatabase returns result without column names. | https://api.github.com/repos/langchain-ai/langchain/issues/15360/comments | 1 | 2023-12-31T17:17:04Z | 2024-04-07T16:07:44Z | https://github.com/langchain-ai/langchain/issues/15360 | 2,061,123,042 | 15,360 |
[
"hwchase17",
"langchain"
] | ### System Info
- LangChain: 0.0.353
- System: Ubuntu 22.04
- Python: 3.10.12
### Information
I run the code in the quickstart part of the [document](https://python.langchain.com/docs/get_started/quickstart#agent), code:
```python
from langchain.chat_models import ChatOpenAI
from langchain import hub
from langchain.agents import create_openai_functions_agent
from langchain.agents import AgentExecutor
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-functions-agent")
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
However, the Python interpreter told me:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_665358/3241410387.py in <module>
7 prompt = hub.pull("hwchase17/openai-functions-agent")
8 llm = ChatOpenAI(openai_api_key=openai_api_key, model="gpt-3.5-turbo", temperature=0)
----> 9 agent = create_openai_functions_agent(llm=llm, tools=tools, prompt=prompt)
10 agent_executor = AgentExecutor(agent, tools, verbose=True)
[~/.local/lib/python3.10/site-packages/langchain/agents/openai_functions_agent/base.py](https://vscode-remote+ssh-002dremote-002b158-002e132-002e9-002e210.vscode-resource.vscode-cdn.net/home/iot/Documents/langchain/~/.local/lib/python3.10/site-packages/langchain/agents/openai_functions_agent/base.py) in create_openai_functions_agent(llm, tools, prompt)
285 )
286 llm_with_tools = llm.bind(
--> 287 functions=[format_tool_to_openai_function(t) for t in tools]
288 )
289 agent = (
[~/.local/lib/python3.10/site-packages/langchain/agents/openai_functions_agent/base.py](https://vscode-remote+ssh-002dremote-002b158-002e132-002e9-002e210.vscode-resource.vscode-cdn.net/home/iot/Documents/langchain/~/.local/lib/python3.10/site-packages/langchain/agents/openai_functions_agent/base.py) in <listcomp>(.0)
285 )
286 llm_with_tools = llm.bind(
--> 287 functions=[format_tool_to_openai_function(t) for t in tools]
288 )
289 agent = (
[~/.local/lib/python3.10/site-packages/langchain_community/tools/convert_to_openai.py](https://vscode-remote+ssh-002dremote-002b158-002e132-002e9-002e210.vscode-resource.vscode-cdn.net/home/iot/Documents/langchain/~/.local/lib/python3.10/site-packages/langchain_community/tools/convert_to_openai.py) in format_tool_to_openai_function(tool)
10 def format_tool_to_openai_function(tool: BaseTool) -> FunctionDescription:
11 """Format tool into the OpenAI function API."""
---> 12 if tool.args_schema:
13 return convert_pydantic_to_openai_function(
14 tool.args_schema, name=tool.name, description=tool.description
AttributeError: 'VectorStoreRetriever' object has no attribute 'args_schema'
```
It seems that some packages have version incompatibility.
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Directly run the following code:
```
from langchain.chat_models import ChatOpenAI
from langchain import hub
from langchain.agents import create_openai_functions_agent
from langchain.agents import AgentExecutor
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-functions-agent")
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
```
### Expected behavior
It should run successfully without any error report. | AttributeError: 'VectorStoreRetriever' object has no attribute 'args_schema' | https://api.github.com/repos/langchain-ai/langchain/issues/15359/comments | 2 | 2023-12-31T15:17:25Z | 2024-04-10T16:15:34Z | https://github.com/langchain-ai/langchain/issues/15359 | 2,061,090,976 | 15,359 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.353
Python 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)]
Windows 11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chat_models.openai import ChatOpenAI
from langchain_community.chat_loaders.facebook_messenger import FolderFacebookMessengerChatLoader, SingleFileFacebookMessengerChatLoader
from pathlib import Path
import os
chat_file = Path("data/my-fb-folder/messages/inbox/message-dir/message_1.json")
loader = SingleFileFacebookMessengerChatLoader(chat_file)
loader.load()
```
Stacktrace:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[11], [line 8](vscode-notebook-cell:?execution_count=11&line=8)
[6](vscode-notebook-cell:?execution_count=11&line=6) chat_file = Path("data/my-fb-folder/your_activity_across_facebook/messages/inbox/message-dir/message_1.json")
[7](vscode-notebook-cell:?execution_count=11&line=7) loader = SingleFileFacebookMessengerChatLoader(chat_file)
----> [8](vscode-notebook-cell:?execution_count=11&line=8) loader.load()
File [c:\Users\th4tkh13m\miniconda3\envs\rag\lib\site-packages\langchain_community\chat_loaders\base.py:16](file:///C:/Users/th4tkh13m/miniconda3/envs/rag/lib/site-packages/langchain_community/chat_loaders/base.py:16), in BaseChatLoader.load(self)
[14](file:///C:/Users/th4tkh13m/miniconda3/envs/rag/lib/site-packages/langchain_community/chat_loaders/base.py:14) def load(self) -> List[ChatSession]:
[15](file:///C:/Users/th4tkh13m/miniconda3/envs/rag/lib/site-packages/langchain_community/chat_loaders/base.py:15) """Eagerly load the chat sessions into memory."""
---> [16](file:///C:/Users/th4tkh13m/miniconda3/envs/rag/lib/site-packages/langchain_community/chat_loaders/base.py:16) return list(self.lazy_load())
File [c:\Users\th4tkh13m\miniconda3\envs\rag\lib\site-packages\langchain_community\chat_loaders\facebook_messenger.py:43](file:///C:/Users/th4tkh13m/miniconda3/envs/rag/lib/site-packages/langchain_community/chat_loaders/facebook_messenger.py:43), in SingleFileFacebookMessengerChatLoader.lazy_load(self)
[39](file:///C:/Users/th4tkh13m/miniconda3/envs/rag/lib/site-packages/langchain_community/chat_loaders/facebook_messenger.py:39) messages = []
[40](file:///C:/Users/th4tkh13m/miniconda3/envs/rag/lib/site-packages/langchain_community/chat_loaders/facebook_messenger.py:40) for m in sorted_data:
[41](file:///C:/Users/th4tkh13m/miniconda3/envs/rag/lib/site-packages/langchain_community/chat_loaders/facebook_messenger.py:41) messages.append(
[42](file:///C:/Users/th4tkh13m/miniconda3/envs/rag/lib/site-packages/langchain_community/chat_loaders/facebook_messenger.py:42) HumanMessage(
---> [43](file:///C:/Users/th4tkh13m/miniconda3/envs/rag/lib/site-packages/langchain_community/chat_loaders/facebook_messenger.py:43) content=m["content"], additional_kwargs={"sender": m["sender_name"]}
[44](file:///C:/Users/th4tkh13m/miniconda3/envs/rag/lib/site-packages/langchain_community/chat_loaders/facebook_messenger.py:44) )
[45](file:///C:/Users/th4tkh13m/miniconda3/envs/rag/lib/site-packages/langchain_community/chat_loaders/facebook_messenger.py:45) )
[46](file:///C:/Users/th4tkh13m/miniconda3/envs/rag/lib/site-packages/langchain_community/chat_loaders/facebook_messenger.py:46) yield ChatSession(messages=messages)
KeyError: 'content'
```
### Expected behavior
The chat message should be loaded normally. | SingleFileFacebookMessengerChatLoader fails when the chat contains non-text contents such as stickers and photos. | https://api.github.com/repos/langchain-ai/langchain/issues/15356/comments | 3 | 2023-12-31T09:31:07Z | 2024-01-02T14:36:02Z | https://github.com/langchain-ai/langchain/issues/15356 | 2,061,000,149 | 15,356 |
[
"hwchase17",
"langchain"
] | ### System Info
azure-search-documents==11.4.0b8
langchain==0.0.352
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have my own search index with no `metadata` field.
#### Code
```python
from langchain.vectorstores.azuresearch import AzureSearch
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embeddings.embed_query,
semantic_configuration_name="default"
)
query = "How many employees does Contoso Electronics have?"
docs = vector_store.semantic_hybrid_search(
query=query,
search_type="semantic_hybrid",
)
print(docs[0])
```
#### Stack trace
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File \lib\site-packages\langchain_community\vectorstores\azuresearch.py:656, in <listcomp>(.0)
622 semantic_answers_dict[semantic_answer.key] = {
623 "text": semantic_answer.text,
624 "highlights": semantic_answer.highlights,
625 }
626 # Convert results to Document objects
627 docs = [
628 (
629 Document(
630 page_content=result.pop(FIELDS_CONTENT),
631 metadata={
632 **(
633 {FIELDS_ID: result.pop(FIELDS_ID)}
634 if FIELDS_ID in result
635 else {}
636 ),
637 **(
638 json.loads(result[FIELDS_METADATA])
639 if FIELDS_METADATA in result
640 else {
641 k: v
642 for k, v in result.items()
643 if k != FIELDS_CONTENT_VECTOR
644 }
645 ),
646 **{
647 "captions": {
648 "text": result.get("@search.captions", [{}])[0].text,
649 "highlights": result.get("@search.captions", [{}])[
650 0
651 ].highlights,
652 }
653 if result.get("@search.captions")
654 else {},
655 "answers": semantic_answers_dict.get(
--> 656 json.loads(result["metadata"]).get("key"), ""
657 ),
658 },
659 },
660 ),
661 float(result["@search.score"]),
662 float(result["@search.reranker_score"]),
663 )
664 for result in results
665 ]
666 return docs
KeyError: 'metadata'
```
### Expected behavior
I get search results from Azure AI Search.
This error is caused by the hardcoding of `metadata` field name, such as `result["metadata"]` in line 656 of `langchain\libs\community\langchain_community\vectorstores\azuresearch.py`. Therefore, performing a search on an Azure AI Search index that does not have this field will fail. | AzureSearch semantic_hybrid_search fails due to hardcoding of metadata fields | https://api.github.com/repos/langchain-ai/langchain/issues/15355/comments | 1 | 2023-12-31T08:43:04Z | 2024-04-07T16:07:34Z | https://github.com/langchain-ai/langchain/issues/15355 | 2,060,988,370 | 15,355 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
How can I make output templates in langchain? That is, for example, I throw a request for AI to write a joke, but with strict adherence to the template [set-up, punchline] and therefore get as a result:
```
Set-up: ...
Punchline: ...
```
and nothing more
### Suggestion:
_No response_ | Issue: output templates in langchain | https://api.github.com/repos/langchain-ai/langchain/issues/15350/comments | 1 | 2023-12-31T00:18:00Z | 2024-04-07T16:07:29Z | https://github.com/langchain-ai/langchain/issues/15350 | 2,060,892,236 | 15,350 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain 0.0.353
Python 3.10.12
System Ubuntu 22.04
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to modify and run the example on [this page](https://python.langchain.com/docs/use_cases/question_answering/). I am modifying it slightly to use a different embedding tool from the Ollama model.
The last line, which should create the vector store, in the below snippet crashes.
```
import bs4
from langchain import hub
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import WebBaseLoader
from langchain.embeddings import OllamaEmbeddings
from langchain.schema import StrOutputParser
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain_core.runnables import RunnablePassthrough
embeddings_open = OllamaEmbeddings(model="mistral")
loader = WebBaseLoader(
web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
bs_kwargs=dict(
parse_only=bs4.SoupStrainer(
class_=("post-content", "post-title", "post-header")
)
),
)
docs = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
splits = text_splitter.split_documents(docs)
vectorstore = Chroma.from_documents(documents=splits, embedding=embeddings_open)
```
### Expected behavior
I would expect the code to work, unless I'm missing something important. Instead, I get this error.
Any clues are most appreciated. I'm sure it is something simple I overlooked.
```
>>> vectorstore = Chroma.from_documents(documents=splits, embedding = embeddings_open)
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 169, in _new_conn
conn = connection.create_connection(
File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 96, in create_connection
raise err
File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 86, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 756, in urlopen
retries = retries.increment(
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 574, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f8d11256bc0>: Failed to establish a new connection: [Errno 111] Connection refused'))
``` | Chromadb connection error | https://api.github.com/repos/langchain-ai/langchain/issues/15348/comments | 3 | 2023-12-30T18:38:19Z | 2023-12-31T12:17:59Z | https://github.com/langchain-ai/langchain/issues/15348 | 2,060,823,804 | 15,348 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
The [documentation](https://python.langchain.com/docs/use_cases/summarization) describes the different options for summarizing a text, for longer texts the 'map_reduce' option is suggested. It is mentioned further under 'Go deeper' that it is possible to use different LLMs via the `llm` parameter. This seems to work well using the code below with the `chain_type='stuff'` parameter and, in particular, using a local model (in the example below [this model](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF) is used).
```
from langchain.document_loaders import PyPDFLoader
from langchain.llms import CTransformers
from langchain.chains.summarize import load_summarize_chain
# load a PDF-file
loader = PyPDFLoader("C:/xyz.pdf")
docs = loader.load()
# use a local LLAMA2 model
llm = CTransformers(model='./models/llama-2-7b-chat.Q5_K_M.gguf', model_type='llama', config={'context_length': 4096, 'max_new_tokens': 256, 'temperature': 0}, local_files_only=True)
# summarise the text (this works only if it fits into the context length of ~4000 tokens)
chain = load_summarize_chain(llm, chain_type="stuff")
chain.run(docs)
```
However, surprisingly, it returns the following error when using the `chain_type='map_reduce'` parameter: 'OSError: Can't load tokenizer for 'gpt2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'gpt2' is the correct path to a directory containing all relevant files for a GPT2TokenizerFast tokenizer.'
The suggestion [mentioned in this issue on Github](https://github.com/langchain-ai/langchain/issues/9273) doesn't work for the local model used above. It would be great to have more specific information in the LangChain documentation on (1) how to perform text summarization with LangChain using different LLMs, and (2) specifically for using local models that don't require an internet connection and/or require gpt2. Since the above code works with the parameter `chain_type='stuff'` but not with the parameter `chain_type='map_reduce'`, it would be important to explain what happens under the hood so users can make this work for local models.
### Idea or request for content:
_No response_ | DOC: Summarization 'map_reduce' - Can't load tokenizer for 'gpt2' | https://api.github.com/repos/langchain-ai/langchain/issues/15347/comments | 11 | 2023-12-30T17:44:16Z | 2024-06-12T15:24:45Z | https://github.com/langchain-ai/langchain/issues/15347 | 2,060,810,975 | 15,347 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Help me understand how I can save the intermediate data of chain execution results?

### Suggestion:
_No response_ | Issue: <Saving intermediate variable chains ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/15345/comments | 2 | 2023-12-30T15:47:22Z | 2024-04-06T16:06:32Z | https://github.com/langchain-ai/langchain/issues/15345 | 2,060,781,653 | 15,345 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
A few days back, I was referring to the [Prompt templates](https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/) page which now shows: "**Page Not Found**"
### Idea or request for content:
I understand that LangChain is an evolving framework undergoing continuous development.
- Could we consider implementing versioning for the documentation? This would allow users to access specific documentation versions.
- Alternatively, if a section undergoes modification, we could preserve the existing documentation and label it as 'Legacy,' ensuring clarity about deprecated practices. | DOC: Prompt Templates "Page Not Found" | https://api.github.com/repos/langchain-ai/langchain/issues/15342/comments | 3 | 2023-12-30T11:14:48Z | 2024-04-14T16:13:36Z | https://github.com/langchain-ai/langchain/issues/15342 | 2,060,716,887 | 15,342 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain 0.0.353
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Cannot set top_p to useful values via ChatOllama(top_p=0.3), is reduced to value 0 because it's an int:
top_p: Optional[int] = None
top_p must be float.
### Expected behavior
top_p must be a float, 0.3 should appear in ollama log. | _OllamaCommon contains top_p with int-restriction | https://api.github.com/repos/langchain-ai/langchain/issues/15341/comments | 1 | 2023-12-30T10:29:06Z | 2024-01-15T19:59:40Z | https://github.com/langchain-ai/langchain/issues/15341 | 2,060,706,496 | 15,341 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
`below is my code for generating custom prompt which takes context and user query and we pass it into model:
def generate_custom_prompt(new_project_qa,query,name,not_uuid):
check = query.lower()
result = new_project_qa(query)
relevant_document = result['source_documents']
context_text="\n\n---\n\n".join([doc.page_content for doc in relevant_document])
# print(context_text,"context_text")
user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
greetings = ['hi', 'hello', 'hey', 'hui', 'hiiii', 'hii', 'hiii', 'heyyy']
if check in greetings:
custom_prompt_template = f"""
Just simply reply with "Hello {name}! How can I assist you today?"
"""
elif check not in greetings and user_experience_inst.custom_prompt:
custom_prompt_template = f"""Answer the question based only on following context: ```{context_text} ```
You are a chatbot designed to provide answers to User's Questions:```{check}```, delimited by triple backticks.
Generate your answer to match the user's requirements: {user_experience_inst.custom_prompt}
If you encounter a question for which you don't know the answer, please respond with 'I don't know' and refrain from making up an answer.
- Before saying 'I don't know,' please re-verify your vector store to ensure the answer is not present in the database.
Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, feel free to ask for clarification.
User's Question: ```{check}```
AI Answer:"""
else:
# Create the custom prompt template
custom_prompt_template = f"""Generate your response exclusively from the provided context: {{context_text}}. You function as a chatbot specializing in delivering detailed answers to the User's Question: ```{{check}} ```, enclosed within triple backticks.
Generate your answer in points in the following format:
1. Point no 1
1.1 Its subpoint in details
1.2 More information if needed.
2. Point no 2
2.1 Its subpoint in details
2.2 More information if needed.
…
N. Another main point.
If you encounter a question for which you don't know the answer based on the predefined points, please respond with 'I don't know' and refrain from making up an answer.
However, if the answer is not present in the predefined points, then Provide comprehensive information related to the user's query.
Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, you can ask for clarification.
User's Question: ```{{check}} ```
AI Answer:"""
# Create the PromptTemplate
custom_prompt = ChatPromptTemplate.from_template(custom_prompt_template)
formatted_prompt = custom_prompt.format(context_text=context_text, check=check)
return formatted_prompt
below is my conversation chain where i am inplementing memory
def retreival_qa_chain(chroma_db_path):
embedding = OpenAIEmbeddings()
vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
llm = ChatOpenAI(temperature=0.1)
memory = ConversationBufferMemory(llm=llm,output_key='answer',memory_key='chat_history',return_messages=True)
retriever = vectordb.as_retriever(search_kwargs={"k": 2})
qa = ConversationalRetrievalChain.from_llm(llm=llm,memory=memory,chain_type="stuff",retriever=retriever,return_source_documents=True,get_chat_history=lambda h : h,verbose=True)
# qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever,return_source_documents=True)
return qa
but I am not getting desired output as expected `
### Suggestion:
_No response_ | Issue: Not getting desired output while implementing memory | https://api.github.com/repos/langchain-ai/langchain/issues/15339/comments | 7 | 2023-12-30T04:32:17Z | 2024-04-06T16:06:27Z | https://github.com/langchain-ai/langchain/issues/15339 | 2,060,626,887 | 15,339 |
[
"hwchase17",
"langchain"
] | ### System Info
New versions
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Start the code
### Expected behavior
Hi , I'm trying to do a chain stuff query, but sometimes when I ask questions I have this error:
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 4097 tokens, however you requested 4177 tokens (3921 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
How can I solve this ? Can I cut my prompt ? If yes how? Or Can I upgrade the max_token ? IF yes How please..
Here is my code:
import getpass
import os
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Pinecone
from langchain_community.vectorstores import Pinecone
from langchain_community.embeddings.openai import OpenAIEmbeddings
import pinecone
import sys
# Set your Pinecone API key and environment
pinecone_api = "API"
pinecone_env = "API"
# Set your OpenAI API key
openai_api = "API"
# Initialize Pinecone
pinecone.init(api_key=pinecone_api, environment=pinecone_env)
# Define the index name
index_name = "rewind"
# Check if the index already exists, if not, create it
if index_name not in pinecone.list_indexes():
pinecone.create_index(name=index_name, metric="cosine", dimension=1536)
# Initialize the OpenAIEmbeddings
embeddings = OpenAIEmbeddings(api_key=openai_api)
# Create or load the Pinecone index
docsearch = Pinecone.from_existing_index(index_name, embeddings)
# Perform similarity search
query = sys.argv[1] if len(sys.argv) > 1 else "what Commits there is in github"
text_splitter = CharacterTextSplitter(chunk_size=3000, chunk_overlap=0)
docs = docsearch.similarity_search(query)
docs = text_splitter.split_documents(docs)
if __name__ == '__main__':
results = docsearch.similarity_search(query)
# Load the question answering chain
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
chain = load_qa_chain(OpenAI(), chain_type="stuff")
answers = chain.run(input_documents=docs, question=query)
print(answers) | This model's maximum context length is 4097 tokens, however you requested 4177 tokens | https://api.github.com/repos/langchain-ai/langchain/issues/15333/comments | 1 | 2023-12-29T23:25:32Z | 2024-04-05T16:08:50Z | https://github.com/langchain-ai/langchain/issues/15333 | 2,060,459,074 | 15,333 |
[
"hwchase17",
"langchain"
] | ### System Info
I've been trying to create a self query retriever so that I can look at metadata field info. This issue comes up. Should I be using another vector store to make this work? I can only really work with FAISS. I cannot use ChromaDB since my Python environment is limited to a previous version.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a FAISS Vectorstore DB
2. Create a metadata_field_info object and pass it to a SelfQuery object
3. Create LLM with this retriever
```python
embedding_function = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY, model="text-embedding-ada-002")
db = FAISS.load_local(input_dir + "/" + "storage/deploy/faiss-db", embedding_function)#, distance_strategy="COSINE", normalize_L2 = True)
# retriever = KNNRetriever(vectorstore=db)
llm = ChatOpenAI(
temperature=0,
verbose=False,
openai_api_key=key,
model_name="gpt-3.5-turbo"
)
metadata_field_info = [
AttributeInfo(
name="source",
description="The document this chunk is from.",
type="string",
),
AttributeInfo(
name="origin",
description="The origin the document came from. Comes from either scraped websites like TheKinection.org, Kinecta.org or database files like Bancworks. Bancworks is the higher priority.",
type="string",
),
AttributeInfo(
name="date_day",
description="The day the document was uploaded.",
type="integer",
),
AttributeInfo(
name="date_month",
description="The month the document was uploaded.",
type="integer",
),
AttributeInfo(
name="date_year",
description="The year the document was uploaded.",
type="integer",
),
]
# retriever = db.as_retriever(search_type="similarity", search_kwargs={'k': 6}, metadata_field_info=metadata_field_info)
retriever = SelfQueryRetriever.from_llm(
llm, db, "Information about where documents originated from and when they were published.", metadata_field_info, verbose=True
)
```
### Expected behavior
Successfully create a SelfQuery retriever with FAISS vector store. | Self query retriever with Vector Store type <class 'langchain_community.vectorstores.faiss.FAISS'> not supported. | https://api.github.com/repos/langchain-ai/langchain/issues/15331/comments | 4 | 2023-12-29T22:05:18Z | 2024-01-11T22:59:30Z | https://github.com/langchain-ai/langchain/issues/15331 | 2,060,431,327 | 15,331 |
[
"hwchase17",
"langchain"
] | ### Feature request
This proposal requests the integration of the latest OpenAI models, specifically gpt-4-1106-preview, into the existing framework of [relevant GitHub project, e.g., LangChain]. The newer models offer significantly larger context windows, which are crucial for complex SQL querying and other advanced functionalities. This feature would involve ensuring compatibility with the latest version of the OpenAI API (version 1.0.0 and beyond), which has undergone substantial changes, including the deprecation of certain features like openai.ChatCompletion. Relevant links:
OpenAI API (1.0.0): [OpenAI API Documentation](https://github.com/openai/openai-python)
Migration Guide: [OpenAI Python Library Migration Guide](https://github.com/openai/openai-python/discussions/742)
### Motivation
The primary motivation for this feature request is to leverage the advanced capabilities of the newer OpenAI models, particularly the extended context windows they offer. These capabilities are essential for applications involving extensive data interaction and complex language understanding, such as SQL database querying and management.
Current limitations with the older models and API versions restrict the potential of applications, especially when dealing with lengthy queries or requiring deeper contextual understanding. For example, while working on a project involving the LangChain framework for SQL database interaction, I encountered the APIRemovedInV1 error, which signifies incompatibility with the latest OpenAI API. This issue underscores the need for updating the framework to align with the latest advancements in language models and API standards.
### Your contribution
Might make my own SQL Agent or modify yours. | Integration with OpenAI's Latest Models and API Compatibility | https://api.github.com/repos/langchain-ai/langchain/issues/15328/comments | 5 | 2023-12-29T20:33:36Z | 2024-04-11T17:54:09Z | https://github.com/langchain-ai/langchain/issues/15328 | 2,060,386,330 | 15,328 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
how to use embeddings in langchain with fireworks?(I need it for RAG) It's just that the documentation only talks about OpenAIEmbeddings
https://python.langchain.com/docs/modules/data_connection/text_embedding/
### Idea or request for content:
RAG with fireworks API | DOC: how to use embeddings in langchain with fireworks? | https://api.github.com/repos/langchain-ai/langchain/issues/15325/comments | 1 | 2023-12-29T19:38:49Z | 2024-04-05T16:08:39Z | https://github.com/langchain-ai/langchain/issues/15325 | 2,060,357,840 | 15,325 |
[
"hwchase17",
"langchain"
] | ### System Info
"langchain": "^0.0.211",
MacOS Sonoma 14.2
Next.js 14.0.4
### Who can help?
@agola11
@hwc
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. pnpm add langchain in a Next.js project
2. Create a Next.js Route handler
3. Create the following route:
```
import { NextResponse } from 'next/server';
import { ChatOllama } from 'langchain/chat_models/ollama';
import { ChatPromptTemplate, MessagesPlaceholder } from 'langchain/prompts';
import { BufferMemory, ChatMessageHistory } from 'langchain/memory';
import { ConversationChain } from 'langchain/chains';
export async function POST(req: Request) {
const data = await req.json();
const prompt = ChatPromptTemplate.fromMessages([
[
'system',
`You are an AI Computer Science Data Structures teaching system that responds to all questions STRICTLY
in JSON format. You will be given a question on DSA concepts. Contents of JSON made by you will be used
to create elements within a node of a graph that displays
explanations of topics, and a user interface that allows users to follow up if they need help or want
more information. There are 4 elements, "Topic", "Description", "Subtopics", "Questions": an array of strings. You will also be given a number of nodes that already
exist, to be able to assign unique ids. IDs MUST BE STRINGS. MAKE SURE YOU ARE ONLY REPLYING WITH JSON AND NOT MARKDOWN
These are the only node types you are allowed to pick from:
"promptNode": USE FOR ALL EXPLANATIONS
"confusedNode": USED WHEN CONFUSED
{
"{DEFINE ID BUT IN "STRING" FORM! +1 HIGHER THAN NUMBER GIVEN}": {
"THE ID AGAIN": {number},
"type": "promptNode",
"position": { "x": 0, "y": 0 },
"data": {
"topic": "{Short name of topic}",
"description": "{The explanation of topic}",
"subtopics": [an array of strings of 5 related topics],
"questions": [an array of objects of 4 related questions and answers, eg: {'q': 'Question?', 'a': 'Ans'}],
"im_confused": [array of concepts mentioned in the description that they could be confused about]
}
}`,
],
new MessagesPlaceholder('history'),
['human', '{input}'],
]);
//@ts-ignore
const chatHistory = [];
const llm = new ChatOllama({
baseUrl: 'http://localhost:11434', // Default value
model: 'mistral', // Default value
});
const memory = new BufferMemory({
returnMessages: true,
memoryKey: 'history',
//@ts-ignore
chatHistory: new ChatMessageHistory(chatHistory),
});
const chain = new ConversationChain({
memory: memory,
prompt: prompt,
llm: llm,
verbose: true,
});
const result = await chain.invoke({
input: data.prompt,
});
console.log(result);
return NextResponse.json(
{
},
{ status: 200 }
);
}
```
### Expected behavior
Model output. | Issue when running a simple ChatOllama prompt in Next.js/TypeScript: "Error: Single '}' in template." | https://api.github.com/repos/langchain-ai/langchain/issues/15318/comments | 2 | 2023-12-29T15:48:07Z | 2023-12-29T16:03:41Z | https://github.com/langchain-ai/langchain/issues/15318 | 2,060,210,050 | 15,318 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I have built a custom LLM Agent by following the Documentation provided. The custom agent contains multiple tools, one of them is the "LLMMathChain" which is giving me ValueError, cause my agent is passing "None" as an Action Input. I want to handle that error. So that my chatbot doesn't break in the middle of a conversation.
## My Custom Agent

## Calculator Tool

## Prompt Template

## Output Parser

### Suggestion:
_No response_ | Issue: Error Handling in Tools used in custom agents | https://api.github.com/repos/langchain-ai/langchain/issues/15317/comments | 1 | 2023-12-29T12:44:32Z | 2024-04-05T16:08:35Z | https://github.com/langchain-ai/langchain/issues/15317 | 2,059,715,813 | 15,317 |
[
"hwchase17",
"langchain"
] | ### Feature request
Presently, JSON can be utilized to enable the multimodal capability of GPT-4 series models within ChatOpenAI and OpenAI. However, this functionality lacks portability.
### Motivation
Using multimodal approaches lacks portability, and GPT-4 isn't the sole model employing multimodal capabilities. Therefore, it becomes imperative to establish a standardized method for accessing various multimodal models.
### Your contribution
I may submit a PR about this if I have spare time | Add common mulit model support | https://api.github.com/repos/langchain-ai/langchain/issues/15316/comments | 3 | 2023-12-29T12:42:22Z | 2024-04-08T16:08:22Z | https://github.com/langchain-ai/langchain/issues/15316 | 2,059,700,790 | 15,316 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
below is my code, How can I implement Conversation Chain along with ConversationSummaryMemory in my code
`def retreival_qa_chain(chroma_db_path):
embedding = OpenAIEmbeddings()
vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
llm = ChatOpenAI(temperature=0.1)
retriever = vectordb.as_retriever(search_kwargs={"k": 2})
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever,return_source_documents=True)
return qa
def create_global_qa_chain():
chroma_db_path = "chroma-databases"
folders = os.listdir(chroma_db_path)
qa_chains = {}
for index, folder in enumerate(folders):
folder_path = f"{chroma_db_path}/{folder}"
project = retreival_qa_chain(folder_path)
qa_chains[folder] = project
return qa_chains`
### Suggestion:
_No response_ | Issue: How can I implement Conversation Chain along with ConversationSummaryMemory | https://api.github.com/repos/langchain-ai/langchain/issues/15315/comments | 1 | 2023-12-29T11:23:25Z | 2024-04-05T16:08:25Z | https://github.com/langchain-ai/langchain/issues/15315 | 2,059,344,749 | 15,315 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am trying to add a specific prompt template to my ConversationalRetrievalChain. This is my current code:
> PROMPT_TEMPLATE = """
Act as the policies interactive Bot that gives advice on the Company policies, Travel policies, and Information security policies for the company.
Do not try to make up an answer. Use only the given pieces of context; do not use your own knowledge.
Chat History:
{chat_history}
Follow Up Input: {question}
"""
qa_prompt = PromptTemplate(input_variables=["chat_history", "question",], template=PROMPT_TEMPLATE)
> chat = ChatOpenAI(
verbose=True,
model_name=MODEl_NAME,
temperature=TEMPERATURE,
max_retries=MAX_RETRIES,
streaming=True,
)
qa_chain =ConversationalRetrievalChain.from_llm(
llm=chat,
retriever=MyVectorStoreRetriever(
vectorstore=vectordb,
search_type="similarity_score_threshold",
search_kwargs={"score_threshold": SIMILARITY_THRESHOLD, "k": 1},
),
return_source_documents=True,
combine_docs_chain_kwargs={'prompt': qa_prompt}, )
response = qa_chain(
{
"question": query,
"chat_history": chat_history,
},
callbacks=[stream_handler],
)
This is the error I'm currently getting,
> qa_chain =ConversationalRetrievalChain.from_llm(
File "/home/sfm/anaconda3/envs/chat_v2/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 360, in from_llm
doc_chain = load_qa_chain(
File "/home/sfm/anaconda3/envs/chat_v2/lib/python3.10/site-packages/langchain/chains/question_answering/__init__.py", line 249, in load_qa_chain
return loader_mapping[chain_type](
File "/home/sfm/anaconda3/envs/chat_v2/lib/python3.10/site-packages/langchain/chains/question_answering/__init__.py", line 81, in _load_stuff_chain
return StuffDocumentsChain(
File "/home/sfm/anaconda3/envs/chat_v2/lib/python3.10/site-packages/langchain/load/serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for StuffDocumentsChain
__root__
document_variable_name context was not found in llm_chain input_variables: ['chat_history', 'question'] (type=value_error)
can you help me to figure out the error and correct it?
### Suggestion:
_No response_ | Issue: document_variable_name context was not found in llm_chain input_variables | https://api.github.com/repos/langchain-ai/langchain/issues/15314/comments | 1 | 2023-12-29T10:42:37Z | 2024-04-05T16:08:20Z | https://github.com/langchain-ai/langchain/issues/15314 | 2,059,302,480 | 15,314 |
[
"hwchase17",
"langchain"
] | ### System Info
lc: 0.0.352, os: ubuntu 22, python 3.10
### Who can help?
### Description
I am encountering a significant performance issue when using Qdrant with HuggingfaceEmbeddings in a CPU-only environment, specifically within a FastAPI endpoint. The process is notably slow, particularly at the `aadd_documents(...)` stage.
### Additional Information
- As a comparison, I tried embedding a document directly using `sentence_transformers`. This approach utilized all CPU cores, resulting in a much faster process.
- I also experimented with a custom implementation, using only necessary functions from [this Qdrant file](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/qdrant.py) to perform `aadd_documents`. This approach also showed improved performance and full CPU utilization.
### Question
Does anyone have an idea or suggestion on what might be causing this performance bottleneck when using Qdrant with HuggingfaceEmbeddings in a CPU-only environment?
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
### Steps to Reproduce
1. Set up Qdrant with HuggingfaceEmbeddings in a CPU-only machine (no GPU).
2. Integrate it within a FastAPI endpoint.
3. Execute `aadd_documents(...)` for documents (for example, documents with around 45K characters).
### Expected behavior
### Expected Behavior
I expected the embedding and addition of documents to Qdrant to be efficient and utilize multiple CPU cores effectively.
### Observed Behavior
- The embedding process for a document of approximately 45K characters took over one minute.
- Resource utilization monitoring showed that only one out of 70 CPU cores was being utilized during the embedding process.
| Slow aadd_documents using Qdrant and HuggingfaceEmbeddings on CPU | https://api.github.com/repos/langchain-ai/langchain/issues/15310/comments | 1 | 2023-12-29T09:45:06Z | 2024-04-05T16:08:14Z | https://github.com/langchain-ai/langchain/issues/15310 | 2,059,251,491 | 15,310 |
[
"hwchase17",
"langchain"
] | null | b | https://api.github.com/repos/langchain-ai/langchain/issues/15307/comments | 2 | 2023-12-29T08:30:47Z | 2023-12-29T08:37:37Z | https://github.com/langchain-ai/langchain/issues/15307 | 2,059,195,701 | 15,307 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.340
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My vector library preserves tens of thousands of documents, but as the document increases, the accuracy rate of the retriever is becoming low, and the correct document cannot be retrieved.
The retrieval cannot be given the document correctly
### Expected behavior
db = FAISS.load_local(VS['comixfaiss'], embeddings)
retriever = db.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.5,"k":5})
logger.info(retriever.get_relevant_documents('3736085'))
LOG:
2023-12-29 15:44:51,650 - loader.py[line:56] - INFO: Successfully loaded faiss with AVX2 support.
2023-12-29 15:44:52,843 - local_doc_qa.py[line:204] - INFO: [Document(page_content='\ufeffSelection: 非校验\n商品编号: 3730559\n商品名称: 飞捷 FJ21325 39-45码(QXGZ)中筒 防水鞋户外雨靴套鞋胶鞋 黑色(单位:双)\n物料编码: 3730559\n大类: 生活用品\n中类: 办公日杂\n小类: 雨伞雨具\n品牌: 梦奇\n颜色: 黑色\n型号: FJ21325\n建议零售价: 77.45\n卖点: 品牌:飞捷 颜色:黑色 型号:FJ21325 包装清单:雨靴*1\n上架状态: 上架\n状态: \n状态信息:', metadata={'source': '/mnt/data/pdf/comixgpt/pd/齐心商城商品数据2万条2023-12-27.csv', 'row': 8975}), Document(page_content='\ufeffSelection: 非校验\n商品编号: 3051396\n商品名称: 得力 9387 三联送(销)货单据 129*188mm 20份/本 黄色 单位:本\n物料编码: 3051396\n大类: 办公文具\n中类: 财务行政用品\n小类: 财务单据\n品牌: 得力\n颜色: 黄色\n型号: 9387\n建议零售价: 4.35\n卖点: 0\n上架状态: 上架\n状态: \n状态信息:', metadata={'source': '/mnt/data/pdf/comixgpt/pd/齐心商城商品数据2万条2023-12-27.csv', 'row': 20709}), Document(page_content='\ufeffSelection: 非校验\n商品编号: 3278812\n商品名称: 惠普\xa0W9055MC\xa0成像鼓 彩色 (单位:个)\n物料编码: 3278812\n大类: 办公耗材\n中类: 打印机耗材\n小类: 硒鼓\n品牌: 惠普\n颜色: 彩色\n型号: W9055MC\n建议零售价: 3645.88\n卖点: 打印机耗材\n上架状态: 上架\n状态: \n状态信息:', metadata={'source': '/mnt/data/pdf/comixgpt/pd/齐心商城商品数据2万条2023-12-27.csv', 'row': 12167}), Document(page_content='\ufeffSelection: 非校验\n商品编号: 3197277\n商品名称: 得力 9307 报刊架 480*360*1450 银色 单位:个\n物料编码: 3197277\n大类: 办公文具\n中类: 会议展示用品\n小类: 报刊/杂志架\n品牌: 得力\n颜色: \n型号: 9307\n建议零售价: 330.33\n卖点: 0\n上架状态: 上架\n状态: \n状态信息:', metadata={'source': '/mnt/data/pdf/comixgpt/pd/齐心商城商品数据2万条2023-12-27.csv', 'row': 16685}), Document(page_content='\ufeffSelection: 非校验\n商品编号: 3278811\n商品名称: 惠普 W9054MC 成像鼓 黑色 (单位:个)\n物料编码: 3278811\n大类: 办公耗材\n中类: 打印机耗材\n小类: 硒鼓\n品牌: 惠普\n颜色: 黑色\n型号: W9054MC\n建议零售价: 2471.19\n卖点: 打印机耗材\n上架状态: 上架\n状态: \n状态信息:', metadata={'source': '/mnt/data/pdf/comixgpt/pd/齐心商城商品数据2万条2023-12-27.csv', 'row': 12168})]
| The retrieval cannot be given the document correctly | https://api.github.com/repos/langchain-ai/langchain/issues/15306/comments | 4 | 2023-12-29T08:00:29Z | 2024-04-08T16:08:17Z | https://github.com/langchain-ai/langchain/issues/15306 | 2,059,175,187 | 15,306 |
[
"hwchase17",
"langchain"
] | Hi @dosu-bot,
This is my code
```
import langchain
from langchain.cache import SQLAlchemyCache, Emb
from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base
from sqlalchemy import Column, Integer, Text
from urllib.parse import quote_plus
from langchain.llms import OpenAI
Base = declarative_base()
class FulltextLLMCache(Base):
__tablename__ = "llm_cache_full_text"
id = Column(Integer, primary_key=True)
prompt = Column(Text, nullable=False)
llm = Column(Text, nullable=False)
idx = Column(Integer)
response = Column(Text)
db_uri = f"mssql+pyodbc://JUPYTER\SQLEXPRESS/my_database?driver=ODBC+Driver+17+for+SQL Server"
cache_engine = create_engine(db_uri, pool_recycle=240, pool_size=20, max_overflow=30)
# Assigning to llm_cache
langchain.llm_cache = SQLAlchemyCache(cache_engine, FulltextLLMCache)
```
The above code is for exact cache which is very low hit rate, how can i do similarity caching?
| How do i use similarity caching in my code? | https://api.github.com/repos/langchain-ai/langchain/issues/15304/comments | 1 | 2023-12-29T07:36:10Z | 2024-04-05T16:08:05Z | https://github.com/langchain-ai/langchain/issues/15304 | 2,059,159,495 | 15,304 |
[
"hwchase17",
"langchain"
] | Hi @dosu-bot.
Below is my code,
```
from langchain.cache import SQLAlchemyCache
from sqlalchemy import create_engine
engine = create_engine("mssql+pyodbc://JUPYTER\SQLEXPRESS/my_database?driver=ODBC+Driver+17+for+SQL Server")
set_llm_cache(SQLAlchemyCache(engine))
memory = ConversationBufferWindowMemory(k=2, memory_key="chat_history", chat_memory=chat_message_history ,return_messages=True, output_key="answer", input_key="question")
retriever = load_emdeddings(cfg.faiss_persist_directory, cfg.embeddings).as_retriever(search_type="similarity_score_threshold",
search_kwargs={"score_threshold": .65,
"k": 2})
custom_prompt_template = """
You are a friendly chatbot named "XYZ", designed to provide assistance and answer queries.
{context}
Chat History: {chat_history}
Question: {question}
"""
# Create a PromptTemplate instance with your custom template
custom_prompt = PromptTemplate(
template=custom_prompt_template,
input_variables=["context", "question", "chat_history", "User_Name", "User_Location"],
)
# Use your custom prompt when creating the ConversationalRetrievalChain
qa = ConversationalRetrievalChain.from_llm(
llm,
verbose=False,
retriever=retriever,
memory=memory,
combine_docs_chain_kwargs={"prompt": custom_prompt},
return_source_documents = True
)
```
if i use llm.predict("Tell me a joke"). I can see the cache is getting stored in db.
but for qa if i ask question its not saving , why?
| Cache not getting saved in ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/15303/comments | 1 | 2023-12-29T06:30:14Z | 2024-04-05T16:07:59Z | https://github.com/langchain-ai/langchain/issues/15303 | 2,059,118,347 | 15,303 |
[
"hwchase17",
"langchain"
] | ### System Info
Hi,
I'm new to this, so I apologize if my lack of in-depth understanding to how this library works caused to me raise a false alarm. Im trying to an ocr on pdf image using the UnstructuredPDFLoader, Im passing the following args:
`
loader = UnstructuredPDFLoader(file_path="myfile.pdf", mode="elements",include_page_break=True,infer_table_structure=False,languages=["Eng"],strategy="hi_res",include_metadata=True,model_name="chipper")`
However I keep getting the following error:
```
OSError: unstructuredio/chipper-v3 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
```
Not sure what Im missing here ?
Thanks
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.document_loaders import UnstructuredPDFLoader
loader = UnstructuredPDFLoader(file_path="myfile.pdf", mode="elements",include_page_break=True,infer_table_structure=False,languages=["Eng"],strategy="hi_res",include_metadata=True,model_name="chipper")
documents = loader.load()
print(documents)
```
### Expected behavior
I should be getting the metadata similar to when I use other models like "yolox" which works fine. I heard chipper model is much better so I wanted to try it. | Using chipper model with hi_res strategy gives an error | https://api.github.com/repos/langchain-ai/langchain/issues/15300/comments | 2 | 2023-12-29T02:33:48Z | 2024-04-05T16:07:54Z | https://github.com/langchain-ai/langchain/issues/15300 | 2,059,008,076 | 15,300 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain = "^0.0.352"
@agola11
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Specify async open ai client upon intialization
client = openai.AsyncOpenAI()
assistant = OpenAIAssistantRunnable(assistant_id=self.assistant_id,as_agent=as_agent,client=client)
Produces error
pydantic.v1.errors.ConfigError: field "client" not yet prepared so type is still a ForwardRef, you might need to call OpenAIAssistantRunnable.update_forward_refs().
### Expected behavior
expect intialization to be successful | Cannot specify asyn clienct for OpenAIAssistantRunnable | https://api.github.com/repos/langchain-ai/langchain/issues/15299/comments | 1 | 2023-12-29T02:29:20Z | 2024-01-29T20:19:49Z | https://github.com/langchain-ai/langchain/issues/15299 | 2,059,006,360 | 15,299 |
[
"hwchase17",
"langchain"
] | ### System Info
Name: langchain
Version: 0.0.352
Name: openai
Version: 1.6.1
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage
BASE_URL = "https://resource.openai.azure.com/"
API_KEY = "abc123"
DEPLOYMENT_NAME = "GPT35"
model = AzureChatOpenAI(
openai_api_base=BASE_URL,
openai_api_version="2023-05-15",
deployment_name=DEPLOYMENT_NAME,
openai_api_key=API_KEY,
openai_api_type="azure",
)
print(model(
[
HumanMessage(
content="Translate this sentence from English to French. I love programming."
)
]
))
### Expected behavior
I get an error about openai module. I get same error when i try and use embeddings.
I can use the openai azure pythons fine with my resource and api key - but langchain is broken.
AttributeError Traceback (most recent call last)
AttributeError: module 'openai' has no attribute 'error' | Azure function not working - openai error with latest builds | https://api.github.com/repos/langchain-ai/langchain/issues/15289/comments | 3 | 2023-12-28T22:42:25Z | 2023-12-30T12:46:52Z | https://github.com/langchain-ai/langchain/issues/15289 | 2,058,918,716 | 15,289 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain version: 0.0.348
Python 3.9.18
Mac OS M2 (Ventura 13.6.2)
AWS Bedrock Titan text express, Claude v2
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
SQLDatabaseChain produces SQL query where the logic is correct but uses double quotes "identifier" hence incorrect for the snowflake SQL which require single quotes 'identifier'
output = SQL: SELECT "company" = "ABC"
desired output = SQL: SELECT 'company' = 'ABC'requires
### Expected behavior
desired out should be snowflake SQL single quotes for the identifier 'ABC' | Incorrect Snowflake SQL dialect in SQLDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/15285/comments | 12 | 2023-12-28T21:26:16Z | 2024-04-22T16:31:04Z | https://github.com/langchain-ai/langchain/issues/15285 | 2,058,832,286 | 15,285 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.