issue_owner_repo (list, length 2) | issue_body (string, 0-261k chars, nullable) | issue_title (string, 1-925 chars) | issue_comments_url (string, 56-81 chars) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, length 20) | issue_updated_at (string, length 20) | issue_html_url (string, 37-62 chars) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### System Info
Traceback (most recent call last):
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_community\vectorstores\azuresearch.py", line 97, in _get_search_client
from azure.search.documents.indexes.models import (
ImportError: cannot import name 'HnswAlgorithmConfiguration' from 'azure.search.documents.indexes.models' (C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\azure\search\documents\indexes\models\__init__.py)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\vivek\OneDrive\Desktop\Hackathon\userdoc1.py", line 47, in <module>
vector_store: AzureSearch = AzureSearch(
^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_community\vectorstores\azuresearch.py", line 299, in __init__
self.client = _get_search_client(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_community\vectorstores\azuresearch.py", line 105, in _get_search_client
from azure.search.documents.indexes.models import (
ImportError: cannot import name 'HnswVectorSearchAlgorithmConfiguration' from 'azure.search.documents.indexes.models' (C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\azure\search\documents\indexes\models\__init__.py)
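For context, both import failures point at the installed `azure-search-documents` SDK rather than at LangChain itself: the HNSW configuration class was renamed between SDK releases, so the wrapper's two import attempts can both miss. A minimal check of which naming the installed SDK actually exposes (an illustrative diagnostic, not part of the original report) might look like this:
```python
# Hedged diagnostic sketch: print the installed azure-search-documents version and
# probe for both HNSW class names referenced in the traceback above.
import importlib.metadata
import azure.search.documents.indexes.models as models

print("azure-search-documents", importlib.metadata.version("azure-search-documents"))
for name in ("HnswAlgorithmConfiguration", "HnswVectorSearchAlgorithmConfiguration"):
    print(name, "available" if hasattr(models, name) else "missing in this SDK version")
```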
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
embedding_model="BAAI/bge-small-en-v1.5"
embeddings= HuggingFaceEmbeddings(model_name=embedding_model)
# embeddings = AzureOpenAIEmbeddings(deployment=model, chunk_size=100)
# embeddings = AzureOpenAIEmbeddings(azure_deployment="T-ada-002",openai_api_version="2023-05-15")
index_name: str = "i17"
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=os.environ.get("AZURE_COGNITIVE_SEARCH_API_KEY"),
index_name=index_name,
embedding_function=embeddings.embed_query,
)
loader = AzureBlobStorageContainerLoader(
conn_str=os.environ.get("AZURE_CONN_STRING"),
container=os.environ.get("CONTAINER_NAME"),
)
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=150, chunk_overlap=20)
docs = text_splitter.split_documents(documents)
vector_store.add_documents(documents=docs)
```
### Expected behavior
data from azure cognitive service | Azure | https://api.github.com/repos/langchain-ai/langchain/issues/15007/comments | 1 | 2023-12-21T11:43:02Z | 2024-03-28T16:08:28Z | https://github.com/langchain-ai/langchain/issues/15007 | 2,052,264,900 | 15,007 |
[
"hwchase17",
"langchain"
] | ### System Info
aiohttp==3.9.1
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.2.0
attrs==23.1.0
cachetools==5.3.2
certifi==2023.11.17
charset-normalizer==3.3.2
colorama==0.4.6
dataclasses-json==0.6.3
distro==1.8.0
frozenlist==1.4.1
google-ai-generativelanguage==0.4.0
google-api-core==2.15.0
google-auth==2.25.2
google-generativeai==0.3.2
googleapis-common-protos==1.62.0
greenlet==3.0.2
grpcio==1.60.0
grpcio-status==1.60.0
h11==0.14.0
httpcore==1.0.2
httpx==0.26.0
idna==3.6
jsonpatch==1.33
jsonpointer==2.4
langchain==0.0.352
langchain-community==0.0.5
langchain-core==0.1.2
langchain-google-genai==0.0.5
langsmith==0.0.72
marshmallow==3.20.1
multidict==6.0.4
mypy-extensions==1.0.0
numpy==1.26.2
openai==1.6.0
packaging==23.2
Pillow==10.1.0
proto-plus==1.23.0
protobuf==4.25.1
pyasn1==0.5.1
pyasn1-modules==0.3.0
pydantic==2.5.2
pydantic_core==2.14.5
PyYAML==6.0.1
requests==2.31.0
rsa==4.9
sniffio==1.3.0
SQLAlchemy==2.0.23
tenacity==8.2.3
tqdm==4.66.1
typing-inspect==0.9.0
typing_extensions==4.9.0
urllib3==2.1.0
yarl==1.9.4
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
class StreamingLLMCallbackHandler(BaseCallbackHandler):
"""Callback handler for streaming LLM responses to a queue."""
def __init__(self):
"""
Initialize the StreamingLLMCallbackHandler.
"""
self._is_done = False
self._queue = Queue()
def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
"""
Run on new LLM token. Only available when streaming is enabled.
Args:
token (str): The new LLM token.
**kwargs (Any): Additional keyword arguments.
"""
if is_dev_mode():
print(token, end='')
self._queue.put(EventData(content=token))
def on_llm_end(self, response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any):
"""
Run when LLM ends running.
Args:
response (LLMResult): The LLM processing result.
run_id (UUID): The unique identifier for the current run.
parent_run_id (Optional[UUID]): The unique identifier for the parent run.
**kwargs (Any): Additional keyword arguments.
"""
self._queue.put(EventData(content=None, finish_reason='done'))
self._is_done = True
class LangChainChatService(ChatBaseService):
"""
Service class for handling chat operations using LangChain models.
This class extends the ChatBaseService and provides methods for streaming chat and one-time HTTP chat using
LangChain models.
Methods:
stream_chat_async: Stream chat using similarity search and AI.
http_chat_async: Perform one-time HTTP chat using similarity search and AI.
_qa_task: Helper method to execute the QA task in a separate thread.
_get_qa_chain: Get the conversational retrieval chain for handling chat.
_publish_chat_history: Publish chat history to Kafka.
Attributes:
model (LangchainChatModel): The LangChain chat model.
request_manager: The API request manager.
tool_query: The tool query for chat operations.
"""
def __init__(self, model: LangchainChatModel, tool_query=None):
"""
Initializes the LangChainChatService.
Args:
model (LangchainChatModel): The LangChain chat model.
tool_query (Optional[str]): The tool query for chat operations.
"""
super().__init__()
self.model = model
self.request_manager = api_request_manager_var.get()
self.tool_query = tool_query
async def http_chat_async(self) -> dict:
"""
Perform one-time HTTP chat using similarity search and AI.
Returns:
dict: The response from the chat operation.
"""
formatted_chat_history = [] if self.tool_query else self.get_formatted_chat_history(self.model.chat_history)
qa = self._get_qa_chain(callbacks=[streaming_handler])
qa.return_source_documents = self.model.return_source_documents
qa.return_generated_question = True
query_start = time.time()
question = self.tool_query or self.model.query
qa_response = await qa.ainvoke({"question": question, "chat_history": formatted_chat_history})
query_end = time.time()
result = {
'query_result': qa_response.get("answer"),
'query_time': int((query_end - query_start) * 1000),
'generated_question': qa_response.get('generated_question'),
'source_documents': [document.__dict__ for document in qa_response.get("source_documents", [])],
}
self._publish_chat_history(result)
return result
def _get_qa_chain(self, callbacks: Callbacks = None) -> BaseConversationalRetrievalChain:
"""
Get the conversational retrieval chain for handling chat.
Args:
callbacks (Callbacks): The callbacks to be used.
Returns:
BaseConversationalRetrievalChain: The conversational retrieval chain.
"""
collection_name = get_langchain_collection_name(self.model.client_id)
connection_args = {"host": AppConfig.vector_db_host, "port": AppConfig.vector_db_port}
embeddings = LLMSelector(self.model).get_embeddings()
vector_store = Milvus(embeddings, collection_name=collection_name, connection_args=connection_args)
expression = get_expression_to_fetch_db_text_from_ids(**self.model.model_dump())
# this LLMSelector class return ChatGoogleGenerativeAI instance with streaming
qa_llm = LLMSelector(self.model).get_language_model(streaming=self.model.stream_response, callbacks=callbacks)
condense_question_llm = LLMSelector(self.model).get_language_model()
prompt_selector = get_prompt_selector(human_context=self.model.human_context, system_context=self.model.system_context)
qa = ConversationalRetrievalChain.from_llm(
llm=qa_llm,
retriever=vector_store.as_retriever(search_type="similarity", search_kwargs={"k": self.model.similarity_top_k, 'expr': expression}),
condense_question_llm=condense_question_llm,
combine_docs_chain_kwargs={"prompt": prompt_selector.get_prompt(qa_llm)}
)
return qa
```
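A hedged sketch of an alternative wiring, using only the objects defined above: callbacks can also be supplied per call through the run config, which propagates the handler to the chain's child runs instead of relying on the handler bound to the LLM at construction time. Whether the Gemini integration emits `on_llm_new_token` still depends on its streaming support, so this is an experiment rather than a confirmed fix.
```python
# Hedged sketch: attach the streaming handler at invocation time via the run config.
# `qa`, `question`, and `formatted_chat_history` are assumed to be the objects built above.
streaming_handler = StreamingLLMCallbackHandler()

qa_response = await qa.ainvoke(
    {"question": question, "chat_history": formatted_chat_history},
    config={"callbacks": [streaming_handler]},
)
```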
### Expected behavior
Chain should trigger `on_llm_new_token` method of callback handler when streaming true for ChatGoogleGenerativeAI. | ConversationalRetrievalChain not working with stream callback handler for ChatGoogleGenerativeAI | https://api.github.com/repos/langchain-ai/langchain/issues/15006/comments | 1 | 2023-12-21T11:26:22Z | 2024-02-27T04:48:21Z | https://github.com/langchain-ai/langchain/issues/15006 | 2,052,241,949 | 15,006 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Traceback (most recent call last):
File "c:\Users\vivek\OneDrive\Desktop\SOPPOC\version.py", line 57, in <module>
openAIEmbedd = FAISS.from_documents(texts, embeddings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\vectorstores.py", line 510, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_community\vectorstores\faiss.py", line 914, in from_texts
embeddings = embedding.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_community\embeddings\openai.py", line 667, in embed_documents
return self._get_len_safe_embeddings(texts, engine=engine)
^^^^^^^^^^^
ValueError: not enough values to unpack (expected 2, got 1)
Why am I getting this error?
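For what it's worth, the unpacking happens inside `_get_len_safe_embeddings`, so a mismatch between the installed `openai` SDK and the `langchain`/`langchain-community` releases is a plausible first thing to rule out. A small diagnostic that only prints the versions involved (an illustrative sketch, not part of the original report):
```python
# Hedged diagnostic sketch: list the package versions relevant to OpenAIEmbeddings
# so they can be compared against the versions the installed langchain release expects.
import importlib.metadata

for pkg in ("openai", "langchain", "langchain-community", "langchain-core", "tiktoken"):
    try:
        print(pkg, importlib.metadata.version(pkg))
    except importlib.metadata.PackageNotFoundError:
        print(pkg, "not installed")
```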
### Suggestion:
_No response_ | ValueError: not enough values to unpack (expected 2, got 1) | https://api.github.com/repos/langchain-ai/langchain/issues/15005/comments | 2 | 2023-12-21T11:13:46Z | 2024-04-03T16:08:09Z | https://github.com/langchain-ai/langchain/issues/15005 | 2,052,224,049 | 15,005 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.10.10
langchain 0.0.350
langchain-community 0.0.3
langchain-core 0.1.1
Windows Machine
VSCode
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here I am pretty much following the official tutorial with a modification where I used Pinecone Vector DB instead of Chroma.
Tutorial link- https://python.langchain.com/docs/use_cases/question_answering/conversational_retrieval_agents#the-prompt-template
Code
```
loader = DirectoryLoader('/home/azureuser/cloudfiles/code/Users/shamus/Chat With Your Data /docs/CPG', glob="./*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
texts = text_splitter.split_documents(documents)
pinecone.init(
api_key = PINECONE_API_KEY,
environment = PINECONE_ENV
)
#Use Pinecone
index_name = 'openai-ada'
from langchain.embeddings import AzureOpenAIEmbeddings
embeddings = AzureOpenAIEmbeddings(
azure_deployment="embedding",
openai_api_version="2023-05-15", #or 2023-12-01-preview
)
docsearch = Pinecone.from_documents(texts, embeddings, index_name = index_name)
retriever = docsearch.as_retriever(include_metadata=True, metadata_key = 'source', search_type="mmr")
from langchain.agents.agent_toolkits.conversational_retrieval.tool import create_retriever_tool
tool = create_retriever_tool(
retriever,
name="Cardiology_CPG", #has some naming convention:1 to 64 characters long,only contain alphanumeric characters, (_), (-)
description="Searches and returns documents regarding CPG Cardiology in Malaysia.",
)
tools = [tool]
from langchain.agents.agent_toolkits import create_conversational_retrieval_agent
chat_llm = AzureChatOpenAI(
openai_api_version="2023-12-01-preview",
azure_deployment="Test1",
temperature=0
)
agent_executor = create_conversational_retrieval_agent(chat_llm, tools=tools, verbose=True)
result = agent_executor({"input": "Hi, what are the key latest updates of Clinical Practice Guidelines in Cardiology this year?"})
```
As part of my sanity check, I ran the following, which all returned intelligible outputs.
```
docs = retriever.get_relevant_documents("What are the latest guidelines in CPG Cardiology in Malaysia?")
print(docs)
# Check the properties of the first tool (assuming only one tool is added)
if tools:
first_tool = tools[0]
print(f"Tool Name: {first_tool.name}")
print(f"Tool Description: {first_tool.description}")
chat_llm.predict("Hello world!")
```
Error
```
> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
NotFoundError Traceback (most recent call last)
Cell In[129], line 1
----> 1 result = agent_executor({"input": "Hi, what are the key latest updates of Clinical Practice Guidelines in Cardiology this year?"})
File /anaconda/envs/langchain/lib/python3.10/site-packages/langchain/chains/base.py:312, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    310 except BaseException as e:
    311     run_manager.on_chain_error(e)
--> 312     raise e
    313 run_manager.on_chain_end(outputs)
    314 final_outputs: Dict[str, Any] = self.prep_outputs(
    315     inputs, outputs, return_only_outputs
    316 )
File /anaconda/envs/langchain/lib/python3.10/site-packages/langchain/chains/base.py:306, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    299 run_manager = callback_manager.on_chain_start(
    300     dumpd(self),
    301     inputs,
    302     name=run_name,
    303 )
    304 try:
    305     outputs = (
--> 306         self._call(inputs, run_manager=run_manager)
    307         if new_arg_supported
    308         else self._call(inputs)
...
(...)
    937     stream_cls=stream_cls,
    938 )
NotFoundError: Error code: 404 - {'error': {'message': 'Unrecognized request argument supplied: functions', 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
My understanding of the error is that something is not working with the 'tools', although I can't work out why, because the retriever is working just fine. I'd appreciate any advice on this.
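One way to narrow this down (a hedged sketch, not part of the original report): `create_conversational_retrieval_agent` builds an OpenAI-functions agent, so the same 404 should be reproducible with a single direct request that passes a `functions` argument to the deployment. The deployment name and the function definition below are placeholders.
```python
# Hedged probe: send one chat request with a `functions` argument straight to the Azure
# deployment; if this raises the same 404, the deployment/API version does not accept
# function calling, and any OpenAI-functions agent will fail the same way.
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

probe_llm = AzureChatOpenAI(
    openai_api_version="2023-12-01-preview",
    azure_deployment="Test1",  # placeholder: the deployment under test
    temperature=0,
)

noop_function = {
    "name": "noop",
    "description": "Placeholder function used only to probe function-calling support.",
    "parameters": {"type": "object", "properties": {}},
}

print(probe_llm.predict_messages([HumanMessage(content="Say hi")], functions=[noop_function]))
```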
### Expected behavior
Expect 'normal' chatbot output | Agent Executor Error code: 404 | https://api.github.com/repos/langchain-ai/langchain/issues/15004/comments | 2 | 2023-12-21T09:52:59Z | 2023-12-22T07:08:41Z | https://github.com/langchain-ai/langchain/issues/15004 | 2,052,095,894 | 15,004 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
_No response_
### Suggestion:
_No response_ | Does ConversationalRetrievalChain Support Streaming Replies?how to use this streaming with custom pre-trained models? | https://api.github.com/repos/langchain-ai/langchain/issues/15002/comments | 1 | 2023-12-21T08:49:32Z | 2024-03-28T16:08:17Z | https://github.com/langchain-ai/langchain/issues/15002 | 2,051,982,197 | 15,002 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I passed my vectorstore as a retriever for the context, but the rag_chain response comes from OpenAI's general knowledge rather than from my vectorstore data. Please give some support to fix this.
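Since the snippet itself is not shown, here is a hedged sketch of the usual pattern for this kind of rag_chain, where the retriever output is piped into the prompt's `context` slot so answers come from the vectorstore; all names below are illustrative.
```python
# Hedged sketch of a retriever-grounded RAG chain (illustrative names only).
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below. If the context does not "
    "contain the answer, say you don't know.\n\nContext:\n{context}\n\nQuestion: {question}"
)

retriever = vectorstore.as_retriever()  # `vectorstore` is assumed to already exist

rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(temperature=0)
    | StrOutputParser()
)

print(rag_chain.invoke("your question here"))
```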
| Issue: prompt template response will be come's from openai not from my vectorestore | https://api.github.com/repos/langchain-ai/langchain/issues/15001/comments | 1 | 2023-12-21T08:43:50Z | 2023-12-22T04:58:46Z | https://github.com/langchain-ai/langchain/issues/15001 | 2,051,972,351 | 15,001 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am not getting output as per the prompt. What necessary modifications or changes do I need to make?
```python
def chat_langchain(new_project_qa, query, not_uuid):
check = query.lower()
relevant_document = result['source_documents']
user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
custom_prompt_template = f"""You are a Chatbot answering questions. Use the following pieces of context to answer the question at the end. If you don't know the answer, say that you don't know, don't try to make up an answer.
{relevant_document}
Question: {check}
Helpful Answer:"""
CUSTOMPROMPT = PromptTemplate(
template=custom_prompt_template, input_variables=["context", "question"]
)
print(CUSTOMPROMPT,"------------------")
new_project_qa.combine_documents_chain.llm_chain.prompt = CUSTOMPROMPT
result = new_project_qa(query)
if relevant_document:
source = relevant_document[0].metadata.get('source', '')
# Check if the file extension is ".pdf"
file_extension = os.path.splitext(source)[1]
if file_extension.lower() == ".pdf":
source = os.path.basename(source)
# Retrieve the UserExperience instance using the provided not_uuid
user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
bot_ending = user_experience_inst.bot_ending_msg if user_experience_inst.bot_ending_msg is not None else ""
# Create the list_json dictionary
if bot_ending != '':
list_json = {
'bot_message': result['result'] + '\n\n' + str(bot_ending),
"citation": source
}
else:
list_json = {
'bot_message': result['result'] + str(bot_ending),
"citation": source
}
else:
# Handle the case when relevant_document is empty
list_json = {
'bot_message': result['result'],
'citation': ''
}
# Return the list_json dictionary
return list_json
```
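A hedged sketch of an alternative setup (assuming `new_project_qa` is a RetrievalQA-style chain and that `llm` and `retriever` are available): the custom prompt is supplied once at construction time with `{context}` and `{question}` left as placeholders, instead of interpolating the retrieved documents into the template string and patching the chain's internals on every request.
```python
# Hedged sketch: build the chain with the custom prompt up front (illustrative only).
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate

custom_prompt = PromptTemplate(
    template=(
        "You are a Chatbot answering questions. Use the following pieces of context to "
        "answer the question at the end. If you don't know the answer, say that you "
        "don't know, don't try to make up an answer.\n\n"
        "{context}\n\nQuestion: {question}\nHelpful Answer:"
    ),
    input_variables=["context", "question"],
)

new_project_qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
    chain_type_kwargs={"prompt": custom_prompt},
)
```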
### Suggestion:
_No response_ | Issue: not getting output as per prompt,what is neccessary changes i need to do? | https://api.github.com/repos/langchain-ai/langchain/issues/15000/comments | 3 | 2023-12-21T08:28:42Z | 2024-03-28T16:08:13Z | https://github.com/langchain-ai/langchain/issues/15000 | 2,051,951,382 | 15,000 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
404 page: https://python.langchain.com/docs/contributing/integration
referenced by: https://python.langchain.com/docs/contributing/

### Idea or request for content:
_No response_ | DOC: Document PAGE NOT FOUND | https://api.github.com/repos/langchain-ai/langchain/issues/14998/comments | 5 | 2023-12-21T07:49:36Z | 2023-12-24T09:09:50Z | https://github.com/langchain-ai/langchain/issues/14998 | 2,051,897,510 | 14,998 |
[
"hwchase17",
"langchain"
] | ### Feature request
Add an implementation of the '_create_chat_result()' method to the MiniMax integration so that it can be accepted as one of the chat models.
### Motivation
Currently MiniMax's chat functionality does not work properly with Langchain, as described in this issue:
https://github.com/langchain-ai/langchain/issues/14796
The investigation into this bug suggests a missing implementation of the '_create_chat_result()' method. With a proper implementation, the `_generate` method will be able to return `ChatResult` objects instead of plain `str` values, which are not accepted.
### Your contribution
I am currently investigating how to implement it myself, and I am happy to provide any support, including discussion, testing, etc.
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am having a problem adding function calling to a chain of type ConversationalRetrievalChain. I need help finding a solution.
Here is my code, which creates a ConversationalRetrievalChain to retrieve local knowledge and generate chat history information in a summary format. It works fine. However, when I try to add a call to weather_function, I don't know where to add it. I have browsed most of the documentation and couldn't find a solution. Can anyone help me? Thank you!
```python
documents = TextLoader("./file/text.txt").load()
text_splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=50)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings(openai_api_key=APIKEY, openai_api_base=OPENAI_API_BASE)
db = FAISS.from_documents(docs, embeddings)
retriever = db.as_retriever()
Template = """You are a good man and happy to chat with everyone:
{context}
history chat information in summary:
{chat_history}
Question: {question}
"""
prompt = PromptTemplate(
input_variables=["context", "chat_history", "question"], template=Template
)
output_parser = StrOutputParser()
model = ChatOpenAI(
model_name=DEFAULT_MODEL,
openai_api_key=APIKEY,
openai_api_base=OPENAI_API_BASE,
temperature=0.9,
)
memory = ConversationSummaryMemory(
llm=model, memory_key="chat_history", return_messages=True
)
conversation_chain = ConversationalRetrievalChain.from_llm(
llm=model,
retriever=db.as_retriever(),
memory=memory,
combine_docs_chain_kwargs={'prompt': prompt},
verbose=False,
)
```
function calling : weather_function
```python
class WeatherSearch(BaseModel):
"""Call this with an airport code to get the weather at that airport"""
airport_code: str = Field(description="airport code to get weather for")
weather_function = convert_pydantic_to_openai_function(WeatherSearch)
```
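One possible direction (a hedged sketch, not a drop-in fix): instead of wiring the weather function into the ConversationalRetrievalChain itself, expose both the retriever and the weather lookup as tools to an OpenAI-functions agent, which lets the model decide when to call the function. `model` and `db` are assumed to be the objects defined above, `WeatherSearch` is the schema above, and `get_weather` is a placeholder implementation.
```python
# Hedged sketch: retrieval and the weather lookup as two tools behind a functions agent.
from langchain.agents import AgentType, initialize_agent
from langchain.agents.agent_toolkits import create_retriever_tool
from langchain.tools import StructuredTool

def get_weather(airport_code: str) -> str:
    """Placeholder weather lookup for an airport code."""
    return f"Weather at {airport_code}: sunny (placeholder)"

weather_tool = StructuredTool.from_function(
    func=get_weather,
    name="weather_search",
    description="Call this with an airport code to get the weather at that airport",
    args_schema=WeatherSearch,
)

retriever_tool = create_retriever_tool(
    db.as_retriever(),
    name="local_knowledge",
    description="Searches the local knowledge base loaded from text.txt",
)

agent = initialize_agent(
    tools=[weather_tool, retriever_tool],
    llm=model,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
)

print(agent.run("What's the weather at SFO?"))
```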
### Suggestion:
_No response_ | Issue: Adding function calling to ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/14988/comments | 1 | 2023-12-21T02:57:09Z | 2024-03-28T16:08:02Z | https://github.com/langchain-ai/langchain/issues/14988 | 2,051,606,338 | 14,988 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm looking to use a HuggingFace pipeline with Mistral 7B. I am attempting to pass this into an AgentExecutor and use a retriever-based tool.
```python
from langchain.agents.agent_toolkits import create_retriever_tool
from langchain_core.pydantic_v1 import BaseModel, Field
class RetrieverInput(BaseModel):
query: str = Field(description="query to look up in retriever")
fantasy_football_tool = Tool(
name="search_fantasy_football_articles",
description="Searches and returns documents regarding fantasy football.",
func=retriever.get_relevant_documents,
# coroutine=retriever.aget_relevant_documents,
args_schema=RetrieverInput,
)
fantasy_football_tool.run("how is trevor lawrence doing?")
[Document(page_content='Trevor Lawrence\n\nStill in concussion protocol Wednesday\n\nC.J. Stroud', metadata={'source': 'https://www.fantasypros.com/2023/11/rival-fantasy-nfl-week-10/'}),
Document(page_content='Trevor Lawrence\n\nStill in concussion protocol Wednesday\n\nC.J. Stroud', metadata={'source': 'https://www.fantasypros.com/2023/11/nfl-week-10-sleeper-picks-player-predictions-2023/'}),
```
This shows that my tool is working as expected. Now to construct the agent.
```python
prompt_template = """
### [INST]
Assistant is a large language model trained by Mistral.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Context:
------
Assistant has access to the following tools:
{tools}
To use a tool, please use the following format:
```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
```
When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:
```
Thought: Do I need to use a tool? No
Final Answer: [your response here]
```
Begin!
Previous conversation history:
{chat_history}
New input: {input}
Current Scratchpad:
{agent_scratchpad}
[/INST]
"""
# Create prompt from prompt template
prompt = PromptTemplate(
input_variables=['agent_scratchpad', 'chat_history', 'input', 'tool_names', 'tools'],
template=prompt_template,
)
prompt = prompt.partial(
tools=render_text_description(tools),
tool_names=", ".join([t.name for t in tools]),
)
# Create llm chain
# This is a hugging face pipeline.
llm_chain = LLMChain(llm=mistral_llm, prompt=prompt)
from langchain.agents.conversational.output_parser import ConvoOutputParser
from langchain.output_parsers.json import parse_json_markdown
from langchain_core.exceptions import OutputParserException
class CustomOutputParser(ConvoOutputParser):
def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
"""Attempts to parse the given text into an AgentAction or AgentFinish.
Raises:
OutputParserException if parsing fails.
"""
try:
# If the response contains an 'action' and 'action_input'
print(text)
if "Action" in text or "Action Input" in text:
# If the action indicates a final answer, return an AgentFinish
if "Final Answer" in text:
return AgentFinish({"output": text.split('Final Answer:')[1]}, text)
else:
# Otherwise, return an AgentAction with the specified action and
# input
return AgentAction(action, action_input, text)
else:
# If the necessary keys aren't present in the response, raise an
# exception
raise OutputParserException(
f"Missing 'action' or 'action_input' in LLM output: {text}"
)
except Exception as e:
# If any other exception is raised during parsing, also raise an
# OutputParserException
raise OutputParserException(f"Could not parse LLM output: {text}") from e
output_parser = CustomOutputParser()
# Create an agent with your LLMChain
agent = ConversationalAgent(llm_chain=llm_chain, output_parser=output_parser)
memory = ConversationBufferMemory(memory_key="chat_history")
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)
```
I've tested my `agent_executor` using the same question and get this:
```python
Thought: Do I need to use a tool? Yes
Action: search_fantasy_football_articles
Action Input: "trevor lawrence"
Observation: The search returned several articles discussing Trevor Lawrence's performance in fantasy football this week.
Final Answer: According to the articles I found, Trevor Lawrence had a strong performance in fantasy football this week.
```
So it seems like it is pinging the tool, but it's not actually grabbing or using the documents. Any ideas on what I need to change?
### Suggestion:
_No response_ | Issue: Unable to return documents in my custom llm / agent executor implementation | https://api.github.com/repos/langchain-ai/langchain/issues/14987/comments | 2 | 2023-12-21T02:56:10Z | 2024-04-30T16:37:55Z | https://github.com/langchain-ai/langchain/issues/14987 | 2,051,605,621 | 14,987 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I used AsyncCallbackHandler for the callback. When I pushed content to the front end through on_llm_new_token, I found that the markdown code block was missing a newline character, which prevented the front end from rendering the markdown normally. However, when I retrieved the final response and returned the overall answer content, this newline character was present.
<img width="155" alt="image" src="https://github.com/langchain-ai/langchain/assets/14210962/214581c0-f427-4e83-b06e-f1ded11efe20">
I want to ask for help, how should I solve the problem I am currently facing?
### Suggestion:
_No response_ | Issue: Obtain the content output by AsyncCallbackHandler on_llm_new_token and send it to the front end to find that the newline character is missing. | https://api.github.com/repos/langchain-ai/langchain/issues/14986/comments | 1 | 2023-12-21T02:13:25Z | 2023-12-25T09:16:45Z | https://github.com/langchain-ai/langchain/issues/14986 | 2,051,569,869 | 14,986 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain version: v0.0.352
python version: 3.11
Hi there! After that PR https://github.com/langchain-ai/langchain/pull/14713 was merged, I started getting errors in stream() method:
```
File .../lib/python3.11/site-packages/langchain_core/_api/deprecation.py:295, in warn_deprecated(since, message, name, alternative, pending, obj_type, addendum, removal)
293 if not removal:
294 removal = f"in {removal}" if removal else "within ?? minor releases"
--> 295 raise NotImplementedError(
296 f"Need to determine which default deprecation schedule to use. "
297 f"{removal}"
298 )
299 else:
300 removal = f"in {removal}"
NotImplementedError: Need to determine which default deprecation schedule to use. within ?? minor releases
```
I guess this decorator must have a `pending=True` argument.
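For illustration, a hedged sketch of what that suggestion would look like: when a deprecation is marked `pending=True`, `warn_deprecated()` no longer needs a removal schedule, so the `NotImplementedError` above would not be raised. The decorated function here is a dummy stand-in, not the actual patch.
```python
# Hedged illustration of the suggested fix: a pending deprecation needs no removal version.
from langchain_core._api.deprecation import deprecated

@deprecated(since="0.0.352", alternative="the public .stream() API", pending=True)
def _old_stream_helper() -> None:
    """Dummy stand-in for the deprecated helper on the Ollama chat model."""
```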
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chat_models import ChatOllama
llm = ChatOllama(
model="openchat:7b-v3.5-1210-q4_K_M",
)
for chunk in llm.stream("Where were the Olympics held?"):
print(chunk, end="", flush=True)
```
### Expected behavior
successful streaming output from llm | ChatOllama stream method raises warn_deprecated NotImplementedError | https://api.github.com/repos/langchain-ai/langchain/issues/14980/comments | 5 | 2023-12-20T23:51:39Z | 2024-04-26T16:13:31Z | https://github.com/langchain-ai/langchain/issues/14980 | 2,051,462,141 | 14,980 |
[
"hwchase17",
"langchain"
] | ### Feature request
## Context:
I am currently developing a custom scraper using the LangChain tools, following the provided documentation. The core functionality involves extracting paragraphs from a list of URLs using the AsyncHtmlLoader and the Beautiful Soup transformer:
loader = AsyncHtmlLoader(urls)
docs = loader.load()
docs_transformed = self.bs_transformer.transform_documents(docs, tags_to_extract=["p"])
return docs_transformed
## Problem:
The code successfully extracts all paragraphs from the provided URLs. However, in the case of web pages like https://www.aha.org/news/chairpersons-file/2023-12-18-chair-file-leadership-dialogue-reflecting-whats-next-health-care-joanne-conroy-md-dartmouth, there is a recurring issue. At the end of each blog or news article, there is a disclaimer message paragraph:
"Noncommercial use of original content on www.aha.org is granted to AHA Institutional Members, their employees and State, Regional and Metro Hospital Associations unless otherwise indicated. AHA does not claim ownership of any content, including content incorporated by permission into AHA produced materials, created by any third party and cannot grant permission to use, distribute or otherwise reproduce such third party content. To request permission to reproduce AHA content, please [click here](https://askrc.libraryresearch.info/reft100.aspx?key=ExtPerm).
"
## Proposed Solution:
To address this, I explored options and realized that excluding specific parts of the HTML could be a viable solution. Typically, using Beautiful Soup, I can delete specific paragraphs within a div by targeting the class parameter, as demonstrated here:
`soup.find('div', class_='aha-footer')`
## Issue with LangChain Implementation:
Upon inspecting the beautiful_soup_transformer.py in the LangChain repository, particularly the remove_unwanted_tags method, I observed that it is currently implemented to remove unwanted tags in a general sense:
```python
soup = BeautifulSoup(html_content, "html.parser")
for tag in unwanted_tags:
    for element in soup.find_all(tag):
        element.decompose()
return str(soup)
```
This implementation makes it impossible to selectively eliminate specific divs from the HTML.
## Request for Guidance:
I seek guidance on how to ignore specific paragraphs or divs during web scraping with LangChain, particularly to exclude the recurring disclaimer paragraph mentioned above. I would appreciate any recommendations on the recommended approach or if there are plans to enhance the beautiful_soup_transformer.py to accommodate more granular exclusion of HTML elements.
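In the meantime, one possible workaround (a hedged sketch, not an existing LangChain option) is to strip the unwanted div by class with Beautiful Soup before running the standard transformer, so the disclaimer paragraph never reaches the extraction step; the `aha-footer` class name is the one mentioned above, and all other names are illustrative.
```python
# Hedged workaround sketch: drop specific divs by class, then extract paragraphs as usual.
from bs4 import BeautifulSoup
from langchain.document_loaders import AsyncHtmlLoader
from langchain.document_transformers import BeautifulSoupTransformer

def drop_divs_by_class(html: str, class_names) -> str:
    """Remove every <div> whose class matches one of class_names."""
    soup = BeautifulSoup(html, "html.parser")
    for name in class_names:
        for element in soup.find_all("div", class_=name):
            element.decompose()
    return str(soup)

loader = AsyncHtmlLoader(urls)  # `urls` as in the snippet above
docs = loader.load()
for doc in docs:
    doc.page_content = drop_divs_by_class(doc.page_content, ["aha-footer"])

bs_transformer = BeautifulSoupTransformer()
docs_transformed = bs_transformer.transform_documents(docs, tags_to_extract=["p"])
```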
### Motivation
I am performing web scrapping over this specific web page:
https://www.aha.org/news/chairpersons-file/2023-12-18-chair-file-leadership-dialogue-reflecting-whats-next-health-care-joanne-conroy-md-dartmouth
I am extracting all the paragraphs , but at the end of all blogs, news there is a disclaimer message paragraph:
Noncommercial use of original content on www.aha.org is granted to AHA Institutional Members, their employees and State, Regional and Metro Hospital Associations unless otherwise indicated. AHA does not claim ownership of any content, including content incorporated by permission into AHA produced materials, created by any third party and cannot grant permission to use, distribute or otherwise reproduce such third party content. To request permission to reproduce AHA content, please [click here](https://askrc.libraryresearch.info/reft100.aspx?key=ExtPerm).
so I want to ignore that specific paragraph
### Your contribution
not yet | Ignoring Specific Paragraphs or Divs in Web Scraping with LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/14979/comments | 2 | 2023-12-20T23:46:42Z | 2023-12-21T00:27:04Z | https://github.com/langchain-ai/langchain/issues/14979 | 2,051,459,162 | 14,979 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Proposing updates to the SyntheticDataGenerator interface to create a cleaner foundation for building community integrations for synthetic tabular models like [Gretel](https://gretel.ai/tabular-llm) [[docs](https://docs.gretel.ai/reference/tabular-llm)]
### Suggestion:
# Existing Interface
The current `SyntheticDataGenerator` interface requires:
```python
def generate(
subject: str,
runs: int,
extra: Optional[str] = None
) -> List[str]
```
Where:
* `subject`: Subject the synthetic data is about
* `runs`: Number of times to generate the data
* `extra`: Extra instructions for steering
## Proposed Update
I propose changing this to the following:
```python
def generate(
prompt: str,
num_records: int,
optional_dataset: Optional[str, Path, DataFrame]
) -> List[str]
```
## Where:
* `prompt`: User prompt to create synthetic data
* `num_records`: Number of rows to generate
* `optional_dataset`: Dataset to edit/augment
I believe this creates a cleaner interface for synthetic tabular data flows, by combining the `subject` and `extra` parameters into a single field, and allowing the user to clearly specify the number of results they want `num_records`, vs the number of `runs` of the LLM which could generate more than one record each time. The `optional_dataset` arg lets the user prompt the model with a dataset to edit or augment with new synthetic data.
# Requesting Feedback
I would appreciate any thoughts on this proposed update, and happy to open a PR! Before I get started, please let me know:
* If you see any issues with changing the interface
* If an alternative integration approach would be better
* Any other API details to consider
https://python.langchain.com/docs/use_cases/data_generation
My goal is have an intuitive integration for Gretel and future synthetic data models | Issue: Requesting Feedback on Integrating Gretel for Synthetic Tabular Generation | https://api.github.com/repos/langchain-ai/langchain/issues/14975/comments | 1 | 2023-12-20T22:29:45Z | 2024-03-27T16:09:32Z | https://github.com/langchain-ai/langchain/issues/14975 | 2,051,385,822 | 14,975 |
[
"hwchase17",
"langchain"
] | ### System Info
Currently, I am using an OpenAI LLM and Gemini Pro, both through LangChain. I am also using Google's embedding-001 model and the Cohere base model (I tested each embedding, and in both cases the model either replies in English first and then switches to another language, or replies in another language straight away).
Here is my prompt template:
```python
def doc_question_prompt_template():
template = """
You are a helpful assistant that has the ability to answer all users questions to the best of your ability.
Your answers should come from the context you are provided. Provide an answer with detail and not short answers.
Your only response should be in the English language.
Context:
{context}
User: {question}
"""
return PromptTemplate(
input_variables=["question"],
template=template
)
def doc_question_command(body, conversation_contexts):
llmlibrary = LLMLibrary()
channel_id = body['channel_id']
user_id = body['user_id']
context_key = f"{channel_id}-{user_id}"
prompt = ChatPromptTemplate.doc_question_prompt_template()
if context_key not in conversation_contexts:
conversation_contexts[context_key] = {
"memory": ConversationBufferMemory(memory_key="chat_history", output_key="answer", return_messages=True, max_token_limit=1024),
"history": "",
}
user_memory = conversation_contexts[context_key]["memory"]
question = body['text']
conversation = llmlibrary.doc_question(user_memory, prompt, question)
#print(f"Conversation: {conversation}")
return question, conversation
def doc_question(self, user_memory, prompt, question):
llm = ChatGoogleGenerativeAI(model="gemini-pro",temperature=0.0, convert_system_message_to_human=True)
vectordb = self.vectorstore.get_vectordb()
print(f"Vector DB: {vectordb}\n")
retriever = vectordb.as_retriever(
search_type="similarity_score_threshold",
search_kwargs={'score_threshold': 0.8}
)
docs = retriever.get_relevant_documents(question)
print(f"Docs: {docs}\n")
print(f"Initiating chat conversation memory\n")
#print(f"Conversation Memory: {memory}\n")
conversation_chain= ConversationalRetrievalChain.from_llm(
llm,
retriever=retriever,
memory=user_memory,
combine_docs_chain_kwargs={'prompt': prompt},
return_source_documents=True,
verbose=False,
)
#print(f"Conversation chain: {conversation_chain}\n")
return conversation_chain
@app.command("/doc_question")
def handle_doc_question_command(ack, body, say):
# Acknowledge the command request
ack()
print(body)
say(f"🤨 {body['text']}")
question, conversation = ChatHandler.doc_question_command(body, conversation_contexts)
response = conversation({'question': question})
print(f"(INFO) Doc Question Response: {response} {time.time()}")
print(f"(INFO) Doc Question Response answer: {response['answer']} {time.time()}")
say(f"🤖 {response['answer']}")
```
Logs:
[output.txt](https://github.com/langchain-ai/langchain/files/13732990/output.txt)

### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. user sends a message through Slack
2. The message is received by @app.command("/doc_question)
3. ChatHandler.doc_question_command gets called passing the body and conversation_contexts
4. doc_question_command gets information about the message that was sent and gets the doc_question_prompt_template from ChatPromptTemplate module
5. conversation_contexts gets a context key of memory and history
6. llmlibrary.doc question is then called passing user_member, prompt, question
7. the doc_question function uses the ChatGoogleGenerativeAI module and gets the vectordb which is Pinecone
8. uses ConversationalRetrievalChain.from_llm and passes it back to the handler, and the handler passes the question and conversation back to @app.command("/doc_question")
9. The question is then submitted to the LLM and the response is returned within Slack (sometimes in English, sometimes in Spanish or another language)
### Expected behavior
Only reply in english | LLMs start replying in other languages | https://api.github.com/repos/langchain-ai/langchain/issues/14974/comments | 3 | 2023-12-20T22:14:57Z | 2024-03-29T16:07:40Z | https://github.com/langchain-ai/langchain/issues/14974 | 2,051,361,571 | 14,974 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain Version: langchain 0.0.350, langchain-community 0.0.3, langchain-core 0.1.1
Python Version: 3.10.6
Operating System: macOs
Additional Libraries: boto 2.49.0, boto3 1.34.1
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to Reproduce:
- Create an instance of `DynamoDBChatMessageHistory` with a specified table name and session ID.
- Initialize `ConversationTokenBufferMemory` with a `max_token_limit`.
- Attach the memory to a `ConversationChain`.
- Call predict on the `ConversationChain` with some input.
Code sample:
```
import boto3
from langchain.llms import Bedrock
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory
from langchain.memory import ConversationTokenBufferMemory
session = boto3.Session(
aws_access_key_id=os.environ.get('AWS_ACCESS_KEY_ID'),
aws_secret_access_key=os.environ.get('AWS_SECRET_ACCESS_KEY'),
aws_session_token=os.environ.get('AWS_SESSION_TOKEN'),
region_name='us-east-1'
)
dynamodb = session.resource('dynamodb')
chat_sessions_table = dynamodb.Table('SessionTable')
boto3_bedrock = session.client(service_name="bedrock-runtime")
max_tokens_to_sample = 100
temperature = 0
modelId = "anthropic.claude-instant-v1"
top_k = 250
top_p = 0.999
model_kwargs = {
"temperature": temperature,
"max_tokens_to_sample": max_tokens_to_sample,
"top_k": top_k,
"top_p": top_p
}
llm = Bedrock(
client=boto3_bedrock,
model_id=modelId,
region_name='us-east-1',
model_kwargs=model_kwargs,
streaming=True,callbacks=[StreamingStdOutCallbackHandler()]
)
message_history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="10", boto3_session=session)
memory = ConversationTokenBufferMemory(
llm=llm, # Use the Bedrock instance
max_token_limit=100,
return_messages=True,
chat_memory=message_history,
ai_prefix="A",
human_prefix="H"
)
#add the memory to the Chain
conversation = ConversationChain(
llm=llm, verbose=True, memory=memory
)
conversation.predict(input="Hello!")
memory.load_memory_variables({})
```
### Expected behavior
Expected Behavior:
- The `DynamoDBChatMessageHistory` should respect the `max_token_limit` set in `ConversationTokenBufferMemory`, limiting the token count accordingly.
Actual Behavior:
The `DynamoDBChatMessageHistory` does not limit the token count as per the `max_token_limit` set in `ConversationTokenBufferMemory` and keeps saving all the items in memory on the DynamoDB table. | Issue with DynamoDBChatMessageHistory Not Respecting max_token_limit in ConversationTokenBufferMemory | https://api.github.com/repos/langchain-ai/langchain/issues/14957/comments | 8 | 2023-12-20T14:15:02Z | 2024-06-08T16:08:05Z | https://github.com/langchain-ai/langchain/issues/14957 | 2,050,634,701 | 14,957 |
[
"hwchase17",
"langchain"
] | ### Feature request
**Issue Title:** Enhance NLATool Authentication in ChatGPT OpenAPI
**Description:**
I have identified a feature gap in the current implementation of ChatGPT OpenAPI when using NLATool as a proxy for authentication. The existing logic does not fully meet downstream requirements, and I intend to propose a modification to address this issue.
**Proposed Modification:**
I suggest adding a new attribute within the NLATool implementation. During initialization, this attribute should be passed to `NLAToolkit.from_llm_and_ai_plugin`. The subsequent call chain is as follows: `from_llm_and_spec -> _get_http_operation_tools -> NLATool.from_llm_and_method`. The responsibility of `NLATool.from_llm_and_method` is to construct the NLATool component, which includes an underlying `OpenAPIEndpointChain` base package.
The challenge lies in the fact that the `OpenAPIEndpointChain` base package currently lacks support for authentication. To address this, it is essential to load the created attribute into the `OpenAPIEndpointChain`. During the execution of the `_call` method, the authentication logic should be executed.
**Implementation Steps:**
1. Modify the initialization and execution code of the `OpenAPIEndpointChain` class to support authentication.
2. Ensure that the newly added attribute is properly integrated into the `OpenAPIEndpointChain` during its initialization.
3. Implement the authentication logic in the `_call` method of the `OpenAPIEndpointChain`.
Thank you for your consideration.
### Motivation
The current implementation of ChatGPT OpenAPI using NLATool as a proxy for authentication falls short of meeting downstream requirements. By enhancing the NLATool authentication logic, we aim to improve the overall functionality and responsiveness of the system, ensuring it aligns more effectively with user needs. This modification seeks to bridge the existing feature gap and enhance the usability and versatility of the ChatGPT OpenAPI.
### Your contribution
**Expected Challenges:**
While I have not yet started the implementation, I anticipate challenges during the process. One notable challenge is that the `langchain` module does not currently define the core for the authentication class. Consequently, addressing this issue may require changes across multiple modules. A pull request spanning multiple modules may encounter challenges during the review process.
**Request for Feedback:**
Before I commence with the implementation, I would appreciate your insights and guidance on the proposed modification. Your feedback on potential challenges and recommendations for an effective implementation would be invaluable.
| Feature Request - ChatGPT OpenAPI NLATool Authentication Implementation Logic | https://api.github.com/repos/langchain-ai/langchain/issues/14956/comments | 2 | 2023-12-20T13:50:27Z | 2024-03-20T09:55:40Z | https://github.com/langchain-ai/langchain/issues/14956 | 2,050,590,779 | 14,956 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain: 0.0.351
langchain-community: 0.0.4
langchain-core: 0.1.1
langchain-experimental: 0.0.47
python: 3.10.4
### Who can help?
@hwchase17 , @agola11 , @eyurtsev
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create an agent, which works as expected.
2. Create an agent_executor using the above agent.
3. When I try to use the agent_executor, I get the error "TypeError: Agent.plan() got multiple values for argument 'intermediate_steps'".
**Below is the code:**
Create an agent:-
```
agent = initialize_agent(llm=llm,
tools=tools,
agent = AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
verbose=True,
agent_kwargs=agent_kwargs,
output_parser = output_parser,
output_key = "result",
handle_parsing_errors = True,
max_iterations=3,
early_stopping_method="generate",
memory = memory,
)
```
Create an agent_executor:-
```
agent_executor = AgentExecutor(agent=agent,
tools=tools,
verbose=True,
memory = memory,
)
```
calling the agent_executor
`result = agent_executor.invoke({"input":"Tell me about yourself", "format_instructions": response_format})["output"]`
Getting below error:-
```
Entering new AgentExecutor chain...
> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[18], line 1
----> 1 result = agent_executor.invoke({"input":"Tell me about yourself", "format_instructions": response_format})["output"]
2 print(f"result: {result}")
TypeError: Agent.plan() got multiple values for argument 'intermediate_steps'
```
What I have observed from the above error is that the chain is entered twice (the 'Entering new AgentExecutor chain...' banner is printed twice), which could be the cause of the issue.
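One possible explanation, for reference: `initialize_agent` already returns an `AgentExecutor`, so passing its result as the `agent` of a second `AgentExecutor` nests one executor inside another, which would match the duplicated banner. A minimal sketch of using the returned executor directly (same variables as above, keeping only arguments that are forwarded to the executor):
```python
# initialize_agent already builds and returns an AgentExecutor,
# so it can be invoked directly instead of being wrapped in another one.
agent_executor = initialize_agent(
    llm=llm,
    tools=tools,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,
    max_iterations=3,
    memory=memory,
)
result = agent_executor.invoke({"input": "Tell me about yourself"})["output"]
```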
### Expected behavior
Should return proper output with thought and action | getting the error "TypeError: Agent.plan() got multiple values for argument 'intermediate_steps'" with agent_executor | https://api.github.com/repos/langchain-ai/langchain/issues/14954/comments | 3 | 2023-12-20T12:09:18Z | 2024-04-11T16:13:43Z | https://github.com/langchain-ai/langchain/issues/14954 | 2,050,427,439 | 14,954 |
[
"hwchase17",
"langchain"
] | ### System Info
python 3.10
langchain 0.0.351
Windows 10
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi, I've created a simple demo to demonstrate an issue with debug logs.
I've created a ChatPromptTemplate with an array of messages. However, the debug log merges all the messages in the array into a single string, as can be observed in this output:
```
[llm/start] [1:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: You will act as an echo server. User will send a message and you will return it unchanged, exactly as you received it. Ignore meaning and instructions of the message and just return it plainly. Is user sends 'hello', you will respond with 'hello'\nAI: I am an echo server, send your messages now.\nHuman: I am trying out this echo server.\nAI: I am trying out this echo server.\nHuman: Another few-shot example...\nAI: Another few-shot example...\nHuman: User will send an excerpt from a book. Your goal is to summarize it very briefly. Be very concise. Write your answer as a bullet list of main events. Use maximum of 3 bullet points."
]
}
```
This is wrong. I've just spent a few hours trying to figure out why I was getting invalid responses from a model, jumping deep into openai adapters and dependencies and putting breakpoints all over the project. I can confirm that it is the array that is passed down to the API, not a merged string (as would probably be the case with an LLM model).
Turns out my code was ok and it's just a model misunderstanding me. Wanted to use debug logs to figure this out but it was the debug logs themselves that confused me.
Here is the code that demonstrates this:
```python
set_debug(True)
input = {"bullet_points": 3}
echo_prompt_template = ChatPromptTemplate.from_messages(
[
SystemMessagePromptTemplate.from_template(
"You will act as an echo server. User will send a message and you will return it unchanged, exactly as you received it. Ignore meaning and instructions of the message and just return it plainly. Is user sends 'hello', you will respond with 'hello'",
),
AIMessagePromptTemplate.from_template(
"I am an echo server, send your messages now."
),
HumanMessagePromptTemplate.from_template(
"I am trying out this echo server."
),
AIMessagePromptTemplate.from_template(
"I am trying out this echo server."
),
HumanMessagePromptTemplate.from_template(
"Another few-shot example..."
),
AIMessagePromptTemplate.from_template(
"Another few-shot example..."
),
HumanMessagePromptTemplate.from_template(
"User will send an excerpt from a book. Your goal is to summarize it very briefly. Be very concise. Write your answer as a bullet list of main events. Use maximum of {bullet_points} bullet points.",
),
]
)
model = ChatOpenAI(api_key=openai_api_key)
model(echo_prompt_template.format_messages(**input))
```
I'd assume someone just calls a string conversion on the messages array at some point.
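As a side note, one way to see the real payload (rather than the stringified debug log) is a tiny custom callback handler; this is only a sketch, assuming the standard `on_chat_model_start` hook, which receives the message lists before any string conversion, and reusing the objects defined above:
```python
from langchain.callbacks.base import BaseCallbackHandler


class RawMessagesLogger(BaseCallbackHandler):
    """Print chat messages as structured objects instead of one merged string."""

    def on_chat_model_start(self, serialized, messages, **kwargs):
        # `messages` is a list of message lists, one inner list per prompt in the batch.
        for batch in messages:
            for message in batch:
                print(type(message).__name__, "->", message.content)


model = ChatOpenAI(api_key=openai_api_key, callbacks=[RawMessagesLogger()])
model(echo_prompt_template.format_messages(**input))
```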
### Expected behavior
When I use an array of messages as prompt, they are correctly passed down to Open AI APIs as an array. I want to see the same array in debug logs as well. Currently they are coerced into an array of one string instead. | Incorrect debug logs for llm/start prompts | https://api.github.com/repos/langchain-ai/langchain/issues/14952/comments | 1 | 2023-12-20T11:21:35Z | 2024-03-27T16:09:22Z | https://github.com/langchain-ai/langchain/issues/14952 | 2,050,355,938 | 14,952 |
[
"hwchase17",
"langchain"
] |
I am trying to run an LLMChain using `llm_chain = LLMChain(llm=llm, prompt=prompt)`, where `llm` is a custom LLM defined based on https://python.langchain.com/docs/modules/model_io/llms/custom_llm.
While trying to run this I get the following error: `Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, predict, predict_messages (type=type_error)`
Can someone help me with this?
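The error message suggests the chain is being given something that still inherits directly from `BaseLanguageModel`. For reference, the custom LLM guide subclasses `LLM`, which already provides those abstract methods; a minimal sketch (the `_call` body is a placeholder, and `prompt` is the template from the snippet above):
```python
from typing import Any, List, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.chains import LLMChain
from langchain.llms.base import LLM


class MyCustomLLM(LLM):
    """Minimal custom LLM; replace `_call` with a real model invocation."""

    @property
    def _llm_type(self) -> str:
        return "my-custom-llm"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # Placeholder: echo part of the prompt; a real implementation would call the model here.
        return prompt[:100]


llm_chain = LLMChain(llm=MyCustomLLM(), prompt=prompt)
```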
### Suggestion:
_No response_ | Issue: Getting an error while trying to run LLMChain with Custom LLM | https://api.github.com/repos/langchain-ai/langchain/issues/14951/comments | 3 | 2023-12-20T10:28:46Z | 2024-05-08T16:06:50Z | https://github.com/langchain-ai/langchain/issues/14951 | 2,050,271,687 | 14,951 |
[
"hwchase17",
"langchain"
] | ### System Info
I am unable to resolve the issue below:

### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

### Expected behavior
ImportError: cannot import name 'AzureOpenAIEmbeddings' from 'langchain.embeddings' (/opt/conda/lib/python3.10/site-packages/langchain/embeddings/__init__.py) | ImportError: cannot import name 'AzureOpenAIEmbeddings' from 'langchain.embeddings' (/opt/conda/lib/python3.10/site-packages/langchain/embeddings/__init__.py) | https://api.github.com/repos/langchain-ai/langchain/issues/14950/comments | 7 | 2023-12-20T10:26:52Z | 2024-02-01T18:40:51Z | https://github.com/langchain-ai/langchain/issues/14950 | 2,050,268,609 | 14,950 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain: 0.0.338
python: 3.9
### Who can help?
@hwchase17
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
In the following code:
``` python
if self.distance_strategy == DistanceStrategy.MAX_INNER_PRODUCT:
return self._max_inner_product_relevance_score_fn
elif self.distance_strategy == DistanceStrategy.EUCLIDEAN_DISTANCE:
# Default behavior is to use Euclidean distance for relevancy
return self._euclidean_relevance_score_fn
elif self.distance_strategy == DistanceStrategy.COSINE:
return self._cosine_relevance_score_fn
```
When I use MAX_INNER_PRODUCT, the score calculation method is `_max_inner_product_relevance_score_fn`:
``` python
def _max_inner_product_relevance_score_fn(distance: float) -> float:
"""Normalize the distance to a score on a scale of [0, 1]."""
if distance > 0:
return 1.0 - distance
return -1.0 * distance
```
However, if I use MAX_INNER_PRODUCT, the index must be FlatIP:
``` python
if distance_strategy == DistanceStrategy.MAX_INNER_PRODUCT:
index = faiss.IndexFlatIP(len(embeddings[0]))
else:
# Default to L2, currently other metric types not initialized.
index = faiss.IndexFlatL2(len(embeddings[0]))
```
Thus, the distance represents the cosine similarity, meaning the distance should be equivalent to similarity. However, in the method `_max_inner_product_relevance_score_fn`, a larger distance results in a lower score.
Is this a bug?
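A small worked example of the inversion (purely illustrative, using approximately unit vectors so that the inner product equals cosine similarity):
```python
import numpy as np

def max_inner_product_relevance_score_fn(distance: float) -> float:
    # Copied from the snippet above.
    if distance > 0:
        return 1.0 - distance
    return -1.0 * distance

a = np.array([1.0, 0.0])
b_similar = np.array([0.99, 0.14])      # nearly parallel to a
b_dissimilar = np.array([0.14, 0.99])   # nearly orthogonal to a

for b in (b_similar, b_dissimilar):
    d = float(np.dot(a, b))             # what IndexFlatIP returns as the "distance"
    print(round(d, 2), "->", round(max_inner_product_relevance_score_fn(d), 2))
# Prints 0.99 -> 0.01 and 0.14 -> 0.86, i.e. the more similar vector
# ends up with the lower relevance score.
```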
### Expected behavior
I think the distance should be equivalent to similarity. | The calculated score is wrong when using DistanceStrategy.MAX_INNER_PRODUCT (Faiss) | https://api.github.com/repos/langchain-ai/langchain/issues/14948/comments | 3 | 2023-12-20T09:29:25Z | 2024-03-27T16:09:17Z | https://github.com/langchain-ai/langchain/issues/14948 | 2,050,173,519 | 14,948
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am using langchain with the GPT-4 model and the create_pandas_dataframe_agent for my use case. Around 60% of the time I run the code, I get the error below:
An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: `Thought: To answer the question about what the "EffectiveDate" column represents, I need to use common sense based on the column name and the data provided. For the "exists" part, I need to check if there are any standardization issues in the "EffectiveDate" column. I will look at the data provided and think about the possible issues listed.
I have already passed the argument `handle_parsing_errors=True` while creating the agent, but it still gives me the above error, suggesting that I pass this argument.
I have also tried giving other values for `handle_parsing_errors`, such as a custom error message or a function, but I am still stuck with this error most of the time.
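For reference, a sketch of forwarding the flag to the executor that `create_pandas_dataframe_agent` builds, via `agent_executor_kwargs`; I am not certain this is the root cause, but it ensures the retry-on-parse-error behaviour is set on the executor that actually runs the loop:
```python
from langchain_experimental.agents import create_pandas_dataframe_agent

# `llm` and `df` are the model and dataframe already in use above.
agent = create_pandas_dataframe_agent(
    llm,
    df,
    verbose=True,
    agent_executor_kwargs={"handle_parsing_errors": True},  # forwarded to the AgentExecutor
)
```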
### Suggestion:
_No response_ | Issue: Getting 'An output parsing error occurred' error even after passing 'handle_parsing_errors=True' to the agent | https://api.github.com/repos/langchain-ai/langchain/issues/14947/comments | 6 | 2023-12-20T09:28:03Z | 2024-07-03T16:04:51Z | https://github.com/langchain-ai/langchain/issues/14947 | 2,050,171,291 | 14,947 |
[
"hwchase17",
"langchain"
] | ### System Info
```python
loader1 = CSVLoader(file_path='/home/calvin/下载/test.csv')
Doc = loader1.load()
text_splitter = CharacterTextSplitter(chunk_size=100,chunk_overlap=0)
texts = text_splitter.split_documents(Doc)
embeddings = OpenAIEmbeddings()
db = Chroma.from_documents(texts, embeddings)
retriever = db.as_retriever()
qa = RetrievalQA.from_chain_type(llm=OpenAI(mode="gpt-3.5-turbo"), chain_type="stuff", retriever=retriever)
query = "1501475820"
print(qa.run(query))
```
I ran this code, but I could not use gpt-3.5-turbo, so I tried the openai migrate tool; I exited it, and then I found:

and now it always tells me:

@hwcha
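For reference, gpt-3.5-turbo is a chat model, so it is normally used through `ChatOpenAI` rather than the completion-style `OpenAI` wrapper; a minimal sketch with the rest of the pipeline unchanged:
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)
print(qa.run("1501475820"))
```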
### Who can help?
from langchain.chains import RetrievalQA
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
loader1 = CSVLoader(file_path='/home/calvin/下载/test.csv')
Doc = loader1.load()
text_splitter = CharacterTextSplitter(chunk_size=100,chunk_overlap=0)
texts = text_splitter.split_documents(Doc)
embeddings = OpenAIEmbeddings()
db = Chroma.from_documents(texts, embeddings)
retriever = db.as_retriever()
qa = RetrievalQA.from_chain_type(llm=OpenAI(mode="gpt-3.5-turbo"), chain_type="stuff", retriever=retriever)
query = "1501475820"
print(qa.run(query))
```
### Expected behavior
I want to use gpt-3.5-turbo to query. | openai migrate | https://api.github.com/repos/langchain-ai/langchain/issues/14946/comments | 2 | 2023-12-20T08:08:13Z | 2024-03-27T16:09:12Z | https://github.com/langchain-ai/langchain/issues/14946 | 2,050,054,174 | 14,946
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi, I can't find documentation on how to deploy gpt-4-turbo through langchain.
Could anyone please tell me through which module gpt-4-turbo can be used?
It seems that `langchain.llm` has already been removed from the new version of langchain.
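For reference, a minimal sketch of what appears to work on recent versions; chat models go through `ChatOpenAI` (the `langchain.llms` wrappers are completion-style models), and the model name below is the GPT-4 Turbo preview alias at the time of writing:
```python
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-4-1106-preview", temperature=0)  # GPT-4 Turbo preview
print(llm.predict("Say hello"))
```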
### Suggestion:
_No response_ | Issue: how to deploy gpt-4-turbo through langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14945/comments | 3 | 2023-12-20T05:36:23Z | 2024-05-04T14:21:13Z | https://github.com/langchain-ai/langchain/issues/14945 | 2,049,869,786 | 14,945 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain == 0.0.351
Python == 3.10.6
Running in AWS sagemaker notebook, issue occurred on multiple kernels.
Code worked perfectly yesterday, error occurred upon starting up this morning (12/19/23)
Code worked again upon reverting to 0.0.349
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
pip install -U langchain
from langchain.llms.sagemaker_endpoint import LLMContentHandler
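A possible interim workaround, assuming the class simply moved to `langchain_community` in the recent package split:
```python
# Sketch: import from langchain_community, where the class appears to live after the split.
from langchain_community.llms.sagemaker_endpoint import LLMContentHandler, SagemakerEndpoint
```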
### Expected behavior
Expected behavior is that the import works | ImportError: cannot import name 'LLMContentHandler' from 'langchain.llms.sagemaker_endpoint' occurring with 0.0.351 | https://api.github.com/repos/langchain-ai/langchain/issues/14944/comments | 1 | 2023-12-20T05:20:21Z | 2024-03-27T16:09:07Z | https://github.com/langchain-ai/langchain/issues/14944 | 2,049,856,660 | 14,944 |
[
"hwchase17",
"langchain"
] | How can I make AgentExecutor stream the LLM's on_llm_new_token output instead of the AgentExecutorIterator output?
Currently it streams the AgentExecutorIterator output.
Desired effect: streams LLM on_llm_new_token | How does AgentExecutor make LLM on_llm_new_token most streaming output instead of AgentExecutorIterator? | https://api.github.com/repos/langchain-ai/langchain/issues/14943/comments | 5 | 2023-12-20T03:56:17Z | 2024-03-27T16:09:02Z | https://github.com/langchain-ai/langchain/issues/14943 | 2,049,786,926 | 14,943 |
[
"hwchase17",
"langchain"
] | ### Feature request
There is a new implementation of function call which I think isn't supported by langchain yet.
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/function-calling
### Motivation
AzureChatOpenAI models can't be used by OpenAIFunctionAgent due to the implementation issue.
### Your contribution
I've implemented a workaround here. Hoping for a full solution.
```python
from langchain.chat_models import AzureChatOpenAI
class AzureChatOpenAIWithTooling(AzureChatOpenAI):
"""AzureChatOpenAI with a patch to support functions.
Function calling: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/function-calling
Currently only a single function call is supported.
If multiple function calls are returned by the model, only the first one is used.
"""
def _generate(self, messages, stop=None, run_manager=None, stream=None, **kwargs):
if "functions" in kwargs:
kwargs["tools"] = [
{"type": "function", "function": f} for f in kwargs.pop("functions")
]
return super()._generate(messages, stop, run_manager, stream, **kwargs)
def _create_message_dicts(self, messages, stop):
dicts, params = super()._create_message_dicts(messages, stop)
latest_call_id = {}
for d in dicts:
if "function_call" in d:
# Record the ID for future use
latest_call_id[d["function_call"]["name"]] = d["function_call"]["id"]
# Convert back to tool call
d["tool_calls"] = [
{
"id": d["function_call"]["id"],
"function": {
k: v for k, v in d["function_call"].items() if k != "id"
},
"type": "function",
}
]
d.pop("function_call")
if d["role"] == "function":
# Renaming as tool
d["role"] = "tool"
d["tool_call_id"] = latest_call_id[d["name"]]
return dicts, params
def _create_chat_result(self, response):
result = super()._create_chat_result(response)
for generation in result.generations:
if generation.message.additional_kwargs.get("tool_calls"):
function_calls = [
{**t["function"], "id": t["id"]}
for t in generation.message.additional_kwargs.pop("tool_calls")
]
# Only consider the first one.
generation.message.additional_kwargs["function_call"] = function_calls[
0
]
return result
```
Test code:
```python
def test_azure_chat_openai():
from scripts.aoai_llm import AzureChatOpenAIWithTooling
agent = OpenAIFunctionsAgent.from_llm_and_tools(
llm=AzureChatOpenAIWithTooling(azure_deployment="gpt-35-turbo", api_version="2023-12-01-preview", temperature=0.),
tools=[
StructuredTool.from_function(get_current_weather)
],
)
action = agent.plan([], input="What's the weather like in San Francisco?")
print(action)
tool_output = get_current_weather(**action.tool_input)
result = agent.plan([
(action, tool_output)
], input="What's the weather like in San Francisco?")
print(result)
# Example function hard coded to return the same weather
# In production, this could be your backend API or an external API
def get_current_weather(location: str, unit: str = "fahrenheit"):
"""Get the current weather in a given location"""
if "tokyo" in location.lower():
return json.dumps({"location": "Tokyo", "temperature": "10", "unit": unit})
elif "san francisco" in location.lower():
return json.dumps(
{"location": "San Francisco", "temperature": "72", "unit": unit}
)
elif "paris" in location.lower():
return json.dumps({"location": "Paris", "temperature": "22", "unit": unit})
else:
return json.dumps({"location": location, "temperature": "unknown"})
```
(Note: the original example to ask about weather in three countries simultaneously doesn't work here.)
| Support tool for AzureChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/14941/comments | 1 | 2023-12-20T03:10:10Z | 2024-03-27T16:08:57Z | https://github.com/langchain-ai/langchain/issues/14941 | 2,049,755,836 | 14,941 |
[
"hwchase17",
"langchain"
] | ### System Info
```
Traceback (most recent call last):
File "c:\Users\vivek\OneDrive\Desktop\Hackathon\doc.py", line 43, in <module>
db = FAISS.from_documents(documents=pages, embedding=embeddings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\schema\vectorstore.py", line 510, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\vectorstores\faiss.py", line 911, in from_texts
embeddings = embedding.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\openai.py", line 549, in embed_documents
return self._get_len_safe_embeddings(texts, engine=engine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\openai.py", line 392, in _get_len_safe_embeddings
encoding = tiktoken.encoding_for_model(model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\model.py", line 75, in encoding_for_model
return get_encoding(encoding_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\registry.py", line 63, in get_encoding
enc = Encoding(**constructor())
^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken_ext\openai_public.py", line 64, in cl100k_base
mergeable_ranks = load_tiktoken_bpe(
^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\load.py", line 115, in load_tiktoken_bpe
return {
^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\load.py", line 117, in <dictcomp>
for token, rank in (line.split() for line in contents.splitlines() if line)
^^^^^^^^^^^
ValueError: not enough values to unpack (expected 2, got 1)
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings import AzureOpenAIEmbeddings
from langchain.vectorstores import FAISS
from dotenv import load_dotenv
import openai
import os
#load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
OPENAI_DEPLOYMENT_ENDPOINT = os.getenv("OPENAI_DEPLOYMENT_ENDPOINT")
OPENAI_DEPLOYMENT_NAME = os.getenv("OPENAI_DEPLOYMENT_NAME")
OPENAI_MODEL_NAME = os.getenv("OPENAI_MODEL_NAME")
OPENAI_DEPLOYMENT_VERSION = os.getenv("OPENAI_DEPLOYMENT_VERSION")
OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME = os.getenv("OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME")
OPENAI_ADA_EMBEDDING_MODEL_NAME = os.getenv("OPENAI_ADA_EMBEDDING_MODEL_NAME")
#init Azure OpenAI
openai.api_type = "azure"
openai.api_version = OPENAI_DEPLOYMENT_VERSION
openai.api_base = OPENAI_DEPLOYMENT_ENDPOINT
openai.api_key = OPENAI_API_KEY
# if __name__ == "__main__":
embeddings=AzureOpenAIEmbeddings(deployment=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
model=OPENAI_ADA_EMBEDDING_MODEL_NAME,
azure_endpoint=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_type="azure",
chunk_size=100)
# dataPath = "./data/documentation/"
fileName = r'C:\Users\vivek\OneDrive\Desktop\Hackathon\data\FAQ For LTO Hotels.pdf'
#use langchain PDF loader
loader = PyPDFLoader(fileName)
#split the document into chunks
pages = loader.load_and_split()
#Use Langchain to create the embeddings using text-embedding-ada-002
db = FAISS.from_documents(documents=pages, embedding=embeddings)
#save the embeddings into FAISS vector store
db.save_local(r"C:\Users\vivek\OneDrive\Desktop\Hackathon\index")
from dotenv import load_dotenv
import os
import openai
from langchain.chains import RetrievalQA
from langchain.vectorstores import FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts import PromptTemplate
#load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
OPENAI_DEPLOYMENT_ENDPOINT = os.getenv("OPENAI_DEPLOYMENT_ENDPOINT")
OPENAI_DEPLOYMENT_NAME = os.getenv("OPENAI_DEPLOYMENT_NAME")
OPENAI_MODEL_NAME = os.getenv("OPENAI_MODEL_NAME")
OPENAI_DEPLOYMENT_VERSION = os.getenv("OPENAI_DEPLOYMENT_VERSION")
OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME = os.getenv("OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME")
OPENAI_ADA_EMBEDDING_MODEL_NAME = os.getenv("OPENAI_ADA_EMBEDDING_MODEL_NAME")
def ask_question(qa, question):
result = qa({"query": question})
print("Question:", question)
print("Answer:", result["result"])
def ask_question_with_context(qa, question, chat_history):
query = "what is Azure OpenAI Service?"
result = qa({"question": question, "chat_history": chat_history})
print("answer:", result["answer"])
chat_history = [(query, result["answer"])]
return chat_history
if __name__ == "__main__":
# Configure OpenAI API
openai.api_type = "azure"
openai.api_base = os.getenv('OPENAI_API_BASE')
openai.api_key = os.getenv("OPENAI_API_KEY")
openai.api_version = os.getenv('OPENAI_API_VERSION')
llm = AzureChatOpenAI(deployment_name=OPENAI_DEPLOYMENT_NAME,
model_name=OPENAI_MODEL_NAME,
openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_version=OPENAI_DEPLOYMENT_VERSION,
openai_api_key=OPENAI_API_KEY,
openai_api_type="azure")
embeddings=AzureOpenAIEmbeddings(deployment=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
model=OPENAI_ADA_EMBEDDING_MODEL_NAME,
azure_endpoint=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_type="azure",
chunk_size=100)
# Initialize gpt-35-turbo and our embedding model
#load the faiss vector store we saved into memory
vectorStore = FAISS.load_local(r"C:\Users\vivek\OneDrive\Desktop\Hackathon\index", embeddings)
#use the faiss vector store we saved to search the local document
retriever = vectorStore.as_retriever(search_type="similarity", search_kwargs={"k":2})
QUESTION_PROMPT = PromptTemplate.from_template("""Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:""")
qa = ConversationalRetrievalChain.from_llm(llm=llm,
retriever=retriever,
condense_question_prompt=QUESTION_PROMPT,
return_source_documents=True,
verbose=False)
chat_history = []
while True:
query = input('you: ')
if query == 'q':
break
chat_history = ask_question_with_context(qa, query, chat_history)
```
### Expected behavior
QA | Embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/14939/comments | 1 | 2023-12-20T02:50:23Z | 2024-03-27T16:08:52Z | https://github.com/langchain-ai/langchain/issues/14939 | 2,049,741,707 | 14,939 |
[
"hwchase17",
"langchain"
] | Suppose I have implemented, on top of langchain, a mysql agent for database queries, an apichain for calling external links, and a knowledge-retrieval agent for RAG. How can I dispatch a user's request to the appropriate agent based on the user's input?
 | Agent intent recognition | https://api.github.com/repos/langchain-ai/langchain/issues/14937/comments | 1 | 2023-12-20T02:23:37Z | 2024-03-27T16:08:47Z | https://github.com/langchain-ai/langchain/issues/14937 | 2,049,722,601 | 14,937
[
"hwchase17",
"langchain"
] | ### System Info
```
C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\azure_openai.py:101: UserWarning: As of openai>=1.0.0, Azure endpoints should be specified via the `azure_endpoint` param not `openai_api_base` (or alias `base_url`). Updating `openai_api_base` from <your openai endpoint> to <your openai endpoint>/openai.
warnings.warn(
C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\azure_openai.py:108: UserWarning: As of openai>=1.0.0, if `deployment` (or alias `azure_deployment`) is specified then `openai_api_base` (or alias `base_url`) should not be. Instead use `deployment` (or alias `azure_deployment`) and `azure_endpoint`.
warnings.warn(
C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\azure_openai.py:116: UserWarning: As of openai>=1.0.0, if `openai_api_base` (or alias `base_url`) is specified it is expected to be of the form https://example-resource.azure.openai.com/openai/deployments/example-deployment. Updating <your openai endpoint> to <your openai endpoint>/openai.
warnings.warn(
Traceback (most recent call last):
File "c:\Users\vivek\OneDrive\Desktop\Hackathon\doc.py", line 28, in <module>
embeddings=AzureOpenAIEmbeddings(deployment=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for AzureOpenAIEmbeddings
__root__
base_url and azure_endpoint are mutually exclusive (type=value_error)
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings import AzureOpenAIEmbeddings
from langchain.vectorstores import FAISS
from dotenv import load_dotenv
import openai
import os
#load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
OPENAI_DEPLOYMENT_ENDPOINT = os.getenv("OPENAI_DEPLOYMENT_ENDPOINT")
OPENAI_DEPLOYMENT_NAME = os.getenv("OPENAI_DEPLOYMENT_NAME")
OPENAI_MODEL_NAME = os.getenv("OPENAI_MODEL_NAME")
OPENAI_DEPLOYMENT_VERSION = os.getenv("OPENAI_DEPLOYMENT_VERSION")
OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME = os.getenv("OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME")
OPENAI_ADA_EMBEDDING_MODEL_NAME = os.getenv("OPENAI_ADA_EMBEDDING_MODEL_NAME")
#init Azure OpenAI
openai.api_type = "azure"
openai.api_version = OPENAI_DEPLOYMENT_VERSION
openai.api_base = OPENAI_DEPLOYMENT_ENDPOINT
openai.api_key = OPENAI_API_KEY
# if __name__ == "__main__":
embeddings=AzureOpenAIEmbeddings(deployment=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
model=OPENAI_ADA_EMBEDDING_MODEL_NAME,
openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_type="azure",
chunk_size=100)
# dataPath = "./data/documentation/"
fileName = r'C:\Users\vivek\OneDrive\Desktop\Hackathon\data\FAQ For LTO Hotels.pdf'
#use langchain PDF loader
loader = PyPDFLoader(fileName)
#split the document into chunks
pages = loader.load_and_split()
#Use Langchain to create the embeddings using text-embedding-ada-002
db = FAISS.from_documents(documents=pages, embedding=embeddings)
#save the embeddings into FAISS vector store
db.save_local(r"C:\Users\vivek\OneDrive\Desktop\Hackathon\index")
from dotenv import load_dotenv
import os
import openai
from langchain.chains import RetrievalQA
from langchain.vectorstores import FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts import PromptTemplate
#load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
OPENAI_DEPLOYMENT_ENDPOINT = os.getenv("OPENAI_DEPLOYMENT_ENDPOINT")
OPENAI_DEPLOYMENT_NAME = os.getenv("OPENAI_DEPLOYMENT_NAME")
OPENAI_MODEL_NAME = os.getenv("OPENAI_MODEL_NAME")
OPENAI_DEPLOYMENT_VERSION = os.getenv("OPENAI_DEPLOYMENT_VERSION")
OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME = os.getenv("OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME")
OPENAI_ADA_EMBEDDING_MODEL_NAME = os.getenv("OPENAI_ADA_EMBEDDING_MODEL_NAME")
def ask_question(qa, question):
result = qa({"query": question})
print("Question:", question)
print("Answer:", result["result"])
def ask_question_with_context(qa, question, chat_history):
query = "what is Azure OpenAI Service?"
result = qa({"question": question, "chat_history": chat_history})
print("answer:", result["answer"])
chat_history = [(query, result["answer"])]
return chat_history
if __name__ == "__main__":
# Configure OpenAI API
openai.api_type = "azure"
openai.api_base = os.getenv('OPENAI_API_BASE')
openai.api_key = os.getenv("OPENAI_API_KEY")
openai.api_version = os.getenv('OPENAI_API_VERSION')
llm = AzureChatOpenAI(deployment_name=OPENAI_DEPLOYMENT_NAME,
model_name=OPENAI_MODEL_NAME,
openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_version=OPENAI_DEPLOYMENT_VERSION,
openai_api_key=OPENAI_API_KEY,
openai_api_type="azure")
embeddings=OpenAIEmbeddings(deployment=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
model=OPENAI_ADA_EMBEDDING_MODEL_NAME,
openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_type="azure",
chunk_size=1)
# Initialize gpt-35-turbo and our embedding model
#load the faiss vector store we saved into memory
vectorStore = FAISS.load_local(r"C:\Users\vivek\OneDrive\Desktop\Hackathon\index", embeddings)
#use the faiss vector store we saved to search the local document
retriever = vectorStore.as_retriever(search_type="similarity", search_kwargs={"k":2})
QUESTION_PROMPT = PromptTemplate.from_template("""Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:""")
qa = ConversationalRetrievalChain.from_llm(llm=llm,
retriever=retriever,
condense_question_prompt=QUESTION_PROMPT,
return_source_documents=True,
verbose=False)
chat_history = []
while True:
query = input('you: ')
if query == 'q':
break
chat_history = ask_question_with_context(qa, query, chat_history)
```
### Expected behavior
QA | AZure Openai Embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/14934/comments | 8 | 2023-12-20T01:40:55Z | 2024-05-23T16:34:06Z | https://github.com/langchain-ai/langchain/issues/14934 | 2,049,674,847 | 14,934 |
[
"hwchase17",
"langchain"
] | ### System Info
```
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\vivek\OneDrive\Desktop\SOPPOC\flask_app.py", line 43, in chat
return RCXStreakanswer(input)
^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\vivek\OneDrive\Desktop\SOPPOC\RCX_Streak.py", line 53, in RCXStreakanswer
openAIEmbedd = FAISS.from_documents(texts, embeddings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\schema\vectorstore.py", line 510, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\vectorstores\faiss.py", line 911, in from_texts
embeddings = embedding.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\openai.py", line 549, in embed_documents
return self._get_len_safe_embeddings(texts, engine=engine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\openai.py", line 392, in _get_len_safe_embeddings
encoding = tiktoken.encoding_for_model(model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\model.py", line 97, in encoding_for_model
return get_encoding(encoding_name_for_model(model_name))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\registry.py", line 73, in get_encoding
enc = Encoding(**constructor())
^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken_ext\openai_public.py", line 64, in cl100k_base
mergeable_ranks = load_tiktoken_bpe(
^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\load.py", line 124, in load_tiktoken_bpe
return {
^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\load.py", line 126, in <dictcomp>
for token, rank in (line.split() for line in contents.splitlines() if line)
^^^^^^^^^^^
ValueError: not enough values to unpack (expected 2, got 1)
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
loader = Docx2txtLoader(doc_path)
documents.extend(loader.load())
content = documents
text_splitter = RecursiveCharacterTextSplitter(
chunk_size = 100,
chunk_overlap = 20,
separators=["\n\n", "\n", "."]
)
texts = text_splitter.split_documents(content)
print(texts)
print()
embeddings = OpenAIEmbeddings()
openAIEmbedd = FAISS.from_documents(texts, embeddings)
print(openAIEmbedd)
prompt_template = """Given the following context and a question, generate an answer.
Based on user input extract only data for the given question from context. \
CONTEXT: {context}
QUESTION: {question}"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
retriever_openai = openAIEmbedd.as_retriever(search_kwargs={"k": 2})
print(retriever_openai)
chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever_openai,
return_source_documents=True,
chain_type_kwargs={"prompt": PROMPT})
ans=chain(user_message)
output= ans['result']
return output
```
### Expected behavior
should return answer | ValueError: not enough values to unpack (expected 2, got 1) | https://api.github.com/repos/langchain-ai/langchain/issues/14918/comments | 1 | 2023-12-19T17:30:51Z | 2024-03-26T16:08:41Z | https://github.com/langchain-ai/langchain/issues/14918 | 2,049,130,491 | 14,918 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain: v0.0.350
OS: Linux
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
The problem occurs when you use Azure with a GPT-4 model, because the Azure API will always respond with `gpt-4` as the model name. You can also see this in the official Microsoft documentation: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/gpt-with-vision#output. It will therefore calculate the wrong price: if you use Turbo, the reported cost will be about 3x what it actually should be.
Code to Reproduce:
```python
llm = AzureChatOpenAI(
deployment_name="GPT4-TURBO"
)
with get_openai_callback() as cb:
# Run LLM
print((cb.total_tokens / 1000) * 0.01, "is instead", cb.total_cost)
```
### Expected behavior
It should return the correct price. | Issue when working with Azure and OpenAI Callback | https://api.github.com/repos/langchain-ai/langchain/issues/14912/comments | 4 | 2023-12-19T15:33:15Z | 2024-06-14T23:28:11Z | https://github.com/langchain-ai/langchain/issues/14912 | 2,048,902,467 | 14,912 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I have a document, which contains general text and tables.
I embedded this document using LangChain to build a bot with Node.js.
The bot answers correctly for general text in the document, but gives incorrect answers for table data.
How do I update?
### Suggestion:
_No response_ | How to embed the table data? | https://api.github.com/repos/langchain-ai/langchain/issues/14911/comments | 2 | 2023-12-19T15:11:34Z | 2024-03-26T16:08:36Z | https://github.com/langchain-ai/langchain/issues/14911 | 2,048,861,237 | 14,911 |
[
"hwchase17",
"langchain"
] | ### System Info
python: 3.11.4
langchain: 0.0.351
requests: 2.31.0
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
We have enabled the LangSmith tracing and after upgrading LangChain from `0.0.266` to `0.0.351` we started getting the following warnings:
```
Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2423)'))': /runs
```
We also get the same warnings when we try to update the feedback from the LangSmith client.
```
Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'RemoteDisconnected('Remote end closed connection without response')': /sessions?limit=1&name=mirror
```
Unfortunately, there is no additional stack trace.
This behavior is not consistent but it occurs randomly.
### Expected behavior
The expected behavior is all the runs to be propagated to the LangSmith and does not have this kind of warning. | `urllib3.connectionpool` warnings after upgrading to LangChain 0.0.351 | https://api.github.com/repos/langchain-ai/langchain/issues/14909/comments | 1 | 2023-12-19T14:53:36Z | 2024-03-26T16:08:31Z | https://github.com/langchain-ai/langchain/issues/14909 | 2,048,823,526 | 14,909 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Ref: https://python.langchain.com/docs/integrations/providers/wandb_tracking
> Note: the WandbCallbackHandler is being deprecated in favour of the WandbTracer . In future please use the WandbTracer as it is more flexible and allows for more granular logging. To know more about the WandbTracer refer to the [agent_with_wandb_tracing.html](https://python.langchain.com/en/latest/integrations/agent_with_wandb_tracing.html) notebook or use the following [colab notebook](http://wandb.me/prompts-quickstart). To know more about Weights & Biases Prompts refer to the following [prompts documentation](https://docs.wandb.ai/guides/prompts).
The link to the `agent_with_wandb_tracing.html` notebook results in an HTTP 404.
### Idea or request for content:
_No response_ | Link to agent_with_wandb_tracing.html notebook is broken | https://api.github.com/repos/langchain-ai/langchain/issues/14905/comments | 1 | 2023-12-19T14:22:07Z | 2024-03-26T16:08:26Z | https://github.com/langchain-ai/langchain/issues/14905 | 2,048,761,131 | 14,905 |
[
"hwchase17",
"langchain"
] | Cannot import LLMContentHandler
langchain: 0.0.351
python: 3.9
To reproduce:
``` python
from langchain.llms.sagemaker_endpoint import LLMContentHandler
```
Issue could be resolved by updating
https://github.com/langchain-ai/langchain/blob/583696732cbaa3d1cf3a3a9375539a7e8785850c/libs/langchain/langchain/llms/sagemaker_endpoint.py#L1C5-L7
as follows:
``` python
from langchain_community.llms.sagemaker_endpoint import (
LLMContentHandler,
SagemakerEndpoint,
)
__all__ = [
"SagemakerEndpoint",
"LLMContentHandler"
]
```
| Issue: cannot import name 'LLMContentHandler' from 'langchain.llms.sagemaker_endpoint | https://api.github.com/repos/langchain-ai/langchain/issues/14904/comments | 1 | 2023-12-19T13:53:31Z | 2023-12-19T15:00:33Z | https://github.com/langchain-ai/langchain/issues/14904 | 2,048,707,893 | 14,904 |
[
"hwchase17",
"langchain"
] | ### System Info
In chain.py, the relevant code is as below:
```
def get_retriever(text):
_query = text
llm = ...
chroma_docs = [...]
_model_name, _embedding = get_embedding_HuggingFace()
chroma_vdb = Chroma.from_documents(chroma_docs, _embedding)
document_content_description = "..."
metadata_field_info = [...]
retriever = get_structured_retriever(llm, chroma_vdb, document_content_description, metadata_field_info, _query)
return retriever
chain = (
RunnableParallel({
"context": itemgetter("question") | RunnableLambda(get_retriever),
"question": RunnablePassthrough()
})
| prompt
| llm
| StrOutputParser()
)
```
When running the playground, there is no input frame (shown below):

But there is no error message from langchain serve.
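One thing that sometimes helps, for reference (a sketch; the playground needs an explicit input schema when it cannot infer one from `itemgetter`/`RunnableLambda`, though I have not confirmed that this is the cause here):
```python
from langchain.pydantic_v1 import BaseModel


class ChainInput(BaseModel):
    question: str


# Declaring the input type explicitly often lets the playground render an input form.
chain = chain.with_types(input_type=ChainInput)
```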
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
See the code above.
### Expected behavior
Should have the input box | No input box show up when running the playground | https://api.github.com/repos/langchain-ai/langchain/issues/14902/comments | 1 | 2023-12-19T13:16:06Z | 2024-03-26T16:08:21Z | https://github.com/langchain-ai/langchain/issues/14902 | 2,048,640,189 | 14,902 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Current:
```
class AzureChatOpenAI(ChatOpenAI):
"""`Azure OpenAI` Chat Completion API.
To use this class you
must have a deployed model on Azure OpenAI. Use `deployment_name` in the
constructor to refer to the "Model deployment name" in the Azure portal.
In addition, you should have the ``openai`` python package installed, and the
following environment variables set or passed in constructor in lower case:
- ``AZURE_OPENAI_API_KEY``
- ``AZURE_OPENAI_API_ENDPOINT``
- ``AZURE_OPENAI_AD_TOKEN``
- ``OPENAI_API_VERSION``
- ``OPENAI_PROXY``
```
### Idea or request for content:
It should be
```
class AzureChatOpenAI(ChatOpenAI):
"""`Azure OpenAI` Chat Completion API.
To use this class you
must have a deployed model on Azure OpenAI. Use `deployment_name` in the
constructor to refer to the "Model deployment name" in the Azure portal.
In addition, you should have the ``openai`` python package installed, and the
following environment variables set or passed in constructor in lower case:
- ``AZURE_OPENAI_API_KEY``
- ``AZURE_OPENAI_ENDPOINT`` <---------- **Changed**
- ``AZURE_OPENAI_AD_TOKEN``
- ``OPENAI_API_VERSION``
- ``OPENAI_PROXY``
``` | DOC: Wrong parameter name in doc string | https://api.github.com/repos/langchain-ai/langchain/issues/14901/comments | 1 | 2023-12-19T12:54:07Z | 2024-03-26T16:08:16Z | https://github.com/langchain-ai/langchain/issues/14901 | 2,048,603,807 | 14,901 |
[
"hwchase17",
"langchain"
] | ### System Info
The `description` attribute of the function parameters described in our Pydantic *v2* model is missing from the output of `convert_to_openai_function` because it does not recognize a Pydantic v2 `BaseModel` as a valid v1 `BaseModel`.
https://github.com/langchain-ai/langchain/blob/16399fd61d7744c529cca46464489e467b4b7741/libs/langchain/langchain/chains/openai_functions/base.py#L156-L161
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains.openai_functions.base import convert_to_openai_function
from pydantic.v1 import BaseModel as BaseModelV1, Field as FieldV1
from pydantic import BaseModel as BaseModelV2, Field as FieldV2
class FuncV1(BaseModelV1):
"Pydantic v1 model."
output: str = FieldV1(description="A output text")
class FuncV2(BaseModelV2):
"Pydantic v2 model."
output: str = FieldV2(description="A output text")
print(convert_to_openai_function(FuncV1))
{'name': 'FuncV1', 'description': 'Pydantic v1 model.', 'parameters': {'title': 'FuncV1', 'description': 'Pydantic v1 model.', 'type': 'object', 'properties': {'output': {'title': 'Output', 'description': 'A output text', 'type': 'string'}}, 'required': ['output']}}
print(convert_to_openai_function(FuncV2))
{'name': 'FuncV2', 'description': 'Pydantic v2 model.', 'parameters': {'type': 'object', 'properties': {'output': {'type': 'string'}}, 'required': ['output']}}
```
### Expected behavior
Add `description` attribute appeared in Pydantic v2 model. | `convert_to_openai_function` drops `description` for each parameter | https://api.github.com/repos/langchain-ai/langchain/issues/14899/comments | 9 | 2023-12-19T10:51:41Z | 2024-06-01T00:07:38Z | https://github.com/langchain-ai/langchain/issues/14899 | 2,048,405,277 | 14,899 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am working on implementing streaming for my ConversationRetrieval chain calls, and I plan to leverage the AsyncIteratorCallbackHandler along with its aiter method. While reviewing the source code, I noticed that the response from the on_llm_end method is not currently added to the queue. My goal is to enhance the aiter method so that the response is also included in the queue. This way, I can stream the final response to my client and use it to update cached data in my frontend. Additionally, I intend to leverage the on_llm_end method to update my database with the received response. Could you guide me on how to modify the aiter method within the AsyncIteratorCallbackHandler to align with these requirements?
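For reference, a rough sketch of the kind of change I have in mind: a subclass that also pushes the final generation onto the queue before the iterator is closed (this assumes the handler exposes an `asyncio.Queue` named `queue` and a `done` event, as in the current implementation):
```python
from langchain.callbacks import AsyncIteratorCallbackHandler


class FinalAnswerIteratorHandler(AsyncIteratorCallbackHandler):
    """Also yield the final generation text through aiter() before finishing."""

    async def on_llm_end(self, response, **kwargs) -> None:
        final_text = response.generations[0][0].text
        self.queue.put_nowait(final_text)              # enqueue the complete answer as a last item
        await super().on_llm_end(response, **kwargs)   # then signal completion as usual
```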
### Suggestion:
_No response_ | Issue: Enhancing Streaming and Database Integration in ConversationRetrieval with AsyncIteratorCallbackHandler | https://api.github.com/repos/langchain-ai/langchain/issues/14898/comments | 5 | 2023-12-19T10:08:44Z | 2024-04-03T16:08:04Z | https://github.com/langchain-ai/langchain/issues/14898 | 2,048,330,418 | 14,898 |
[
"hwchase17",
"langchain"
] | ### Feature request
Currently (0.0.350) the Xata integration always creates new records with `XataVectorStore.from_documents`.
Provide the option to update embeddings and column content of existing record ids.
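Purely as an illustration of the requested behaviour (nothing below exists yet; the `ids` parameter is hypothetical and connection arguments are omitted):
```python
# Hypothetical API shape: passing explicit ids would update matching records
# instead of always inserting new ones.
vector_store = XataVectorStore.from_documents(
    documents,
    embeddings,
    ids=[doc.metadata["id"] for doc in documents],  # proposed upsert keys
)
```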
### Motivation
This will provide the capability to update Xata Vector Stores.
### Your contribution
Xata development team plans to contribute the enhancement. | Update records in the Xata integration | https://api.github.com/repos/langchain-ai/langchain/issues/14897/comments | 1 | 2023-12-19T10:00:54Z | 2024-03-26T16:08:11Z | https://github.com/langchain-ai/langchain/issues/14897 | 2,048,316,746 | 14,897 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.350
langchain-community 0.0.3
langchain-core 0.1.1
yandexcloud 0.248.0
Python 3.9.0 Windows 10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps
```bash
!pip install yandexcloud langchain
```
```python
from langchain.chains import LLMChain
from langchain.llms import YandexGPT
from langchain.prompts import PromptTemplate
import os
os.environ["YC_IAM_TOKEN"] = "xxxxxxxxxxxxxxxxxxxx"
os.environ["YC_FOLDER_ID"] = "yyyyyyyyyyyyyyyyyyyy"
llm = YandexGPT()
template = "What is the capital of {country}?"
prompt = PromptTemplate(template=template, input_variables=["country"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
country = "Russia"
llm_chain.run(country)
```
Error
```
Requirement already satisfied: yandexcloud in [c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages](file:///C:/users/achme/projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages) (0.248.0)
Collecting langchain
Downloading langchain-0.0.350-py3-none-any.whl.metadata (13 kB)
Requirement already satisfied: cryptography>=2.8 in [c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages](file:///C:/users/achme/projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages) (from yandexcloud) (41.0.7)
Requirement already satisfied: grpcio>=1.56.2 in [c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages](file:///C:/users/achme/projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages) (from yandexcloud) (1.59.3)
Requirement already satisfied: protobuf>=4.23.4 in [c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages](file:///C:/users/achme/projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages) (from yandexcloud) (4.25.1)
Requirement already satisfied: googleapis-common-protos>=1.59.1 in [c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages](file:///C:/users/achme/projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages) (from yandexcloud) (1.62.0)
Requirement already satisfied: pyjwt>=1.7.1 in [c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages](file:///C:/users/achme/projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages) (from yandexcloud) (2.8.0)
Requirement already satisfied: requests>=2.22.0 in [c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages](file:///C:/users/achme/projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages) (from yandexcloud) (2.31.0)
Requirement already satisfied: six>=1.14.0 in [c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages](file:///C:/users/achme/projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages) (from yandexcloud) (1.16.0)
Requirement already satisfied: PyYAML>=5.3 in [c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages](file:///C:/users/achme/projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages) (from langchain) (6.0.1)
Requirement already satisfied: SQLAlchemy<3,>=1.4 in [c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages](file:///C:/users/achme/projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages) (from langchain) (2.0.23)
Requirement already satisfied: aiohttp<4.0.0,>=3.8.3 in [c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages](file:///C:/users/achme/projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages) (from langchain) (3.9.0)
Requirement already satisfied: async-timeout<5.0.0,>=4.0.0 in [c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages](file:///C:/users/achme/projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages) (from langchain) (4.0.3)
Requirement already satisfied: dataclasses-json<0.7,>=0.5.7 in [c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages](file:///C:/users/achme/projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages) (from langchain) (0.6.2)
Collecting jsonpatch<2.0,>=1.33 (from langchain)
Downloading jsonpatch-1.33-py2.py3-none-any.whl.metadata (3.0 kB)
Collecting langchain-community<0.1,>=0.0.2 (from langchain)
Downloading langchain_community-0.0.3-py3-none-any.whl.metadata (7.0 kB)
Collecting langchain-core<0.2,>=0.1 (from langchain)
Downloading langchain_core-0.1.1-py3-none-any.whl.metadata (4.0 kB)
Collecting langsmith<0.1.0,>=0.0.63 (from langchain)
Downloading langsmith-0.0.71-py3-none-any.whl.metadata (10 kB)
Requirement already satisfied: numpy<2,>=1 in [c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages](file:///C:/users/achme/projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages) (from langchain) (1.23.5)
Requirement already satisfied: pydantic<3,>=1 in [c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages](file:///C:/users/achme/projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages) (from langchain) (1.10.13)
...
---------------------------------------- 46.2/46.2 kB 2.2 MB/s eta 0:00:00
Downloading jsonpointer-2.4-py2.py3-none-any.whl (7.8 kB)
Installing collected packages: jsonpointer, langsmith, jsonpatch, langchain-core, langchain-community, langchain
Successfully installed jsonpatch-1.33 jsonpointer-2.4 langchain-0.0.350 langchain-community-0.0.3 langchain-core-0.1.1 langsmith-0.0.71
[notice] A new release of pip is available: 23.3.1 -> 23.3.2
[notice] To update, run: python.exe -m pip install --upgrade pip
---------------------------------------------------------------------------
_MultiThreadedRendezvous Traceback (most recent call last)
Cell In[18], [line 5](vscode-notebook-cell:?execution_count=18&line=5)
[3](vscode-notebook-cell:?execution_count=18&line=3) llm_chain = LLMChain(prompt=prompt, llm=llm)
[4](vscode-notebook-cell:?execution_count=18&line=4) country = "Russia"
----> [5](vscode-notebook-cell:?execution_count=18&line=5) llm_chain.run(country)
File [c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain\chains\base.py:507](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:507), in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
[505](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:505) if len(args) != 1:
[506](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:506) raise ValueError("`run` supports only one positional argument.")
--> [507](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:507) return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
[508](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:508) _output_key
[509](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:509) ]
[511](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:511) if kwargs and not args:
[512](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:512) return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
[513](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:513) _output_key
[514](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:514) ]
File [c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain\chains\base.py:312](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:312), in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
[310](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:310) except BaseException as e:
[311](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:311) run_manager.on_chain_error(e)
--> [312](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:312) raise e
[313](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:313) run_manager.on_chain_end(outputs)
[314](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:314) final_outputs: Dict[str, Any] = self.prep_outputs(
[315](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:315) inputs, outputs, return_only_outputs
[316](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:316) )
File [c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain\chains\base.py:306](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:306), in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
[299](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:299) run_manager = callback_manager.on_chain_start(
[300](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:300) dumpd(self),
[301](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:301) inputs,
[302](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:302) name=run_name,
[303](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:303) )
[304](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:304) try:
[305](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:305) outputs = (
--> [306](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:306) self._call(inputs, run_manager=run_manager)
[307](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:307) if new_arg_supported
[308](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:308) else self._call(inputs)
[309](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:309) )
[310](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:310) except BaseException as e:
[311](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/base.py:311) run_manager.on_chain_error(e)
File [c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain\chains\llm.py:103](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:103), in LLMChain._call(self, inputs, run_manager)
[98](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:98) def _call(
[99](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:99) self,
[100](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:100) inputs: Dict[str, Any],
[101](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:101) run_manager: Optional[CallbackManagerForChainRun] = None,
[102](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:102) ) -> Dict[str, str]:
--> [103](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:103) response = self.generate([inputs], run_manager=run_manager)
[104](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:104) return self.create_outputs(response)[0]
File [c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain\chains\llm.py:115](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:115), in LLMChain.generate(self, input_list, run_manager)
[113](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:113) callbacks = run_manager.get_child() if run_manager else None
[114](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:114) if isinstance(self.llm, BaseLanguageModel):
--> [115](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:115) return self.llm.generate_prompt(
[116](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:116) prompts,
[117](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:117) stop,
[118](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:118) callbacks=callbacks,
[119](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:119) **self.llm_kwargs,
[120](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:120) )
[121](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:121) else:
[122](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:122) results = self.llm.bind(stop=stop, **self.llm_kwargs).batch(
[123](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:123) cast(List, prompts), {"callbacks": callbacks}
[124](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain/chains/llm.py:124) )
File [c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_core\language_models\llms.py:516](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:516), in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
[508](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:508) def generate_prompt(
[509](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:509) self,
[510](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:510) prompts: List[PromptValue],
(...)
[513](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:513) **kwargs: Any,
[514](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:514) ) -> LLMResult:
[515](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:515) prompt_strings = [p.to_string() for p in prompts]
--> [516](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:516) return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File [c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_core\language_models\llms.py:666](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:666), in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs)
[650](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:650) raise ValueError(
[651](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:651) "Asked to cache, but no cache found at `langchain.cache`."
[652](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:652) )
[653](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:653) run_managers = [
[654](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:654) callback_manager.on_llm_start(
[655](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:655) dumpd(self),
(...)
[664](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:664) )
[665](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:665) ]
--> [666](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:666) output = self._generate_helper(
[667](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:667) prompts, stop, run_managers, bool(new_arg_supported), **kwargs
[668](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:668) )
[669](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:669) return output
[670](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:670) if len(missing_prompts) > 0:
File [c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_core\language_models\llms.py:553](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:553), in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
[551](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:551) for run_manager in run_managers:
[552](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:552) run_manager.on_llm_error(e, response=LLMResult(generations=[]))
--> [553](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:553) raise e
[554](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:554) flattened_outputs = output.flatten()
[555](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:555) for manager, flattened_output in zip(run_managers, flattened_outputs):
File [c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_core\language_models\llms.py:540](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:540), in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
[530](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:530) def _generate_helper(
[531](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:531) self,
[532](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:532) prompts: List[str],
(...)
[536](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:536) **kwargs: Any,
[537](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:537) ) -> LLMResult:
[538](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:538) try:
[539](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:539) output = (
--> [540](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:540) self._generate(
[541](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:541) prompts,
[542](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:542) stop=stop,
[543](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:543) # TODO: support multiple run managers
[544](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:544) run_manager=run_managers[0] if run_managers else None,
[545](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:545) **kwargs,
[546](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:546) )
[547](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:547) if new_arg_supported
[548](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:548) else self._generate(prompts, stop=stop)
[549](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:549) )
[550](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:550) except BaseException as e:
[551](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:551) for run_manager in run_managers:
File [c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_core\language_models\llms.py:1069](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1069), in LLM._generate(self, prompts, stop, run_manager, **kwargs)
[1066](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1066) new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
[1067](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1067) for prompt in prompts:
[1068](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1068) text = (
-> [1069](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1069) self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
[1070](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1070) if new_arg_supported
[1071](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1071) else self._call(prompt, stop=stop, **kwargs)
[1072](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1072) )
[1073](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1073) generations.append([Generation(text=text)])
[1074](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1074) return LLMResult(generations=generations)
File [c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_community\llms\yandex.py:131](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_community/llms/yandex.py:131), in YandexGPT._call(self, prompt, stop, run_manager, **kwargs)
[129](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_community/llms/yandex.py:129) metadata = (("authorization", f"Api-Key {self.api_key}"),)
[130](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_community/llms/yandex.py:130) res = stub.Instruct(request, metadata=metadata)
--> [131](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_community/llms/yandex.py:131) text = list(res)[0].alternatives[0].text
[132](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_community/llms/yandex.py:132) if stop is not None:
[133](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_community/llms/yandex.py:133) text = enforce_stop_tokens(text, stop)
File [c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\grpc\_channel.py:541](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/grpc/_channel.py:541), in _Rendezvous.__next__(self)
[540](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/grpc/_channel.py:540) def __next__(self):
--> [541](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/grpc/_channel.py:541) return self._next()
File [c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\grpc\_channel.py:967](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/grpc/_channel.py:967), in _MultiThreadedRendezvous._next(self)
[965](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/grpc/_channel.py:965) raise StopIteration()
[966](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/grpc/_channel.py:966) elif self._state.code is not None:
--> [967](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/grpc/_channel.py:967) raise self
_MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.UNAUTHENTICATED
details = "You have to specify folder ID for user account"
debug_error_string = "UNKNOWN:Error received from peer ipv4:158.160.54.160:443 {created_time:"2023-12-18T13:29:25.0934987+00:00", grpc_status:16, grpc_message:"You have to specify folder ID for user account"}"
>
```
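A possible workaround I have not verified (based on the `Api-Key` branch visible in `yandex.py` in the traceback above): authenticating with a service-account API key instead of an IAM token, since the folder-ID complaint appears to be specific to user accounts:

```python
from langchain.llms import YandexGPT

# untested: use a service-account API key; the api-key path in yandex.py
# sends "Api-Key ..." instead of a bearer IAM token
llm = YandexGPT(api_key="xxxxxxxxxxxxxxxxxxxx")
```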
### Expected behavior
the model responds successfully | YandexGPT crashes with error "You have to specify folder ID for user account" | https://api.github.com/repos/langchain-ai/langchain/issues/14896/comments | 3 | 2023-12-19T09:29:50Z | 2023-12-19T09:56:08Z | https://github.com/langchain-ai/langchain/issues/14896 | 2,048,263,486 | 14,896 |
[
"hwchase17",
"langchain"
] | ### System Info
Hi folks,
I am running into an issue where the agent sometimes gives exactly the same answer that it produced for the previous question. Here is a screenshot of my replies:

Since it returns the exact same string as the answer, I want to add a manual check: whenever the new answer is identical to the previous one, I will query the agent again for a new response.
I want to know how I can access the messages from the chat history.
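For reference, a minimal sketch of the kind of check I have in mind, assuming a standard `ConversationBufferMemory` is attached to the agent (the executor and variable names are placeholders):

```python
from langchain.memory import ConversationBufferMemory
from langchain.schema import AIMessage

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
# ... agent_executor is built elsewhere with memory=memory ...

# the raw message objects live on memory.chat_memory.messages
previous_ai_replies = [
    m.content for m in memory.chat_memory.messages if isinstance(m, AIMessage)
]

answer = agent_executor.run(question)
if previous_ai_replies and answer == previous_ai_replies[-1]:
    # exact repeat of the last reply -> query the agent again
    answer = agent_executor.run(question)
```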
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
It is a rare incident so no exact method to catch it
### Expected behavior
Just a code piece to check the previous replies | Comparing the agent reply with the previous conversation | https://api.github.com/repos/langchain-ai/langchain/issues/14895/comments | 2 | 2023-12-19T09:29:35Z | 2024-03-26T16:08:06Z | https://github.com/langchain-ai/langchain/issues/14895 | 2,048,263,074 | 14,895 |
[
"hwchase17",
"langchain"
] | ### System Info
python 3.10
langchain 0.0.350
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
output_parser = LineListOutputParser()
QUERY_PROMPT = PromptTemplate(
    input_variables=["question"],
    template="""You are an AI language model assistant. Your task is to generate five
different versions of the given user question to retrieve relevant documents from a vector
database. By generating multiple perspectives on the user question, your goal is to help
the user overcome some of the limitations of the distance-based similarity search.
Provide these alternative questions separated by newlines.
Original question: {question}""",
)
llm_chain = LLMChain(llm=self.llm, prompt=QUERY_PROMPT, output_parser=output_parser)
db = self.embeddings_dict[doc_id].as_retriever(search_kwargs={"k": 15})
multi_query_retriever = MultiQueryRetriever.from_llm(retriever=db, llm=self.llm)
relevant_documents = multi_query_retriever.get_relevant_documents(query)
```
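For completeness: `LineListOutputParser` is a custom class that isn't shown above. A minimal stand-in (my own sketch, not the original class) that splits the LLM output into one query per line:

```python
from typing import List
from langchain.schema import BaseOutputParser

class LineListOutputParser(BaseOutputParser):
    # split the LLM output into one candidate query per non-empty line
    def parse(self, text: str) -> List[str]:
        return [line.strip() for line in text.strip().split("\n") if line.strip()]
```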
### Expected behavior
limit the maximum number of parallel llm call, for example, 4 | MultiQueryRetriever consume too much GPU mem, request to limit the maximum llm call | https://api.github.com/repos/langchain-ai/langchain/issues/14894/comments | 1 | 2023-12-19T08:44:18Z | 2024-03-26T16:08:01Z | https://github.com/langchain-ai/langchain/issues/14894 | 2,048,190,743 | 14,894 |
[
"hwchase17",
"langchain"
] | ### System Info
Python:3.11-slim-bullseye base docker image
Langchain version: 0.0.348
qdrant-client: 1.7.0
Qdrant database 1.7.1 (deployed on AWS cluster)
Reproduces regardless of prefer grpc true or false
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
1. Use ConversationalRetrievalChain with Qdrant vectordb
2. Use the async `acall` interface (see the sketch after this list)
3. Maybe wait some idle time (15 min?)
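A minimal sketch of the setup in steps 1–2 (endpoint, collection and model names are placeholders; run inside an async handler):

```python
from qdrant_client import QdrantClient
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Qdrant

client = QdrantClient(url="https://my-qdrant.example.com:6333", prefer_grpc=True)
vectordb = Qdrant(client=client, collection_name="docs", embeddings=OpenAIEmbeddings())
chain = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0), retriever=vectordb.as_retriever()
)

# step 2: the async interface
result = await chain.acall({"question": "...", "chat_history": []})
```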
You will experience a lot of errors from Qdrant:
AioRpcError of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "recvmsg:Connection reset by peer" debug_error_string = "UNKNOWN:Error received from peer {created_time}" >, type: AioRpcError
It's ok after retry of the request but this causes significant delay in response
### Expected behavior
Connection recovery for the vector DB should be handled by LangChain internally; ideally, the cause of the connection resets should be identified and resolved | AIORpcError connection reset errors from Qdrant | https://api.github.com/repos/langchain-ai/langchain/issues/14891/comments | 5 | 2023-12-19T08:02:36Z | 2024-05-01T16:05:53Z | https://github.com/langchain-ai/langchain/issues/14891 | 2,048,126,563 | 14,891
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I construct a chain that uses a retriever with a certain template, but I want to switch to a different template when the retriever returns nothing.
Is this possible in LangChain, or does it have to be hand-rolled?
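To illustrate what I mean, a rough sketch of the branching I am after (my assumption is that `RunnableBranch` is the right tool; the retriever, prompts and llm are defined elsewhere):

```python
from langchain_core.runnables import RunnableBranch, RunnablePassthrough

setup = {"docs": retriever, "question": RunnablePassthrough()}

chain = setup | RunnableBranch(
    # retriever found something -> normal RAG template
    (lambda x: len(x["docs"]) > 0, rag_prompt | llm),
    # retriever came back empty -> fallback template
    fallback_prompt | llm,
)

result = chain.invoke("my question")
```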
### Suggestion:
_No response_ | Can I switch my template during my Chain working? | https://api.github.com/repos/langchain-ai/langchain/issues/14890/comments | 1 | 2023-12-19T07:25:34Z | 2024-03-26T16:07:57Z | https://github.com/langchain-ai/langchain/issues/14890 | 2,048,077,349 | 14,890 |
[
"hwchase17",
"langchain"
] | I've deployed the 'mistralai/Mistral-7B-v0.1' model to SageMaker and want to use it with load_qa_chain.
```python
from langchain.chains.question_answering import load_qa_chain
from langchain.llms.sagemaker_endpoint import SagemakerEndpoint

content_handler = ContentHandler()

llm = SagemakerEndpoint(
    endpoint_name=endpoint_name,
    region_name="eu-west-2",
    model_kwargs={
        "temperature": 0,
        "maxTokens": 1024,
        "numResults": 2
    },
    content_handler=content_handler
)

chain = load_qa_chain(llm=llm, chain_type="stuff")
```
Now, running the chain for document QA:
`chain.run(input_documents = docs, question = "what is dollarama")`
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[56], line 1
----> 1 chain.run(input_documents = docs, question = "what is dollarama")
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/base.py:506, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
501 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
502 _output_key
503 ]
505 if kwargs and not args:
--> 506 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
507 _output_key
508 ]
510 if not kwargs and not args:
511 raise ValueError(
512 "`run` supported with either positional arguments or keyword arguments,"
513 " but none were provided."
514 )
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/base.py:306, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
304 except BaseException as e:
305 run_manager.on_chain_error(e)
--> 306 raise e
307 run_manager.on_chain_end(outputs)
308 final_outputs: Dict[str, Any] = self.prep_outputs(
309 inputs, outputs, return_only_outputs
310 )
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/base.py:300, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
293 run_manager = callback_manager.on_chain_start(
294 dumpd(self),
295 inputs,
296 name=run_name,
297 )
298 try:
299 outputs = (
--> 300 self._call(inputs, run_manager=run_manager)
301 if new_arg_supported
302 else self._call(inputs)
303 )
304 except BaseException as e:
305 run_manager.on_chain_error(e)
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py:119, in BaseCombineDocumentsChain._call(self, inputs, run_manager)
117 # Other keys are assumed to be needed for LLM prediction
118 other_keys = {k: v for k, v in inputs.items() if k != self.input_key}
--> 119 output, extra_return_dict = self.combine_docs(
120 docs, callbacks=_run_manager.get_child(), **other_keys
121 )
122 extra_return_dict[self.output_key] = output
123 return extra_return_dict
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py:171, in StuffDocumentsChain.combine_docs(self, docs, callbacks, **kwargs)
169 inputs = self._get_inputs(docs, **kwargs)
170 # Call predict on the LLM.
--> 171 return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/llm.py:257, in LLMChain.predict(self, callbacks, **kwargs)
242 def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:
243 """Format prompt with kwargs and pass to LLM.
244
245 Args:
(...)
255 completion = llm.predict(adjective="funny")
256 """
--> 257 return self(kwargs, callbacks=callbacks)[self.output_key]
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/base.py:306, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
304 except BaseException as e:
305 run_manager.on_chain_error(e)
--> 306 raise e
307 run_manager.on_chain_end(outputs)
308 final_outputs: Dict[str, Any] = self.prep_outputs(
309 inputs, outputs, return_only_outputs
310 )
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/base.py:300, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
293 run_manager = callback_manager.on_chain_start(
294 dumpd(self),
295 inputs,
296 name=run_name,
297 )
298 try:
299 outputs = (
--> 300 self._call(inputs, run_manager=run_manager)
301 if new_arg_supported
302 else self._call(inputs)
303 )
304 except BaseException as e:
305 run_manager.on_chain_error(e)
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/llm.py:93, in LLMChain._call(self, inputs, run_manager)
88 def _call(
89 self,
90 inputs: Dict[str, Any],
91 run_manager: Optional[CallbackManagerForChainRun] = None,
92 ) -> Dict[str, str]:
---> 93 response = self.generate([inputs], run_manager=run_manager)
94 return self.create_outputs(response)[0]
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/llm.py:103, in LLMChain.generate(self, input_list, run_manager)
101 """Generate LLM result from inputs."""
102 prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
--> 103 return self.llm.generate_prompt(
104 prompts,
105 stop,
106 callbacks=run_manager.get_child() if run_manager else None,
107 **self.llm_kwargs,
108 )
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/llms/base.py:498, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
490 def generate_prompt(
491 self,
492 prompts: List[PromptValue],
(...)
495 **kwargs: Any,
496 ) -> LLMResult:
497 prompt_strings = [p.to_string() for p in prompts]
--> 498 return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/llms/base.py:647, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs)
632 raise ValueError(
633 "Asked to cache, but no cache found at `langchain.cache`."
634 )
635 run_managers = [
636 callback_manager.on_llm_start(
637 dumpd(self),
(...)
645 )
646 ]
--> 647 output = self._generate_helper(
648 prompts, stop, run_managers, bool(new_arg_supported), **kwargs
649 )
650 return output
651 if len(missing_prompts) > 0:
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/llms/base.py:535, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
533 for run_manager in run_managers:
534 run_manager.on_llm_error(e)
--> 535 raise e
536 flattened_outputs = output.flatten()
537 for manager, flattened_output in zip(run_managers, flattened_outputs):
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/llms/base.py:522, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
512 def _generate_helper(
513 self,
514 prompts: List[str],
(...)
518 **kwargs: Any,
519 ) -> LLMResult:
520 try:
521 output = (
--> 522 self._generate(
523 prompts,
524 stop=stop,
525 # TODO: support multiple run managers
526 run_manager=run_managers[0] if run_managers else None,
527 **kwargs,
528 )
529 if new_arg_supported
530 else self._generate(prompts, stop=stop)
531 )
532 except BaseException as e:
533 for run_manager in run_managers:
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/llms/base.py:1048, in LLM._generate(self, prompts, stop, run_manager, **kwargs)
1042 for prompt in prompts:
1043 text = (
1044 self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
1045 if new_arg_supported
1046 else self._call(prompt, stop=stop, **kwargs)
1047 )
-> 1048 generations.append([Generation(text=text)])
1049 return LLMResult(generations=generations)
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/load/serializable.py:97, in Serializable.__init__(self, **kwargs)
96 def __init__(self, **kwargs: Any) -> None:
---> 97 super().__init__(**kwargs)
98 self._lc_kwargs = kwargs
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for Generation
text
str type expected (type=type_error.str)
```
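For what it's worth, the ValidationError at the bottom says `Generation.text` received something that is not a `str`, which points at the `ContentHandler` (not shown above): `transform_output` has to return a plain string. A hedged sketch of the shape I believe is expected — the exact JSON keys depend on how the model container responds (a TGI-style `[{"generated_text": ...}]` is assumed here):

```python
import json
from langchain.llms.sagemaker_endpoint import LLMContentHandler

class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # output is the endpoint response body; the return value must be a plain str
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json[0]["generated_text"]  # assumption: TGI-style response
```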
### Suggestion:
When I use the OpenAI API it works; the problem only occurs with the SageMaker endpoint.
| Issue: can't use llm sagemaker endpoint as load_qa_chain | https://api.github.com/repos/langchain-ai/langchain/issues/14886/comments | 4 | 2023-12-19T04:04:06Z | 2024-04-16T16:20:14Z | https://github.com/langchain-ai/langchain/issues/14886 | 2,047,879,728 | 14,886 |
[
"hwchase17",
"langchain"
] | I have developed a Flutter app with a chatroom feature and have successfully implemented chatting in the chatroom interface using the OpenAI API with ChatGPT. Now, I am looking to use an API to establish a connection between LangChain and my mobile app. What should I do to achieve this? I am new to this, so any help would be appreciated. | How to connect LangChain application to a mobile app using an API | https://api.github.com/repos/langchain-ai/langchain/issues/14885/comments | 1 | 2023-12-19T03:58:57Z | 2024-03-26T16:07:51Z | https://github.com/langchain-ai/langchain/issues/14885 | 2,047,875,945 | 14,885
[
"hwchase17",
"langchain"
] | I am working on a project where I have to use multiple PDF docs to respond to user queries.
I have a load method to load PDFs from a directory.
```python
def loadFiles():
    loader = DirectoryLoader('./static/upload/', glob="./*.pdf", loader_cls=PyPDFLoader)
    documents = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=50)
    texts = text_splitter.split_documents(documents)
    return texts
```
I am creating the Chroma DB with the code below:
```python
def createDb(load, embeddings, persist_directory):
    max_input_size = 3000
    num_output = 256
    chunk_size_limit = 1000  # token window size per document
    max_chunk_overlap = 80  # overlap for each token fragment
    vectordb = Chroma.from_documents(documents=load, embedding=embeddings, persist_directory=persist_directory)
    vectordb.persist()
    return vectordb
```
Now I am querying Chroma:
```python
qa_chain = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0, model_name="text-davinci-003"),
    retriever=vectordb.as_retriever(),
    chain_type="stuff",
    chain_type_kwargs=chain_type_kwargs,
    return_source_documents=True
)
```
However, I am getting a response, but in some cases it is not the full response, as shown below:
My source pdf has following contents:
[source file](https://i.stack.imgur.com/Xaz7U.png)
while my response is showing only some parts as shown below:
[chromadb response](https://i.stack.imgur.com/CY4dL.png)
I tried increasing the chunk_overlap size as shown in createDb(), but it did not work. I expect the full response from Chroma, and the response should come from the given PDFs.
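One thing I have not ruled out (an assumption on my side): the truncation may come from the completion model rather than Chroma — LangChain's `OpenAI` LLM defaults to `max_tokens=256`, which cuts off long answers. A sketch of what I would try:

```python
qa_chain = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0, model_name="text-davinci-003", max_tokens=1000),
    retriever=vectordb.as_retriever(search_kwargs={"k": 4}),
    chain_type="stuff",
    chain_type_kwargs=chain_type_kwargs,
    return_source_documents=True
)
```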
I am new to this, so any help would be appreciated. | how to increase the response size of chromadb | https://api.github.com/repos/langchain-ai/langchain/issues/14880/comments | 3 | 2023-12-19T01:59:00Z | 2024-03-29T16:07:30Z | https://github.com/langchain-ai/langchain/issues/14880 | 2,047,779,892 | 14,880
[
"hwchase17",
"langchain"
] | ### System Info
Platform: Ubuntu 22.04
Python: 3.11.6
Langchain: 0.0.351
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When the program is first initialized with `__setup_client()` and `__should_reingest()` returns `True`, `__get_new_client()` works as intended. However, if `reingest()` is called afterward, `__get_new_client()` returns the error below.
Relevant code:
```python
def __setup_client(self) -> None:
    if self.__should_reingest():
        self.db = self.__get_new_client()
    else:
        self.db = self.__get_existing_client()

def reingest(self) -> None:
    self.db = self.__get_new_client()

def __get_new_client(self):
    if os.path.exists(self.persist_directory):
        shutil.rmtree(self.persist_directory)

    docs = self.__get_docs()
    client = Chroma.from_documents(
        docs, self.embedding_function, persist_directory=self.persist_directory)

    with open(f'{self.persist_directory}/date.txt', 'w') as f:
        f.write(f'{datetime.date.today()}')

    return client
```
Error:
```
Traceback (most recent call last):
...
File line 26, in reingest
self.cauldron.reingest()
File line 19, in reingest
self.db = self.__get_new_client()
^^^^^^^^^^^^^^^^^^^^^^^
File line 51, in __get_new_client
client = Chroma.from_documents(
^^^^^^^^^^^^^^^^^^^^^^
File ".../.pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py", line 771, in from_documents
return cls.from_texts(
^^^^^^^^^^^^^^^
File ".../.pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py", line 729, in from_texts
chroma_collection.add_texts(
File ".../.pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py", line 297, in add_texts
self._collection.upsert(
File ".../.pyenv/versions/3.11.6/lib/python3.11/site-packages/chromadb/api/models/Collection.py", line 459, in upsert
self._client._upsert(
File ".../.pyenv/versions/3.11.6/lib/python3.11/site-packages/chromadb/telemetry/opentelemetry/__init__.py", line 127, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File ".../.pyenv/versions/3.11.6/lib/python3.11/site-packages/chromadb/api/segment.py", line 446, in _upsert
self._producer.submit_embeddings(coll["topic"], records_to_submit)
File ".../.pyenv/versions/3.11.6/lib/python3.11/site-packages/chromadb/telemetry/opentelemetry/__init__.py", line 127, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File ".../.pyenv/versions/3.11.6/lib/python3.11/site-packages/chromadb/db/mixins/embeddings_queue.py", line 172, in submit_embeddings
results = cur.execute(sql, params).fetchall()
^^^^^^^^^^^^^^^^^^^^^^^^
sqlite3.OperationalError: attempt to write a readonly database
```
### Expected behavior
No error returned | Calling Chroma.from_documents() returns sqlite3.OperationalError: attempt to write a readonly database, but only sometimes | https://api.github.com/repos/langchain-ai/langchain/issues/14872/comments | 24 | 2023-12-19T00:02:10Z | 2024-07-21T21:44:48Z | https://github.com/langchain-ai/langchain/issues/14872 | 2,047,680,474 | 14,872 |
[
"hwchase17",
"langchain"
] | ### System Info
I'm using Langchain version '0.0.350' in Databricks
using the following libraries:
`from langchain_experimental.sql import SQLDatabaseChain`
`from langchain import PromptTemplate`
`from langchain.sql_database import SQLDatabase`
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm using Langchain version '0.0.350'
```python
from langchain_experimental.sql import SQLDatabaseChain
from langchain import PromptTemplate
from langchain.sql_database import SQLDatabase

mytemplate = """
You are a SQL expert. Given an input question, first create a syntactically correct sql query run on then look at the results of the query and return the answer.
Use the following format:
Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here
Only use the following tables:
{question}
"""

dbs = SQLDatabase.from_uri(conn_str)
db_prompt = PromptTemplate(input_variables=['question'], template=mytemplate)
db_chain = SQLDatabaseChain.from_llm(llm=llms, db=dbs, prompt=db_prompt, verbose=True)
db_chain.run(question='question here')
```

This raises:

```
ValueError: Missing some input keys: {'query'}
```
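As far as I can tell (not fully verified), `SQLDatabaseChain` expects its input under the key `query`, and its prompt is normally written in terms of `{input}`, `{table_info}` and `{dialect}` rather than `{question}`, which would explain the missing-key error. A sketch of the form I believe works:

```python
mytemplate = """You are a SQL expert. Given an input question, first create a syntactically
correct {dialect} query to run, then look at the results of the query and return the answer.
Use the following format:
Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here
Only use the following tables:
{table_info}
Question: {input}"""

db_prompt = PromptTemplate(input_variables=["input", "table_info", "dialect"], template=mytemplate)
db_chain = SQLDatabaseChain.from_llm(llm=llms, db=dbs, prompt=db_prompt, verbose=True)

# the chain's input key is "query", so either of these:
db_chain.run("question here")
db_chain.run(query="question here")
```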
### Expected behavior
I expect to get results of a query | SQLDatabaseChain.from_llm returning ValueError: Missing some input keys: {'query'} when custom template is used | https://api.github.com/repos/langchain-ai/langchain/issues/14865/comments | 1 | 2023-12-18T21:04:10Z | 2024-03-25T16:08:37Z | https://github.com/langchain-ai/langchain/issues/14865 | 2,047,451,496 | 14,865 |
[
"hwchase17",
"langchain"
] | ### System Info
platform: Vagrant - Ubuntu 2204
python: 3.9.18
langchain: 0.0.350
langchain-core: 0.1.1
langchain-community: 0.0.3
litellm: 1.15.1
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Run the code:
``` python
from langchain.chat_models import ChatLiteLLM
from langchain.schema import HumanMessage
# Initialize ChatLiteLLM
chat = ChatLiteLLM(
model="together_ai/mistralai/Mixtral-8x7B-Instruct-v0.1",
verbose=True,
temperature=0.0,
)
text = "Write me a poem about the blue sky"
messages = [HumanMessage(content=text)]
print(chat(messages).content)
```
2. Error message:
``` bash
AttributeError: 'ValueError' object has no attribute 'status_code'
```
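A debugging step that might narrow this down (not yet tried here): calling LiteLLM directly with the same model string, which should surface the underlying ValueError instead of the masked AttributeError:

```python
import litellm

# same model string as in the LangChain snippet above
resp = litellm.completion(
    model="together_ai/mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[{"role": "user", "content": "Write me a poem about the blue sky"}],
)
print(resp)
```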
### Expected behavior
I can't get ChatLiteLLM to work with Together AI. I expect **ChatLiteLLM** to work correctly and to output the result. | (ChatLiteLLM - Together AI) AttributeError: 'ValueError' object has no attribute 'status_code' | https://api.github.com/repos/langchain-ai/langchain/issues/14863/comments | 2 | 2023-12-18T20:50:19Z | 2024-03-31T16:05:45Z | https://github.com/langchain-ai/langchain/issues/14863 | 2,047,431,121 | 14,863 |
[
"hwchase17",
"langchain"
] | ### System Info
**Below is my Linux box:**
Linux 5.15.0-1014-azure #17~20.04.1-Ubuntu SMP Thu Jun 23 20:01:51 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
**Below are the libraries within my conda environment:**
google-auth 2.24.0
google-search-results 2.4.2
googleapis-common-protos 1.61.0
langchain 0.0.349
langchain-cli 0.0.19
langchain-community 0.0.1
langchain-core 0.0.13
langchainhub 0.1.14
requests 2.31.0
requests-oauthlib 1.3.1
types-requests 2.31.0.10
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**Below is source code:**
```
import requests
import json
import os
SERPAPI_API_KEY=os.environ["SERPAPI_API_KEY"]
print(f"{SERPAPI_API_KEY=}")
url = "https://google.serper.dev/search"
payload = json.dumps({
"q": "apple inc"
})
headers = {
'X-API-KEY': SERPAPI_API_KEY,
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print("************ result from request ***************")
print(response.text)
from langchain.utilities import SerpAPIWrapper
search = SerpAPIWrapper(serpapi_api_key=SERPAPI_API_KEY)
res = search.run("apple inc")
print("************ result from langchain ***************")
print(f"{res=}")
```
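One observation that may explain this (an assumption, not verified against my account): the raw request above goes to `google.serper.dev`, which is Serper's API, while `SerpAPIWrapper` talks to serpapi.com — a different service with different keys. A Serper key would go through `GoogleSerperAPIWrapper` instead:

```python
from langchain.utilities import GoogleSerperAPIWrapper

# same key as the raw request above, but passed to the Serper wrapper
search = GoogleSerperAPIWrapper(serper_api_key=SERPAPI_API_KEY)
print(search.run("apple inc"))
```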
### Expected behavior
If the key works for the plain requests call, then SerpAPIWrapper shouldn't fail with an invalid-key error. | SerpAPIWrapper doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/14855/comments | 2 | 2023-12-18T18:36:20Z | 2024-03-25T16:08:27Z | https://github.com/langchain-ai/langchain/issues/14855 | 2,047,220,983 | 14,855
[
"hwchase17",
"langchain"
] | ### System Info
langchain: latest (0.0.350)
python: 3.10.12
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code to reproduce (based on [code from docs](https://python.langchain.com/docs/modules/agents/agent_types/openai_tools))
```
import openai
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad.openai_tools import (
format_to_openai_tool_messages,
)
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser
from langchain.chat_models import AzureChatOpenAI, ChatOpenAI
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.tools import BearlyInterpreterTool, DuckDuckGoSearchRun
from langchain.tools.render import format_tool_to_openai_tool
from settings import AppSettings
openai.api_type = AppSettings.OPENAI_API_TYPE or "azure"
openai.api_version = AppSettings.OPENAI_API_VERSION or "2023-03-15-preview"
openai.api_base = AppSettings.AZURE_OPENAI_API_ENDPOINT
openai.api_key = AppSettings.AZURE_OPENAI_API_KEY
lc_tools = [DuckDuckGoSearchRun()]
oai_tools = [format_tool_to_openai_tool(tool) for tool in lc_tools]
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
llm = AzureChatOpenAI(
openai_api_version=AppSettings.OPENAI_API_VERSION, # type: ignore TODO: I don't know why this is an error despite being in the class
azure_deployment=AppSettings.AZURE_OPENAI_DEPLOYMENT,
temperature=0,
streaming=True,
verbose=True,
)
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_to_openai_tool_messages(
x["intermediate_steps"]
),
}
| prompt
| llm.bind(tools=oai_tools)
| OpenAIToolsAgentOutputParser()
)
agent_executor = AgentExecutor(agent=agent, tools=lc_tools, verbose=True)
agent_executor.invoke(
{"input": "What's the average of the temperatures in LA, NYC, and SF today?"}
)
```
Logs:
```
> Entering new AgentExecutor chain...
ic| merged[k]: {'arguments': '{"qu', 'name': 'duckduckgo_search'}
v: <OpenAIObject at 0x7fdb750c7920> JSON: {
"arguments": "ery\":"
}
type(merged[k]): <class 'dict'>
type(v): <class 'openai.openai_object.OpenAIObject'>
isinstance(merged[k], dict): True
isinstance(v, dict): True
Traceback (most recent call last):
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/test-issue.py", line 56, in <module>
agent_executor.invoke(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 89, in invoke
return self(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 312, in __call__
raise e
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 306, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1312, in _call
next_step_output = self._take_next_step(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1038, in _take_next_step
[
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1038, in <listcomp>
[
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1066, in _iter_next_step
output = self.agent.plan(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 461, in plan
output = self.runnable.invoke(inputs, config={"callbacks": callbacks})
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1514, in invoke
input = step.invoke(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2937, in invoke
return self.bound.invoke(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 160, in invoke
self.generate_prompt(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 491, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 378, in generate
raise e
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 368, in generate
self._generate_with_cache(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 524, in _generate_with_cache
return self._generate(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_community/chat_models/openai.py", line 428, in _generate
return generate_from_stream(stream_iter)
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 65, in generate_from_stream
generation += chunk
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/outputs/chat_generation.py", line 62, in __add__
message=self.message + other.message,
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/messages/ai.py", line 52, in __add__
additional_kwargs=self._merge_kwargs_dict(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/messages/base.py", line 128, in _merge_kwargs_dict
merged[k][i] = self._merge_kwargs_dict(merged[k][i], e)
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/messages/base.py", line 114, in _merge_kwargs_dict
raise TypeError(
TypeError: additional_kwargs["function"] already exists in this message, but with a different type.
```
### Expected behavior
No errors and the same result as without streaming=True.
Last week PR [#14613](https://github.com/langchain-ai/langchain/pull/14613) fixed issue #13442. I tested the fix using the same scenario I shared above and it worked as expected. However, today I tested it again and the kwargs merge fails because the types don't match, even though both values are instances of **dict**.
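For illustration, a minimal sketch of the suspected failure mode, using a stand-in class rather than the real OpenAI object (this is an assumption about the merge check, not a confirmed diagnosis):
```python
# The streamed tool-call deltas arrive as a dict subclass (the OpenAI response
# object), while earlier chunks hold plain dicts; an exact type comparison
# rejects the merge even though both behave like dicts.
class FakeOpenAIObject(dict):
    """Stand-in for openai's response object, which subclasses dict."""

left = {"arguments": '{"qu', "name": "duckduckgo_search"}
right = FakeOpenAIObject(arguments='ery":')

print(type(left) is type(right))  # False -> would trigger the TypeError
print(isinstance(right, dict))    # True  -> an isinstance-based merge would succeed
```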
| BaseMessageChunk cannot merge function key when using open ai tools and streaming. | https://api.github.com/repos/langchain-ai/langchain/issues/14853/comments | 3 | 2023-12-18T17:36:47Z | 2023-12-18T19:28:23Z | https://github.com/langchain-ai/langchain/issues/14853 | 2,047,135,178 | 14,853 |
[
"hwchase17",
"langchain"
] | This GitHub App is used by Google employees to monitor GitHub repositories. It sends notifications in Chrome for Reviews, updates to Pull Requests, Mentions, etc.
https://github.com/apps/g3n-github
Could it be added to the LangChain repository to make it easier for Google engineers to contribute in a timely manner? (Must be done by an organization administrator)
@baskaryan @hwchase17
Thanks! | Add g3n-github app to repository | https://api.github.com/repos/langchain-ai/langchain/issues/14851/comments | 4 | 2023-12-18T16:20:31Z | 2024-06-08T16:08:00Z | https://github.com/langchain-ai/langchain/issues/14851 | 2,047,010,816 | 14,851 |
[
"hwchase17",
"langchain"
] | ### Feature request
There is a new feature with Azure Open AI to get deterministic output.
To use this feature, we need to pass extra parameters when we initialize AzureOpenAI.
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/reproducible-output?tabs=pyton
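For illustration, a rough sketch of how the seed might be passed through today via `model_kwargs` (untested; the deployment name and API version are placeholders, not confirmed values):
```python
from langchain.chat_models import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="gpt-35-turbo",          # placeholder deployment name
    openai_api_version="2023-12-01-preview",  # placeholder API version with seed support
    model_kwargs={"seed": 42},                # forwarded to the underlying completions call
)
```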
### Motivation
Getting deterministic output is very important for most generative AI applications, so this feature would be a huge help.
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/reproducible-output?tabs=pyton
### Your contribution
Yes, depending on what change is needed here, I can help. | Reproducible output feature with Azure Open AI (support for seed parameter) | https://api.github.com/repos/langchain-ai/langchain/issues/14850/comments | 1 | 2023-12-18T15:45:57Z | 2024-03-25T16:08:22Z | https://github.com/langchain-ai/langchain/issues/14850 | 2,046,949,854 | 14,850 |
[
"hwchase17",
"langchain"
] | ### Feature request
I'm thinking about adding a pre-commit configuration file (`.pre-commit-config.yaml`) to the codebase, as it helps improve code quality and maintain consistency within the project. It will allow all current and future contributors to keep a consistent code style. Adding this will help us by running some automated checks before anyone makes a commit; a minimal sketch of such a config is shown after the list below.
1. [trailing-whitespace](https://github.com/pre-commit/pre-commit-hooks?tab=readme-ov-file#trailing-whitespace) (Removes trailing whitespace at the end of lines)
2. [end-of-file-fixer](https://github.com/pre-commit/pre-commit-hooks?tab=readme-ov-file#end-of-file-fixer) (Ensures that files end with a newline character)
3. [check-yaml](https://github.com/pre-commit/pre-commit-hooks?tab=readme-ov-file#check-yaml) (Attempts to load all yaml files to verify syntax)
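For illustration, a minimal sketch of what such a `.pre-commit-config.yaml` might look like (the hook revision is a placeholder):
```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0  # placeholder version
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
```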
### Motivation
These checks are very common and used in many big `Python` based organizations including [scikit-learn](https://github.com/scikit-learn/scikit-learn/blob/main/.pre-commit-config.yaml), [jax](https://github.com/Sai-Suraj-27/jax/blob/main/.pre-commit-config.yaml), [pandas](https://github.com/Sai-Suraj-27/pandas/blob/main/.pre-commit-config.yaml#L72), etc.
They help maintain a consistent style across the repository and ensure the same formatting and quality for all contributors.
### Your contribution
I have good experience adding/modifying this file in large codebases (https://github.com/unifyai/ivy/pull/22220, https://github.com/unifyai/ivy/pull/22974, https://github.com/gprMax/gprMax/pull/354, https://github.com/pandas-dev/pandas/pull/55277).
So, if you think it is useful, let me know and I will make a PR. Thank you. | Adding a `.pre-commit-config.yaml` file for maintaining consistent style and code quality | https://api.github.com/repos/langchain-ai/langchain/issues/14845/comments | 1 | 2023-12-18T14:33:36Z | 2024-03-25T16:08:18Z | https://github.com/langchain-ai/langchain/issues/14845 | 2,046,801,582 | 14,845 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain: 0.0.350
Python: 3.8.8
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the code below:
```python
from langchain.embeddings.azure_openai import AzureOpenAIEmbeddings
from azure.identity import AzureCliCredential
credential = AzureCliCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")
embeddings = AzureOpenAIEmbeddings(
deployment=model_name_retriever,
chunk_size=16,
azure_endpoint=azure_endpoint,
azure_ad_token=token,
openai_api_version=api_version,
http_client=http_client
)
```
Then,
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-b617513b6ef1> in <module>
----> 6 embeddings = AzureOpenAIEmbeddings(
7 deployment=model_name_retriever,
8 chunk_size=16,
/opt/conda/lib/python3.8/site-packages/pydantic/main.cpython-38-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()
/opt/conda/lib/python3.8/site-packages/pydantic/main.cpython-38-x86_64-linux-gnu.so in pydantic.main.validate_model()
/opt/conda/lib/python3.8/site-packages/langchain_community/embeddings/azure_openai.py in validate_environment(cls, values)
82 "AZURE_OPENAI_ENDPOINT"
83 )
---> 84 values["azure_ad_token"] = values["azure_ad_token"] or os.getenv(
85 "AZURE_OPENAI_AD_TOKEN"
86 )
KeyError: 'azure_ad_token'
```
This error also happens in AzureChatOpenAI. This behavior does not match the [current API](https://api.python.langchain.com/en/stable/chat_models/langchain_community.chat_models.azure_openai.AzureChatOpenAI.html#langchain_community.chat_models.azure_openai.AzureChatOpenAI).
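For illustration, a sketch of the kind of defensive lookup that would avoid the KeyError (an assumption about a possible fix, not the actual patch):
```python
import os

def resolve_azure_ad_token(values: dict) -> dict:
    # Fall back to the environment variable only when the key is absent,
    # instead of indexing values["azure_ad_token"] unconditionally.
    values["azure_ad_token"] = values.get("azure_ad_token") or os.getenv(
        "AZURE_OPENAI_AD_TOKEN"
    )
    return values
```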
### Expected behavior
The class should be defined successfully. | azure_ad_token variable does not work for AzureChatOpenAI and AzureOpenAIEmbeddings | https://api.github.com/repos/langchain-ai/langchain/issues/14843/comments | 3 | 2023-12-18T12:56:27Z | 2023-12-19T01:03:47Z | https://github.com/langchain-ai/langchain/issues/14843 | 2,046,610,982 | 14,843 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
safety_settings argument is missing with the `ChatVertexAI` model
### Suggestion:
Should be able to define the safety settings. For example:
```python
# Imports assumed for this example (the Harm enums are taken from the Vertex AI
# SDK's preview generative_models module; adjust to your SDK version).
from langchain.chat_models import ChatVertexAI
from vertexai.preview.generative_models import HarmBlockThreshold, HarmCategory

safety_settings = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
}
llm = ChatVertexAI(safety_settings=safety_settings)
``` | safety_settings argument is missing with the ChatVertexAI mode | https://api.github.com/repos/langchain-ai/langchain/issues/14841/comments | 11 | 2023-12-18T11:21:49Z | 2024-03-27T01:00:11Z | https://github.com/langchain-ai/langchain/issues/14841 | 2,046,443,649 | 14,841 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am working on a project where I need to fetch the content of sub-URLs given only the base URL.
Is there any method in LangChain to fetch all the content of a site's sub-URLs by giving only its base URL?
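For illustration, a minimal sketch of one approach (untested; assumes `RecursiveUrlLoader` and BeautifulSoup are available, and the depth and extractor are placeholders):
```python
from bs4 import BeautifulSoup
from langchain.document_loaders import RecursiveUrlLoader

loader = RecursiveUrlLoader(
    url="https://www.example.com",
    max_depth=2,  # how many link levels below the base URL to crawl
    extractor=lambda html: BeautifulSoup(html, "html.parser").text,
)
docs = loader.load()
print([doc.metadata.get("source") for doc in docs])  # the discovered sub-URLs
```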
### Suggestion:
_No response_ | Issue: Is there any method in Langchain to fetch all the content of its Sub URL by giving its base URL only | https://api.github.com/repos/langchain-ai/langchain/issues/14837/comments | 1 | 2023-12-18T07:52:57Z | 2024-03-25T16:08:12Z | https://github.com/langchain-ai/langchain/issues/14837 | 2,045,948,323 | 14,837 |
[
"hwchase17",
"langchain"
] | ### Feature request
Currently we use PGVector as our vector database in combination with FastAPI. One issue is that in the indexing process, which happens over an API endpoint, we found out that the current implementation of PGVector blocks the event loop due to its synchronous implementation. FastAPI offers async/await and SQLAlchemy also allows using async sessions, so I would like to add an async alternative for PGVector.
### Motivation
Non-blocking code execution keeps the main thread of our application free and provides far better throughput than synchronous execution.
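For illustration, a rough sketch of how the proposed async interface could be consumed from FastAPI (the method name and setup are assumptions about the proposal, not the current PGVector API):
```python
from fastapi import FastAPI

app = FastAPI()

@app.post("/index")
async def index_texts(texts: list[str]) -> dict:
    # `vectorstore` is assumed to be an async-capable PGVector instance
    # configured elsewhere; an awaitable add keeps the event loop free
    # while the Postgres round-trips are in flight.
    await vectorstore.aadd_texts(texts)
    return {"indexed": len(texts)}
```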
### Your contribution
I already coded a prototype, which works so far and does not block the main thread of the API. I would fully implement this async solution. Currently I only have to discuss this with my boss, since I did this during worktime. | Async PGVector to make LangChain with Postgres more performant | https://api.github.com/repos/langchain-ai/langchain/issues/14836/comments | 8 | 2023-12-18T07:45:33Z | 2024-07-07T16:59:56Z | https://github.com/langchain-ai/langchain/issues/14836 | 2,045,936,457 | 14,836 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Below is my code:
```python
def generate_embeddings(urls=None, persist_directory=None):
    with sync_playwright() as p:
        browser = p.chromium.launch()
        navigate_tool = NavigateTool(sync_browser=browser)
        extract_hyperlinks_tool = ExtractHyperlinksTool(sync_browser=browser)
        for url in urls:
            print(url, "url is ----------------------")
            await navigate_tool._arun(url)
            print(await navigate_tool._arun(url))
            hyperlinks = await extract_hyperlinks_tool._arun()
            for link in hyperlinks:
                print(link, "link is ------------------------------------------")
        browser.close()
    asyncio.run(main())
    loader = UnstructuredURLLoader(urls=urls)
    urlDocument = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=50)
    texts = text_splitter.split_documents(documents=urlDocument)
    if texts:
        embedding = OpenAIEmbeddings()
        Chroma.from_documents(documents=texts, embedding=embedding, persist_directory=persist_directory)
        file_crawl_status = True
        file_index_status = True
    else:
        file_crawl_status = False
        file_index_status = False
    return file_crawl_status, file_index_status
```
And I am getting this error:
```
/home/hs/CustomBot/accounts/common_langcain_qa.py:121: RuntimeWarning: coroutine 'NavigateTool._arun' was never awaited
navigate_tool._arun(url)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
<coroutine object NavigateTool._arun at 0x7ffbaa2871c0>
/home/hs/CustomBot/accounts/common_langcain_qa.py:122: RuntimeWarning: coroutine 'NavigateTool._arun' was never awaited
print(navigate_tool._arun(url))
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
<coroutine object ExtractHyperlinksTool._arun at 0x7ffbaa2871c0> link is ------------------------------------------
Internal Server Error: /create-project/
Traceback (most recent call last):
File "/home/hs/env/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/home/hs/env/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/hs/env/lib/python3.8/site-packages/django/views/generic/base.py", line 69, in view
return self.dispatch(request, *args, **kwargs)
File "/home/hs/env/lib/python3.8/site-packages/django/views/generic/base.py", line 101, in dispatch
return handler(request, *args, **kwargs)
File "/home/hs/CustomBot/user_projects/views.py", line 1776, in post
file_crawl_status, file_index_status = generate_embeddings(
File "/home/hs/CustomBot/accounts/common_langcain_qa.py", line 126, in generate_embeddings
browser.close()
File "/home/hs/env/lib/python3.8/site-packages/playwright/sync_api/_generated.py", line 9869, in close
self._sync("browser.close", self._impl_obj.close())
File "/home/hs/env/lib/python3.8/site-packages/playwright/_impl/_sync_base.py", line 100, in _sync
task = self._loop.create_task(coro)
File "/usr/lib/python3.8/asyncio/base_events.py", line 429, in create_task
self._check_closed()
File "/usr/lib/python3.8/asyncio/base_events.py", line 508, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
```
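For illustration, a rough sketch of one way the Playwright tools can be driven synchronously so no coroutine is left unawaited (untested; it assumes the tools' synchronous `run` API and a sync browser):
```python
from playwright.sync_api import sync_playwright
from langchain.tools.playwright import ExtractHyperlinksTool, NavigateTool

urls = ["https://www.example.com"]  # illustrative

with sync_playwright() as p:
    browser = p.chromium.launch()
    navigate_tool = NavigateTool(sync_browser=browser)
    extract_hyperlinks_tool = ExtractHyperlinksTool(sync_browser=browser)
    for url in urls:
        navigate_tool.run({"url": url})           # synchronous call instead of _arun
        links = extract_hyperlinks_tool.run({})   # hyperlinks of the current page
        print(links)
    browser.close()
```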
### Suggestion:
_No response_ | Issue: I'm currently working on a project where I need to fetch all the sub-URLs from a website using Langchain. | https://api.github.com/repos/langchain-ai/langchain/issues/14834/comments | 1 | 2023-12-18T05:48:30Z | 2024-03-25T16:08:07Z | https://github.com/langchain-ai/langchain/issues/14834 | 2,045,789,011 | 14,834 |
[
"hwchase17",
"langchain"
] | ### Feature request
There are already many tracers in LangChain (https://github.com/langchain-ai/langchain/tree/master/libs/langchain/langchain/callbacks/tracers).
I would like to ask for adding an OpentelemetryTracer.
### Motivation
Community tracing
### Your contribution
Would like to contribute | Add an OpentelemetryTracer in LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/14832/comments | 4 | 2023-12-18T03:52:49Z | 2024-03-27T16:08:32Z | https://github.com/langchain-ai/langchain/issues/14832 | 2,045,680,808 | 14,832 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
My code is :
```
llm = ChatOpenAI(temperature=0, verbose=True, model="gpt-3.5-turbo-16k")
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_type = AgentType.ZERO_SHOT_REACT_DESCRIPTION
agent_executor_1 = initialize_agent()
agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True, agent_type=agent_type)
agent_executor.run(input="How many orders in 2023-12-15")
```
I got the result with the following format:
```
> Entering new AgentExecutor chain...
Thought: ...
Action: ...
Action Input: ...
Observation: ...
Thought: ...
AI: ...
```
I want to save the AI result to a variable. I tried `print(agent_executor.run)` but it didn't work. Is there any way that suits my need?
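For illustration, a minimal sketch using the chain above: `run` returns the final answer as a string, so it can be assigned directly.
```python
answer = agent_executor.run(input="How many orders in 2023-12-15")
print(answer)  # only the final AI answer, without the intermediate steps
```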
### Suggestion:
_No response_ | How can I grab only the AI answer of my langchain agent? | https://api.github.com/repos/langchain-ai/langchain/issues/14828/comments | 1 | 2023-12-18T00:32:57Z | 2024-03-25T16:07:56Z | https://github.com/langchain-ai/langchain/issues/14828 | 2,045,453,142 | 14,828 |
[
"hwchase17",
"langchain"
] | ### System Info
mac os latest, latest LangChain
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [x] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
ConversationBufferWindowMemory(
    chat_memory=PostgresChatMessageHistory(
        session_id=session_id, connection_string=os.getenv("DB_URL")
    ),
    return_messages=True,
    memory_key=memory_key,
)
```
Try to run a chat with this.
### Expected behavior
I think it should save the messages as they are to SQL, but use the properties of the ConversationSummaryBufferMemory for the chat itself. In reality it just uses SQL and the history is just all the messages. | PostgresChatHistory or any other database for storing history does not work with ConversationSummaryBufferMemory(or other memory taht produces 'system' massage type after summarisation | https://api.github.com/repos/langchain-ai/langchain/issues/14822/comments | 9 | 2023-12-17T19:00:18Z | 2024-03-30T16:06:06Z | https://github.com/langchain-ai/langchain/issues/14822 | 2,045,336,301 | 14,822 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Is there a similar method for using `gemini-pro` with `ConversationalRetrievalChain.from_llm`, as there is for utilizing models from VertexAI with `ChatVertexAI` or `VertexAI`, where you specify the `model_name`?
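For illustration, a short sketch of one way this is commonly wired up (untested; assumes the `langchain-google-genai` package and an existing `retriever`):
```python
from langchain.chains import ConversationalRetrievalChain
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro")
qa = ConversationalRetrievalChain.from_llm(llm=llm, retriever=retriever)
result = qa({"question": "What does the document say?", "chat_history": []})
```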
### Suggestion:
_No response_ | Gemini with ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/14819/comments | 2 | 2023-12-17T17:26:06Z | 2023-12-20T08:45:09Z | https://github.com/langchain-ai/langchain/issues/14819 | 2,045,301,971 | 14,819 |
[
"hwchase17",
"langchain"
] | null | When using the OpenAIFunctionsAgentOutputParser() to parse agent output meet an error. | https://api.github.com/repos/langchain-ai/langchain/issues/14816/comments | 2 | 2023-12-17T15:44:02Z | 2024-01-22T07:05:03Z | https://github.com/langchain-ai/langchain/issues/14816 | 2,045,267,345 | 14,816 |
[
"hwchase17",
"langchain"
] | ### System Info
In this file: https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/together.py
there is no `prompt` variable; however, it is available in the documentation: https://docs.together.ai/reference/inference
```python
if config.model == "together":
return Together(
model="togethercomputer/StripedHyena-Nous-7B",
temperature=0.7,
max_tokens=128,
top_k=1,
together_api_key=config.together_api_key,
prompt="The capital of France is"
)
```
error I get is:
```bash
pydantic.v1.error_wrappers.ValidationError: 1 validation error for Together
prompt
extra fields not permitted (type=value_error.extra)
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Try to run a Together AI model and pass `prompt` to the model; it will fail. My goal is to set a system prompt.
### Expected behavior
Accept `prompt` as the system prompt. | Missing "Prompt" in Together ai | https://api.github.com/repos/langchain-ai/langchain/issues/14813/comments | 3 | 2023-12-17T11:45:30Z | 2023-12-29T04:03:34Z | https://github.com/langchain-ai/langchain/issues/14813 | 2,045,185,889 | 14,813 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain.__version__ '0.0.350'
python 3.11.5
ollama version is 0.1.16
pyngrok-7.0.3
### Who can help?
@agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When I use the combination Ollama + LangChain + Google Colab + ngrok, I get an error.
(The models are downloaded; I can see them in the Ollama list.)
```
llm = Ollama(
model="run deepseek-coder:6.7b", base_url="https://e12b-35-231-226-171.ngrok.io/")
responce = llm.predict('What do you know about Falco?')
```
The error output (truncated by the notebook UI):
```
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
File [~/miniconda3/envs/llm/lib/python3.11/site-packages/requests/models.py:971](https://file+.vscode-resource.vscode-cdn.net/home/serhiy/Scalarr/llm/%20RAG/~/miniconda3/envs/llm/lib/python3.11/site-packages/requests/models.py:971), in Response.json(self, **kwargs)
970 try:
--> 971 return complexjson.loads(self.text, **kwargs)
972 except JSONDecodeError as e:
973 # Catch JSON-related errors and raise as requests.JSONDecodeError
974 # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
File [~/miniconda3/envs/llm/lib/python3.11/site-packages/simplejson/__init__.py:514](https://file+.vscode-resource.vscode-cdn.net/home/serhiy/Scalarr/llm/%20RAG/~/miniconda3/envs/llm/lib/python3.11/site-packages/simplejson/__init__.py:514), in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, use_decimal, allow_nan, **kw)
510 if (cls is None and encoding is None and object_hook is None and
511 parse_int is None and parse_float is None and
512 parse_constant is None and object_pairs_hook is None
513 and not use_decimal and not allow_nan and not kw):
--> 514 return _default_decoder.decode(s)
515 if cls is None:
File [~/miniconda3/envs/llm/lib/python3.11/site-packages/simplejson/decoder.py:389](https://file+.vscode-resource.vscode-cdn.net/home/serhiy/Scalarr/llm/%20RAG/~/miniconda3/envs/llm/lib/python3.11/site-packages/simplejson/decoder.py:389), in JSONDecoder.decode(self, s, _w, _PY3)
388 if end != len(s):
--> 389 raise JSONDecodeError("Extra data", s, end, len(s))
390 return obj
JSONDecodeError: Extra data: line 1 column 5 - line 1 column 19 (char 4 - 18)
During handling of the above exception, another exception occurred:
...
973 # Catch JSON-related errors and raise as requests.JSONDecodeError
974 # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
--> 975 raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
JSONDecodeError: Extra data: line 1 column 5 (char 4)
```
### Expected behavior
If I run Ollama from the terminal with Google Colab + ngrok, everything works. Also, if I change the Python script to the local base_url:
```
llm = Ollama(
model="run deepseek-coder:6.7b", base_url="http://localhost:11434")
responce = llm.predict('What do you know about Falco?')
```
everything works with Ollama + LangChain.
Only the combination Ollama + Langchain + Google Colab + ngrok does not work | Error Langchain + Ollama + Google Colab + ngrok | https://api.github.com/repos/langchain-ai/langchain/issues/14810/comments | 2 | 2023-12-17T07:36:37Z | 2024-05-07T16:07:38Z | https://github.com/langchain-ai/langchain/issues/14810 | 2,045,111,414 | 14,810 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```python
# Create the agent with LCEL
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)
llm_with_tools = self.llm.bind(functions=[format_tool_to_openai_function(t) for t in self.tools])
agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
    }
    | prompt
    | llm_with_tools
    | OpenAIFunctionsAgentOutputParser()
)
# Create the AgentExecutor and invoke it
agent_executor = AgentExecutor(
    agent=agent,
    tools=self.tools,
    verbose=True,
    return_intermediate_steps=True,
    handle_parsing_errors=True,
)
try:
    response = agent_executor.invoke(
        {
            "input": message
        }
    )
```
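For illustration, a rough sketch of one way to thread memory into this LCEL agent (untested; it reuses the names from the snippet above and assumes a `ConversationBufferMemory`):
```python
from langchain.memory import ConversationBufferMemory
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder(variable_name="chat_history"),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

agent = (
    {
        "input": lambda x: x["input"],
        # Pull the stored messages out of the memory object on every call.
        "chat_history": lambda x: memory.load_memory_variables(x)["chat_history"],
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
    }
    | prompt
    | llm_with_tools
    | OpenAIFunctionsAgentOutputParser()
)
```
After each `agent_executor.invoke(...)` call, something like `memory.save_context({"input": message}, {"output": response["output"]})` would record the turn so the next invocation sees it.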
### Suggestion:
_No response_ | how to add LCEL memory | OpenAIFunctionsAgentOutputParser? | https://api.github.com/repos/langchain-ai/langchain/issues/14809/comments | 2 | 2023-12-17T02:51:49Z | 2024-03-24T16:07:14Z | https://github.com/langchain-ai/langchain/issues/14809 | 2,045,031,789 | 14,809 |
[
"hwchase17",
"langchain"
] | ### Can Agents' tools fetch sensitive information?
I'm currently trying to create a custom tool which hits a private API and retrieves personal information about the user. The tool accepts an email via its args_schema; however, I keep getting this error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/fprin/miniconda3/lib/python3.11/site-packages/langchain/chains/base.py", line 507, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/fprin/miniconda3/lib/python3.11/site-packages/langchain/chains/base.py", line 312, in __call__
raise e
File "/Users/fprin/miniconda3/lib/python3.11/site-packages/langchain/chains/base.py", line 306, in __call__
self._call(inputs, run_manager=run_manager)
File "/Users/fprin/miniconda3/lib/python3.11/site-packages/langchain/agents/agent.py", line 1312, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/fprin/miniconda3/lib/python3.11/site-packages/langchain/agents/agent.py", line 1038, in _take_next_step
[
File "/Users/fprin/miniconda3/lib/python3.11/site-packages/langchain/agents/agent.py", line 1038, in <listcomp>
[
File "/Users/fprin/miniconda3/lib/python3.11/site-packages/langchain/agents/agent.py", line 1077, in _iter_next_step
raise ValueError(
ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: ` Of course, I'm happy to help you with that! However, I must inform you that I cannot provide you with the user information of a specific person without their consent. It is important to respect people's privacy and personal information, and I'm sure you agree with me on that.
Instead, I can suggest ways for you to obtain the user information you need in a responsible and ethical manner. For example, if you have a legitimate reason for needing to contact this person, such as for a business or professional purpose, you could try reaching out to`
```
Notice that I'm using a pre-trained model which is running on my local computer. My code looks something like:
```python
llm = LlamaCpp(
model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",
# n_gpu_layers=1,
# n_batch=512,
# n_ctx=2048,
# f16_kv=True,
verbose=False, # True
)
tools = [UserInformationTool()]
model = Llama2Chat(llm=llm)
PREFIX = """You're very powerful assistant that uses the following tools to help developers:
- user_information: It should be called if information of a user is requested.
Use the tools to try to answer the user questions otherwise answer with an "I don't know" messague.
"""
prompt = ZeroShotAgent.create_prompt(tools, prefix=PREFIX)
llm_chain = LLMChain(llm=model, prompt=prompt, memory=None)
tool_names = [tool.name for tool in tools]
agent = AgentExecutor(
agent=ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names),
tools=tools,
verbose=True,
# prompt_template=prompt,
# handle_parsing_errors=True
)
question = "who is the manager of the user with email: [email protected]"
agent.run(question)
```
### Suggestion:
_No response_ | Can Agents' tools fetch sensitive information? | https://api.github.com/repos/langchain-ai/langchain/issues/14807/comments | 6 | 2023-12-17T02:27:53Z | 2023-12-17T21:14:33Z | https://github.com/langchain-ai/langchain/issues/14807 | 2,045,027,468 | 14,807 |
[
"hwchase17",
"langchain"
] | ### System Info
#### Virtualenv
Python: 3.11.6
Implementation: CPython
Path: /Users/max/Library/Caches/pypoetry/virtualenvs/qa-oj4cEcx_-py3.11
Executable: /Users/max/Library/Caches/pypoetry/virtualenvs/qa-oj4cEcx_-py3.11/bin/python
Valid: True
#### System
Platform: darwin
OS: posix
Python: 3.11.6
Path: /usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.11
Executable: /usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.11/bin/python3.11
### Who can help?
@eyurtsev @baskaryan (https://github.com/langchain-ai/langchain/pull/14463)
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain_community.document_loaders import UnstructuredRSTLoader
loader = UnstructuredRSTLoader("example.rst", mode="elements", strategy="fast")
docs = loader.load()
```
In this case, "example.rst" is the downloaded rst from the lanchain source itself.
### Expected behavior
I would expect the document loader to return a list of documents. Instead, there is an error referencing a module:
```
File ".../virtualenvs/qa-oj4cEcx_-py3.11/lib/python3.11/site-packages/langchain_community/document_loaders/unstructured.py", line 14, in satisfies_min_unstructured_version
from unstructured.__version__ import __version__ as __unstructured_version__
ModuleNotFoundError: No module named 'unstructured'
``` | ModuleNotFound error in using UnstructuredRSTLoader | https://api.github.com/repos/langchain-ai/langchain/issues/14801/comments | 4 | 2023-12-16T19:24:16Z | 2023-12-16T21:52:33Z | https://github.com/langchain-ai/langchain/issues/14801 | 2,044,910,834 | 14,801 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```python
class SimpleChat:
def __init__(self) -> None:
self.llm = ChatOpenAI(
temperature=0,
model="gpt-4-0613",
openai_api_key="sk-",
openai_api_base = "https://---",
)
self.memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
def get_tools(self):
return [
Tool(
name="Search",
func = Tooluse().get_google().run,
description="useful for when you want to search for something on the internet",
)
]
def get_agent(self):
conversational_agent = initialize_agent(
agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
tools=self.get_tools(),
llm=self.llm,
verbose=False,
memory=self.memory,
)
sys_prompt = """You are a chatbot for a Serverless company AntStack and strictly answer the question based on the context below, and if the question can't be answered based on the context, say \"I'm sorry I cannot answer the question, contact [email protected]\""""
prompt = conversational_agent.agent.create_prompt(
system_message=sys_prompt,
tools=self.get_tools(),
)
conversational_agent.agent.llm_chain.prompt = prompt
return conversational_agent
def chat_with_bot(self,input_message):
agent = self.get_agent()
print("agent",agent)
response = agent.run(input_message)
return response
```
This is my code. Why is the memory not working? Can anyone help me? Thanks!
### Suggestion:
The LangChain documentation is frustrating to work with on this topic. | why the memory is not work? who can help me! | https://api.github.com/repos/langchain-ai/langchain/issues/14799/comments | 4 | 2023-12-16T17:52:51Z | 2023-12-17T12:41:48Z | https://github.com/langchain-ai/langchain/issues/14799 | 2,044,878,052 | 14,799 |
[
"hwchase17",
"langchain"
] | Hi @dosu-bot,
Check out my code below:
```
class CustomMessage(Base):
    __tablename__ = "custom_message_store"

    id = Column(Integer, primary_key=True, autoincrement=True)
    session_id = Column(Text)
    type = Column(Text)
    content = Column(Text)
    created_at = Column(DateTime)
    author_email = Column(Text)


class CustomMessageConverter(BaseMessageConverter):
    def __init__(self, author_email: str):
        self.author_email = author_email

    def from_sql_model(self, sql_message: Any) -> BaseMessage:
        if sql_message.type == "human":
            return HumanMessage(
                content=sql_message.content,
            )
        elif sql_message.type == "ai":
            return AIMessage(
                content=sql_message.content,
            )
        else:
            raise ValueError(f"Unknown message type: {sql_message.type}")

    def to_sql_model(self, message: BaseMessage, session_id: str) -> Any:
        now = datetime.now()
        return CustomMessage(
            session_id=session_id,
            type=message.type,
            content=message.content,
            created_at=now,
            author_email=self.author_email,
        )

    def get_sql_model_class(self) -> Any:
        return CustomMessage


chat_message_history = SQLChatMessageHistory(
    session_id="[email protected]",
    connection_string="mssql+pyodbc://User\SQLEXPRESS/db_name?driver=ODBC+Driver+17+for+SQL+Server",
    custom_message_converter=CustomMessageConverter(author_email="[email protected]"),
)
```
How do I make this code async?
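For illustration, one common workaround (an assumption on my part, since there is no native async path shown here): run the blocking calls in a worker thread so the event loop stays free.
```python
import asyncio

async def add_message_async(history, message) -> None:
    # `history` is the SQLChatMessageHistory built above; add_message is the
    # synchronous call that actually hits the database.
    await asyncio.to_thread(history.add_message, message)
```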
| How do i make my below code async of SQLAlchemy? | https://api.github.com/repos/langchain-ai/langchain/issues/14797/comments | 3 | 2023-12-16T15:56:21Z | 2024-03-29T16:07:15Z | https://github.com/langchain-ai/langchain/issues/14797 | 2,044,833,650 | 14,797 |
[
"hwchase17",
"langchain"
] | ### System Info
Macos
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# -*- coding: utf-8 -*-
from langchain.chat_models import MiniMaxChat
from langchain.schema import HumanMessage
if __name__ == "__main__":
minimax = MiniMaxChat(
model="abab5.5-chat",
minimax_api_key="****",
minimax_group_id="***",
)
resp = minimax(
[
HumanMessage(content="hello"),
]
)
print(resp)
```
Error info:
```
Traceback (most recent call last):
File "***", line 17, in <module>
resp = minimax(
File "/opt/homebrew/Caskroom/miniconda/base/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 634, in __call__
generation = self.generate(
File "/opt/homebrew/Caskroom/miniconda/base/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 381, in generate
flattened_outputs = [
File "/opt/homebrew/Caskroom/miniconda/base/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 382, in <listcomp>
LLMResult(generations=[res.generations], llm_output=res.llm_output)
AttributeError: 'str' object has no attribute 'generations'
```
### Expected behavior
no error | MiniMaxChat:AttributeError: 'str' object has no attribute 'generations' | https://api.github.com/repos/langchain-ai/langchain/issues/14796/comments | 4 | 2023-12-16T15:04:15Z | 2024-03-28T16:07:28Z | https://github.com/langchain-ai/langchain/issues/14796 | 2,044,779,282 | 14,796 |
[
"hwchase17",
"langchain"
] | Hi @dossubot, Greetings!
Here is the below code; I am trying to make it asynchronous, but I'm getting this error:
`KeyError: 'chat_history'`
| Asynchronous call on ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/14795/comments | 7 | 2023-12-16T13:05:41Z | 2023-12-22T09:59:27Z | https://github.com/langchain-ai/langchain/issues/14795 | 2,044,743,184 | 14,795 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Below is my code:
```python
with sync_playwright() as p:
    browser = p.chromium.launch()
    navigate_tool = NavigateTool(sync_browser=browser)
    extract_hyperlinks_tool = ExtractHyperlinksTool(sync_browser=browser)
    for url in urls:
        print(url, "url is ----------------------")
        navigate_tool._arun(url)
        print(navigate_tool._arun(url))
        hyperlinks = extract_hyperlinks_tool._arun()
        for link in hyperlinks:
            print(link, "link is ------------------------------------------")
```
And I am getting this error:
```
<coroutine object NavigateTool._arun at 0x7f0ab738f0c0>
/home/hs/CustomBot/accounts/common_langcain_qa.py:122: RuntimeWarning: coroutine 'NavigateTool._arun' was never awaited
print(navigate_tool._arun(url))
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Internal Server Error: /create-project/
Traceback (most recent call last):
File "/home/hs/env/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/home/hs/env/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/hs/env/lib/python3.8/site-packages/django/views/generic/base.py", line 69, in view
return self.dispatch(request, *args, **kwargs)
File "/home/hs/env/lib/python3.8/site-packages/django/views/generic/base.py", line 101, in dispatch
return handler(request, *args, **kwargs)
File "/home/hs/CustomBot/user_projects/views.py", line 1776, in post
file_crawl_status, file_index_status = generate_embeddings(
File "/home/hs/CustomBot/accounts/common_langcain_qa.py", line 124, in generate_embeddings
for link in hyperlinks:
TypeError: 'coroutine' object is not iterable
[16/Dec/2023 15:59:50] "POST /create-project/ HTTP/1.1" 500 89444
/usr/lib/python3.8/pathlib.py:755: RuntimeWarning: coroutine 'ExtractHyperlinksTool._arun' was never awaited
return self._cached_cparts
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
```
### Suggestion:
_No response_ | Issue: how to fetch sub url using langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14792/comments | 1 | 2023-12-16T10:43:00Z | 2024-03-23T16:07:05Z | https://github.com/langchain-ai/langchain/issues/14792 | 2,044,702,405 | 14,792 |
[
"hwchase17",
"langchain"
] | ### System Info
Hi,
I am trying to save the ParentDocumentRetriever after adding document data to it. Since there are a lot of documents that need to be added, it's not possible to do it every time. I tried saving via pickle but got the error: TypeError: cannot pickle 'sqlite3.Connection' object
Is there any way to save the retriever and load it at inference time?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
    search_kwargs={"k": 3}
)
retriever.add_documents(docs, ids=None)

import pickle
with open('ParentDocumentRetriever.pkl', 'wb') as f:
    pickle.dump(retriever, f, protocol=pickle.HIGHEST_PROTOCOL)
```
### Expected behavior
Provide a save to local option for the retreiver with added data | ParentDocumentRetriever does not have any save to local option | https://api.github.com/repos/langchain-ai/langchain/issues/14777/comments | 1 | 2023-12-15T20:51:06Z | 2024-03-22T16:07:11Z | https://github.com/langchain-ai/langchain/issues/14777 | 2,044,354,117 | 14,777 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.10
LangChain 0.0.348
### Who can help?
@hwchase17 @hin
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chat_models import ChatOpenAI
from langchain.memory import ChatMessageHistory
history = ChatMessageHistory()
history.add_user_message("Tell me anything in ten words or less.")
params = {
"model": "gpt-4-1106-preview",
"temperature": 0.5,
"max_tokens": 1000,
}
llm = ChatOpenAI(**params)
print(llm(history.messages))
```
This will **occasionally** yield the following error:
```
File "/.../.venv/lib/python3.10/site-packages/langchain/adapters/openai.py", line 74, in convert_dict_to_message
role = _dict["role"]
KeyError: 'role'
```
### Expected behavior
* Safe lookups ensure the response is passed back even if the role isn't attached | KeyError 'role' in OpenAI Adapter | https://api.github.com/repos/langchain-ai/langchain/issues/14764/comments | 3 | 2023-12-15T14:57:13Z | 2024-01-06T01:49:24Z | https://github.com/langchain-ai/langchain/issues/14764 | 2,043,873,642 | 14,764 |
[
"hwchase17",
"langchain"
] | ### Feature request
Add support for the new service from Mistral AI.
They provide a [python client](https://docs.mistral.ai/platform/client) with streaming tokens, or we can use the simple [REST API](https://docs.mistral.ai/).
### Motivation
It would be great if we added the new service from Mistral!
### Your contribution
I don't have much time right now, but I'd like to follow the implementation in the future. | Add support for Mistral AI service | https://api.github.com/repos/langchain-ai/langchain/issues/14763/comments | 2 | 2023-12-15T14:05:31Z | 2023-12-19T19:31:27Z | https://github.com/langchain-ai/langchain/issues/14763 | 2,043,790,550 | 14,763 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I've got this code:
```python
llm = HuggingFaceHub(repo_id="mistralai/Mistral-7B-Instruct-v0.1",
model_kwargs={"temperature": 0.01, "max_length": 4096, "max_new_tokens": 2048})
# Vectorstore
vectorstore = Chroma(
embedding_function=HuggingFaceEmbeddings(
model_name="all-MiniLM-L6-v2"), persist_directory="./chroma_db_oai"
)
search = DuckDuckGoSearchAPIWrapper(max_results=max_num_results,
region="jp-ja",
time="d")
user_input = "Which are the most demanded jobs for foreigner people that don't speak Japanese?"
web_research_retriever = WebResearchRetriever.from_llm(
vectorstore=vectorstore,
llm=llm,
search=search)
```
but I get this error:
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[44], line 15
12 search = DuckDuckGoSearchAPIWrapper()
13 user_input = "Which are the most demanded jobs for foreigner people that don't speak Japanese?"
---> 15 web_research_retriever = WebResearchRetriever.from_llm(
16 vectorstore=vectorstore,
17 llm=llm,
18 search=search)
20 # Run
21 docs = web_research_retriever.get_relevant_documents(user_input)
File ~/anaconda3/envs/jobharbor/lib/python3.10/site-packages/langchain/retrievers/web_research.py:130, in WebResearchRetriever.from_llm(cls, vectorstore, llm, search, prompt, num_search_results, text_splitter)
123 # Use chat model prompt
124 llm_chain = LLMChain(
125 llm=llm,
126 prompt=prompt,
127 output_parser=QuestionListOutputParser(),
128 )
--> 130 return cls(
131 vectorstore=vectorstore,
132 llm_chain=llm_chain,
133 search=search,
134 num_search_results=num_search_results,
135 text_splitter=text_splitter,
136 )
File ~/anaconda3/envs/jobharbor/lib/python3.10/site-packages/langchain_core/load/serializable.py:97, in Serializable.__init__(self, **kwargs)
96 def __init__(self, **kwargs: Any) -> None:
---> 97 super().__init__(**kwargs)
98 self._lc_kwargs = kwargs
File ~/anaconda3/envs/jobharbor/lib/python3.10/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 6 validation errors for WebResearchRetriever
search -> backend
extra fields not permitted (type=value_error.extra)
search -> max_results
extra fields not permitted (type=value_error.extra)
search -> region
extra fields not permitted (type=value_error.extra)
search -> safesearch
extra fields not permitted (type=value_error.extra)
search -> source
extra fields not permitted (type=value_error.extra)
search -> time
extra fields not permitted (type=value_error.extra)
```
I wonder if the WebResearchRetriever also works with non-Google search engines.
Thank you in advance
### Idea or request for content:
_No response_ | DOC: WebResearchRetriever can work with DuckDuckGo search? | https://api.github.com/repos/langchain-ai/langchain/issues/14762/comments | 1 | 2023-12-15T14:03:23Z | 2023-12-15T21:10:05Z | https://github.com/langchain-ai/langchain/issues/14762 | 2,043,786,075 | 14,762 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain: 0.0.39
python: 3.11
OS: MacOS Sonoma 14.2
### Who can help?
@hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Load two collections using PGVector like this:
```python
collection1 = PGVector.from_existing_index(
    embedding=embeddings,
    collection_name="collection1",
    pre_delete_collection=False,
    distance_strategy=DistanceStrategy.COSINE,
    connection_string=CONNECTION_STRING,
)
collection2 = PGVector.from_existing_index(
    embedding=embeddings,
    collection_name="collection1",
    pre_delete_collection=False,
    distance_strategy=DistanceStrategy.COSINE,
    connection_string=CONNECTION_STRING,
)
```
`collection1` works fine, but while initializing collection2, I get the error
`Table 'langchain_pg_collection' is already defined for this MetaData instance. Specify 'extend_existing=True'`

Setting `extend_existing=True` doesn't seem to work - I think this is a SQLAlchemy error.
### Expected behavior
The above code should just work without error.
This was working in previous versions: in 0.0.310 this code works, but it shows an error in 0.0.390.
If there are multiple connections happening to the index, then the documentation doesn't talk about how to handle multiple connections. | Table 'langchain_pg_collection' is already defined for this MetaData instance. Specify 'extend_existing=True' | https://api.github.com/repos/langchain-ai/langchain/issues/14760/comments | 4 | 2023-12-15T13:29:34Z | 2024-01-03T09:19:59Z | https://github.com/langchain-ai/langchain/issues/14760 | 2,043,732,734 | 14,760 |
[
"hwchase17",
"langchain"
] | ### System Info
```
Name                    Version    Build                Channel
langchain               0.0.350    pypi_0               pypi
langchain-cli           0.0.19     pypi_0               pypi
langchain-community     0.0.3      pypi_0               pypi
langchain-core          0.1.1      pypi_0               pypi
langchain-experimental  0.0.47     pypi_0               pypi
python                  3.12.0     h47c9636_0_cpython   conda-forge
```
System: macOS 14.2 (Apple M1 chip)
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Go to https://python.langchain.com/docs/get_started/quickstart
2. Do all the package installations as guided.
3. Copy the python code under the section 'Serving with Langserve' and save it to a file `serve.py`
4. Execute the file `python serve.py`
5. Open `http://localhost:8000` in browser
### Expected behavior
Expected to see Langserve UI. Got the following error instead.
---
LANGSERVE: Playground for chain "/category_chain/" is live at:
LANGSERVE: │
LANGSERVE: └──> /category_chain/playground/
LANGSERVE:
LANGSERVE: See all available routes at /docs/
LANGSERVE: ⚠️ Using pydantic 2.5.2. OpenAPI docs for invoke, batch, stream, stream_log endpoints will not be generated. API endpoints and playground should work as expected. If you need to see the docs, you can downgrade to pydantic 1. For example, `pip install pydantic==1.10.13`. See https://github.com/tiangolo/fastapi/issues/10360 for details.
INFO: Application startup complete.
INFO: Uvicorn running on http://localhost:8000 (Press CTRL+C to quit)
INFO: ::1:58516 - "GET / HTTP/1.1" 404 Not Found
INFO: ::1:58516 - "GET /favicon.ico HTTP/1.1" 404 Not Found
^CINFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [91610] | Langserve example from Quickstart tutorial not working | https://api.github.com/repos/langchain-ai/langchain/issues/14757/comments | 10 | 2023-12-15T12:00:17Z | 2024-07-23T23:23:06Z | https://github.com/langchain-ai/langchain/issues/14757 | 2,043,603,478 | 14,757 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm currently working on a project where I need to fetch all the sub-URLs from a website using Langchain.
How can we achieve this? Below is my code:
```python
from langchain.tools.playwright import ExtractHyperlinksTool, NavigateTool

# Initialize the tools
navigate_tool = NavigateTool()
extract_hyperlinks_tool = ExtractHyperlinksTool()

# Navigate to the website
navigate_tool.navigate("https://www.example.com")

# Extract all hyperlinks
hyperlinks = extract_hyperlinks_tool.extract()

# Print all hyperlinks
for link in hyperlinks:
    print(link)
```
I am getting the error mentioned below:
```
File "/home/hs/env/lib/python3.8/site-packages/langchain_core/load/serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for NavigateTool
__root__
Either async_browser or sync_browser must be specified. (type=value_error)
```
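For illustration, a short sketch of what the validator is asking for (untested; assumes the helper in `langchain.tools.playwright.utils`): a browser has to be created first and passed in.
```python
from langchain.tools.playwright import ExtractHyperlinksTool, NavigateTool
from langchain.tools.playwright.utils import create_sync_playwright_browser

sync_browser = create_sync_playwright_browser()
navigate_tool = NavigateTool(sync_browser=sync_browser)
extract_hyperlinks_tool = ExtractHyperlinksTool(sync_browser=sync_browser)
```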
### Suggestion:
_No response_ | Issue: I'm currently working on a project where I need to fetch all the sub-URLs from a website using Langchain. | https://api.github.com/repos/langchain-ai/langchain/issues/14754/comments | 4 | 2023-12-15T10:52:12Z | 2024-03-23T16:07:00Z | https://github.com/langchain-ai/langchain/issues/14754 | 2,043,485,623 | 14,754 |
[
"hwchase17",
"langchain"
] | ### System Info
- Python 3.11.7
- Windows 64bit
- langchain_google_genai==0.0.4
- langchain==0.0.350
- pymongo==4.6.0
### Who can help?
@sbusso
@jarib
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Install langchain_google_genai using `pip install -U langchain-google-genai`
2. Then, `from langchain_google_genai import GoogleGenerativeAI`
3. This will produce an error.
### Expected behavior
I want to use this model for Google AI, but I am not able to access it. | ImportError: cannot import name 'GoogleGenerativeAI' from 'langchain_google_genai' ( from langchain_google_genai import GoogleGenerativeAI ) | https://api.github.com/repos/langchain-ai/langchain/issues/14753/comments | 4 | 2023-12-15T10:01:36Z | 2024-03-28T14:49:52Z | https://github.com/langchain-ai/langchain/issues/14753 | 2,043,356,756 | 14,753 |
[
"hwchase17",
"langchain"
] | ### Feature request
All chains inherit from the `Runnable` class, which has a `stream` method, but no chain has its own `stream` implementation. That's why we have to pass a callback handler every time we work with streaming and write additional logic to handle the stream. There should be functionality where we can just call `stream` and it returns a generator, the same as a chat model (ChatOpenAI) does.
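For illustration, a short sketch of the kind of usage being requested (the names are placeholders, and this shows the desired behavior rather than what chains currently do):
```python
# Iterate over token-level chunks directly, without a callback handler.
for chunk in qa_chain.stream({"question": question, "chat_history": []}):
    print(chunk, end="", flush=True)
```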
### Motivation
I believe Chain is the core of LangChain, and streaming is very frustrating to work with right now.
Every time I work with streaming I have to write something like the code below. I have to run the QA task in the background using a task or thread, which is not suitable when using a framework; the framework should handle this. New developers will struggle if they need to handle this additional logic even though they are using a framework.
```
class StreamingLLMCallbackHandler(BaseCallbackHandler):
    """Callback handler for streaming LLM responses to a queue."""

    def __init__(self):
        self._is_done = False
        self._queue = Queue()

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        """Run on new LLM token. Only available when streaming is enabled."""
        # print(token)
        self._queue.put(EventData(content=token))

    def on_llm_end(self, response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) -> Any:
        """Run when LLM ends running."""
        self._queue.put(EventData(content=None, finish_reason='done'))
        self._is_done = True

    def add_new_token_to_stream(self, data: EventData, is_done=True):
        self._queue.put(data)
        self._is_done = is_done

    @property
    def stream_gen(self):
        while not self._is_done or not self._queue.empty():
            try:
                delta: EventData = self._queue.get()
                if delta.data.get('finish_reason') == 'error':
                    yield str(StreamData(event_data=delta, event=EnumStreamEventType.ERROR))
                else:
                    yield str(StreamData(event_data=delta))
            except Empty:
                continue


class LangChainChatService(ChatBaseService):
    def __init__(self, model: LangchainChatModel, tool_query=None):
        super().__init__()
        self.model = model
        self.request_manager = api_request_manager_var.get()
        self.tool_query = tool_query

    def _qa_task(self, streaming_handler: StreamingLLMCallbackHandler, qa: BaseConversationalRetrievalChain, formatted_chat_history: list[BaseMessage]):
        try:
            question = self.tool_query or self.model.query
            answer = qa.run(question=question, chat_history=formatted_chat_history)
            self._publish_chat_history(answer)
        except Exception as ex:
            streaming_handler.add_new_token_to_stream(EventData(content=get_user_message_on_exception(ex), error=build_error_details(ex), finish_reason='error'))
            logger.exception(ex)

    async def stream_chat_async(self):
        streaming_handler = StreamingLLMCallbackHandler()
        try:
            formatted_chat_history = [] if self.tool_query else self.get_formatted_chat_history(self.model.chat_history)
            qa = self._get_qa_chain(callbacks=[streaming_handler])
            asyncio.create_task(asyncio.to_thread(self._qa_task, streaming_handler, qa, formatted_chat_history))
            return streaming_handler.stream_gen
        except Exception as ex:
            logger.exception(ex)
            streaming_handler.add_new_token_to_stream(EventData(content=get_user_message_on_exception(ex), error=build_error_details(ex), finish_reason='error'))
            return streaming_handler.stream_gen
```
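For comparison, LCEL-composed runnables already behave the way I am asking for — a minimal sketch (the model and prompt are placeholders, and this does not cover `LLMChain`/`ConversationalRetrievalChain` today):
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Tell me about {topic}")
llm = ChatOpenAI(streaming=True)
chain = prompt | llm  # Runnable composition

# stream() returns a generator of chunks -- no callback handler or background thread needed
for chunk in chain.stream({"topic": "LangChain"}):
    print(chunk.content, end="", flush=True)
```
I would like the classic chain classes to offer the same ergonomics.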
### Your contribution
N/A | Every chain (LLMChain, ConversationalRetrievalChain etc) should return stream without CallbackHandler same as llm ChatModel | https://api.github.com/repos/langchain-ai/langchain/issues/14752/comments | 4 | 2023-12-15T06:56:26Z | 2024-02-29T05:31:10Z | https://github.com/langchain-ai/langchain/issues/14752 | 2,043,013,698 | 14,752 |
[
"hwchase17",
"langchain"
] | ### System Info
ImportError: cannot import name 'AzureChatopenAI' from 'langchain.chat_models'
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.chat_models import AzureChatopenAI
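For reference, the class appears to be exported as `AzureChatOpenAI` (capital "O"), so the import below may be what was intended — this is my assumption about the intended spelling, not a confirmed fix:
```python
from langchain.chat_models import AzureChatOpenAI  # note the capital "O" in "OpenAI"
```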
### Expected behavior
from langchain.chat_models import AzureChatopenAI
llm = ChatOpenAI(
    openai_api_base=url,
    openai_api_version="2023-05-15",
    deployment_name="gpt-3.5",
    openai_api_key="test123456",
    openai_api_type="azure",
) | cannot import name 'AzureChatopenAI' from 'langchain.chat_models' | https://api.github.com/repos/langchain-ai/langchain/issues/14751/comments | 1 | 2023-12-15T06:03:38Z | 2023-12-15T06:11:27Z | https://github.com/langchain-ai/langchain/issues/14751 | 2,042,960,303 | 14,751 |
[
"hwchase17",
"langchain"
] | ### Feature request
I want to produce structured output with LangChain. My LLM is ChatGLM, but most output parser demos rely on OpenAI.
### Motivation
retriever = db.as_retriever()
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type='stuff',
    retriever=retriever,
    verbose=True
)

def chat(question, history):
    response = qa.run(question)
    return response
demo = gr.ChatInterface(chat)
demo.launch(inbrowser=True)
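For reference, output parsers only post-process the model's text, so they should not be tied to OpenAI. A rough sketch of what I would like to do (assumptions: the community `ChatGLM` LLM wrapper is available, and the endpoint URL is a placeholder):
```python
from langchain.llms import ChatGLM
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import PromptTemplate

schemas = [ResponseSchema(name="answer", description="the answer to the question")]
parser = StructuredOutputParser.from_response_schemas(schemas)

prompt = PromptTemplate(
    template="Answer the question.\n{format_instructions}\n{question}",
    input_variables=["question"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

llm = ChatGLM(endpoint_url="http://127.0.0.1:8000")  # placeholder endpoint
raw_output = llm(prompt.format(question="What is LangChain?"))
structured = parser.parse(raw_output)  # dict like {"answer": "..."}
```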
### Your contribution
I want to produce structured output with LangChain; my LLM is ChatGLM, but most output parser demos rely on OpenAI. | adapt chatglm output_parser | https://api.github.com/repos/langchain-ai/langchain/issues/14750/comments | 1 | 2023-12-15T05:55:16Z | 2024-03-22T16:06:51Z | https://github.com/langchain-ai/langchain/issues/14750 | 2,042,951,433 | 14,750
[
"hwchase17",
"langchain"
] | Hello Langchain Team,
I've been working with the `create_vectorstore_router_agent` function, particularly in conjunction with the `VectorStoreRouterToolkit`, and I've encountered a limitation that I believe could be an important area for enhancement.
Currently, the response from this function primarily includes the final processed answer to a query. However, it does not provide any details about the source documents or the similarity search performed within the vector store. In many applications, especially those that require a high degree of transparency and traceability of information, having access to the source documents is crucial.
The ability to see which documents were retrieved, along with their similarity scores or other relevant metadata, is invaluable. It helps in understanding the basis of the answers provided by the agent and is essential for verifying the relevance and accuracy of the information.
Therefore, I suggest enhancing the functionality of the toolkit to include an option to return detailed information about the retrieved documents in the response. This could be implemented as an optional feature that can be enabled as needed, depending on the specific requirements of the use case.
Such a feature would significantly enhance the utility and applicability of the Langchain library, particularly in scenarios where detailed source information is essential for validation, auditing, or explanatory purposes.
Thank you for considering this enhancement. I believe it would make a great addition to the capabilities of Langchain.
Best regards,
Sarath chennamsetty.
| Expose Search Similarity Results as Source Documents in create_vectorstore_router_agent Responses | https://api.github.com/repos/langchain-ai/langchain/issues/14744/comments | 1 | 2023-12-15T01:11:29Z | 2024-03-22T16:06:45Z | https://github.com/langchain-ai/langchain/issues/14744 | 2,042,729,395 | 14,744 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hugging Face will deprecate `InferenceApi` and move to `InferenceClient`. Can the langchain package update the `HuggingFaceHub` wrapper accordingly? Refer to the [migration docs](https://huggingface.co/docs/huggingface_hub/guides/inference#legacy-inferenceapi-client) to migrate to the new client.
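For illustration, a rough sketch of what the call site might look like on the new client (the model id, token, and parameters here are examples of mine, not the exact change being proposed):
```python
from huggingface_hub import InferenceClient

client = InferenceClient(model="google/flan-t5-xxl", token="hf_...")  # example model/token
text = client.text_generation("What is LangChain?", max_new_tokens=64)
print(text)
```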
### Suggestion:
_No response_ | Issue: Update the outdated hugging face client | https://api.github.com/repos/langchain-ai/langchain/issues/14741/comments | 1 | 2023-12-15T00:36:29Z | 2024-03-22T16:06:41Z | https://github.com/langchain-ai/langchain/issues/14741 | 2,042,704,569 | 14,741 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain Version - 0.0.348
OS or Platform Version - Windows 11
Python Version - 3.11.5
Conda Version - 23.7.4
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
## BRIEF
* [Issue 1] When using Google Drive Loader to load Google Docs, I encountered several errors following the official documentation. I ran my code on 3 different platforms - Windows, Kaggle Notebooks, and CodeSandbox (Linux). [Here is the code](https://python.langchain.com/docs/integrations/document_loaders/google_drive#instructions-for-ingesting-your-google-docs-data) that does not work.
* [Issue 2] However, after playing with the code for a while, I was able to successfully authenticate with Google and load the docs. But this brings another issue to our attention: the environment variable needs to be set, yet its value can be any string.
## REPRODUCTION STEPS
<details>
<summary>Pre requisites</summary>
1. Complete the [Prerequisites for the GoogleDriveLoader.](https://python.langchain.com/docs/integrations/document_loaders/google_drive#prerequisites)
2. Create a Google Docs file and copy its document ID. [How to find the document ID.](https://python.langchain.com/docs/integrations/document_loaders/google_drive#instructions-for-ingesting-your-google-docs-data).
3. Please read the section mentioned in point number 2.
4. Create an empty directory on Windows for this issue and place the `credentials.json` file received from Google Cloud there.
5. Create a main.py file in this directory to hold the three programs below.
6. I run the program with ```python main.py```
</details>
### FIRST TRY
Follow the documentation
```python
# Set the Path to the credentials and authentication token
credentials_path = "credentials.json"
token_path = "token.json"
# Create the loader object (no environment variable is set)
loader = GoogleDriveLoader(
    document_ids=["Your Document ID here"],
    credentials_path=credentials_path,
    token_path=token_path
)
# Call the loader, after the above code:
loader.load()
```
<details>
<summary>Output - Errors</summary>
Windows and CodeSandbox (Linux):
> raise exceptions.DefaultCredentialsError(_CLOUD_SDK_MISSING_CREDENTIALS)
google.auth.exceptions.DefaultCredentialsError: Your default credentials were not found. To set up Application Default Credentials, see https://cloud.google.com/docs/authentication/external/set-up-adc for more information.
Kaggle Notebooks:
> RefreshError: Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Engine metadata service. Compute Engine Metadata server unavailable
</details>
### SECOND TRY
The documentation mentions that if I get a RefreshError, I should not pass the credentials path in the constructor, but rather set it in the env var.
```python
# Set the env var. This path was declared before
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = credentials_path # change
loader = GoogleDriveLoader(
    document_ids=["Your Document ID here"],
    # credentials_path=credentials_path, # this one is removed
    token_path=token_path
)
```
<details>
<summary>Output - Errors</summary>
Kaggle Notebooks and CodeSandbox (Linux)
> FileNotFoundError: [Errno 2] No such file or directory: '/root/.credentials/credentials.json'
Windows
> FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Username\\.credentials\\credentials.json'
</details>
### THIRD TRY: MY APPROACH
The above error means that if we skip the credentials_path param in the constructor, it uses the default path and does not use the path provided in the env var.
Now, if I set the GOOGLE_APPLICATION_CREDENTIALS environment variable to ANY string and pass the path in the constructor:
```python
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "" # change: set the env var to any str
loader = GoogleDriveLoader(
    document_ids=["Your Document ID here"],
    credentials_path=credentials_path, # this one is added again
    token_path=token_path
)
```
<details>
<summary>Output - user authentication to load the gdoc</summary>
Windows and Kaggle Notebook:
> Please visit this URL to authorize this application: https://accounts.google.com/o...
CodeSandbox (Linux) 😂:
> webbrowser.Error: could not locate runnable browser
</details>
## CONCLUSION
* [Issue 1] The code provided in the documentation does not work. According to the docs:
* If the env var is **not set**, and the path is **passed** in the constructor, the code gives errors.
* If the env var is **set** as the path, and the path is **not passed** in constructor, the code gives errors.
* [Issue 2] The env var needs to be set, but its value is not important. As per my approach:
* If the env var is **set** as ANYTHING, and the path is **passed** in the constructor, the code works.
### Expected behavior
I do not know what to expect, because the code given in the official documentation is not working correctly.
Suggestion:
However, I would suggest that, if there is no need for the env var, maybe we can remove the step of setting it.
This will resolve the second issue. For the first issue, what remains is to correct the documentation. | Google Drive Loader: The code in the Official Doc does not work; and Setting environment variable GOOGLE_APPLICATION_CREDENTIALS is important, though its value is not | https://api.github.com/repos/langchain-ai/langchain/issues/14725/comments | 1 | 2023-12-14T18:16:28Z | 2024-03-21T16:06:52Z | https://github.com/langchain-ai/langchain/issues/14725 | 2,042,229,871 | 14,725
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.330
langchain-core==0.1.0
langchain-google-genai==0.0.3
python: 3.11
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain_google_genai import ChatGoogleGenerativeAI
llm = ChatGoogleGenerativeAI(
model="gemini-pro",
google_api_key=google_api_key,
)
return LLMChain(llm=llm, prompt=prompt, verbose=verbose)
```
Error:
```
pydantic.error_wrappers.ValidationError: 2 validation errors for LLMChain
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
```
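My working hypothesis (an assumption, not verified) is that the error comes from mixing an older `langchain` with the newer `langchain-core`-based Google package, so `LLMChain` checks against a different `Runnable` class. A sketch of the direction I am trying:
```python
# pip install -U langchain langchain-core langchain-google-genai   # assumption: aligning versions
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI

prompt = PromptTemplate.from_template("Answer briefly: {question}")
llm = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key=google_api_key)
chain = LLMChain(llm=llm, prompt=prompt)  # should validate once the versions line up (assumption)
```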
### Expected behavior
I expect LLMChain to run as usual with this new chat model. | `LLMChain` does not support `ChatGoogleGenerativeAI` | https://api.github.com/repos/langchain-ai/langchain/issues/14717/comments | 5 | 2023-12-14T14:59:49Z | 2023-12-16T10:53:50Z | https://github.com/langchain-ai/langchain/issues/14717 | 2,041,882,866 | 14,717 |
[
"hwchase17",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/14714
<div type='discussions-op-text'>
<sup>Originally posted by **ciliamadani** December 14, 2023</sup>
I'm currently developing a chatbot using LangChain and Ollama (llama2 7B model). My objective is to allow users to control the number of tokens generated by the language model (LLM).
In the Ollama documentation, I came across the parameter 'num_predict,' which seemingly serves this purpose. However, when using Ollama as a class from Langchain, I couldn't locate the same parameter. Consequently, I've been attempting to pass it as metadata. Unfortunately, even when setting this parameter to a low value, such as 50, the LLM continues to generate more tokens than expected.
I'm wondering if you have any insights on how I can effectively control the number of generated tokens when using Ollama as a Langchain class?
My current code
llm = Ollama(
    model="llama2:7b-chat-q4_0",
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
    temperature=temperature,
    repeat_penalty=1.19,
    top_p=top_p,
    repeat_last_n=-1,
    metadata={"num_predict": num_predict},
)
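For what it's worth, a short sketch of what I was hoping for (my assumption — I believe the LangChain `Ollama` wrapper accepts `num_predict` as a direct keyword in recent versions, but I have not confirmed which release added it):
```python
from langchain.llms import Ollama

llm = Ollama(model="llama2:7b-chat-q4_0", temperature=0.2, num_predict=50)  # num_predict passed directly (assumed)
print(llm("Summarize LangChain in two sentences."))
```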
Thank you.</div> | Ollama max tokens parameter | https://api.github.com/repos/langchain-ai/langchain/issues/14715/comments | 1 | 2023-12-14T14:58:07Z | 2024-03-21T16:06:47Z | https://github.com/langchain-ai/langchain/issues/14715 | 2,041,879,113 | 14,715 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi all,
I am trying to use LangChain with a Hugging Face model and embeddings.
I am new to LangChain, so any pointers are welcome.
```python
import os
os.environ['HUGGINGFACEHUB_API_TOKEN']=myToken
#required to avoid certificate issue
os.environ['CURL_CA_BUNDLE'] = ''
from langchain import PromptTemplate, HuggingFaceHub, LLMChain
#Build prompt
template = """
Question: {question}
Answer: Let's think of the best answer, with arguments."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(
    prompt=prompt,
    llm=HuggingFaceHub(
        repo_id="google/flan-t5-xxl",
        model_kwargs={"temperature": 0.5, "max_length": 64},
    ),
)
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("https://www.ofgem.gov.uk/sites/default/files/docs/2018/10/rec_v1.0_main_body.pdf")
pages = loader.load_and_split()
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
doc_embed = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
docsearch = Chroma.from_documents(pages, doc_embed)
qa_chain = RetrievalQA.from_chain_type(
    llm_chain,
    retriever=docsearch.as_retriever()
)
output = qa_chain.run("what is the retail energy code?")
```
I get the following error:
---> 89 return self(
90 input,
91 callbacks=config.get("callbacks"),
92 tags=config.get("tags"),
93 metadata=config.get("metadata"),
94 run_name=config.get("run_name"),
95 **kwargs,
96 )
TypeError: Chain.__call__() got an unexpected keyword argument 'stop'
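From the traceback, my best guess (an assumption, not verified against the library internals) is that `RetrievalQA.from_chain_type` expects a bare LLM rather than an `LLMChain`, so passing the chain makes it receive an unexpected `stop` keyword. A sketch of what I believe the intended usage looks like, reusing the objects above:
```python
from langchain import HuggingFaceHub
from langchain.chains import RetrievalQA

llm = HuggingFaceHub(
    repo_id="google/flan-t5-xxl",
    model_kwargs={"temperature": 0.5, "max_length": 64},
)

qa_chain = RetrievalQA.from_chain_type(
    llm,  # pass the LLM itself, not an LLMChain
    retriever=docsearch.as_retriever(),
)
output = qa_chain.run("what is the retail energy code?")
```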
### Suggestion:
_No response_ | TypeError: Chain.__call__() got an unexpected keyword argument 'stop' | https://api.github.com/repos/langchain-ai/langchain/issues/14712/comments | 1 | 2023-12-14T13:18:03Z | 2024-03-21T16:06:42Z | https://github.com/langchain-ai/langchain/issues/14712 | 2,041,683,363 | 14,712 |
[
"hwchase17",
"langchain"
] | ### System Info
Anaconda 23.7.2
Windows 11
Python 3.11.5
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Method 1
```
def generate_lecture(topic: str, context: str):
    template = """
As an accomplished university professor and expert in {topic}, your task is to develop an elaborate, exhaustive, and highly detailed lecture on the subject.
Remember to generate content ensuring both novice learners and advanced students can benefit from your expertise.
while leveraging the provided context
Context: {context} """
    prompt = ChatPromptTemplate.from_template(template)
    model = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key=palm_api_key)
    response = model.invoke(template)
    return response.content
```
Method 2
```
def generate_lecture(topic: str, context: str):
    template = """
As an accomplished university professor and expert in {topic}, your task is to develop an elaborate, exhaustive, and highly detailed lecture on the subject.
Remember to generate content ensuring both novice learners and advanced students can benefit from your expertise.
while leveraging the provided context
Context: {context} """
    prompt = ChatPromptTemplate.from_template(template)
    model = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key=palm_api_key)
    chain = prompt | model | StrOutputParser()
    response = chain.invoke({"topic": topic, "context": context})
    return response
```
The result we receive on execution of method 1 and method 2 is:
```
File "C:\Users\ibokk\RobotForge\mvp\service\llm.py", line 80, in generate_lecture
model = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key=palm_api_key)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ibokk\anaconda3\envs\robotforge\Lib\site-packages\langchain_core\load\serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "pydantic\main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic\main.py", line 1102, in pydantic.main.validate_model
File "C:\Users\ibokk\anaconda3\envs\robotforge\Lib\site-packages\langchain_google_genai\chat_models.py", line 502, in validate_environment
values["_generative_model"] = genai.GenerativeModel(model_name=model)
^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'google.generativeai' has no attribute 'GenerativeModel
```
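For context, my working hypothesis (unverified) is that `GenerativeModel` only exists in newer releases of the `google-generativeai` SDK, so an older installed version would explain the attribute error. A quick check-and-upgrade sketch:
```python
# pip install -U google-generativeai   # assumption: a newer SDK provides GenerativeModel
import google.generativeai as genai

print(genai.__version__)                  # confirm which SDK version is installed
print(hasattr(genai, "GenerativeModel"))  # should be True after the upgrade (assumption)
```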
### Expected behavior
A poor result would be randomly generated text that is not relevant to the prompt provided. An excellent result would be generated text relevant to the template provided.
| 'google.generativeai' has no attribute 'GenerativeModel' | https://api.github.com/repos/langchain-ai/langchain/issues/14711/comments | 20 | 2023-12-14T13:11:09Z | 2024-08-02T08:51:51Z | https://github.com/langchain-ai/langchain/issues/14711 | 2,041,671,633 | 14,711 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.350
langchain-community==0.0.3
langchain-core==0.1.0
langchain-google-genai==0.0.3
google-ai-generativelanguage==0.4.0
google-api-core==2.15.0
google-auth==2.25.2
google-generativeai==0.3.1
googleapis-common-protos==1.62.0
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
class LangChainChatService(ChatBaseService):
    def __init__(self, model: LangchainChatModel, tool_query=None):
        super().__init__()
        self.model = model
        self.request_manager = api_request_manager_var.get()
        self.tool_query = tool_query

    async def http_chat_async(self) -> dict:
        formatted_chat_history = [] if self.tool_query else self.get_formatted_chat_history(self.model.chat_history)
        qa = self._get_qa_chain()
        qa.return_source_documents = self.model.return_source_documents
        qa.return_generated_question = True
        query_start = time.time()
        question = self.tool_query or self.model.query
        qa_response = await qa.ainvoke({"question": question, "chat_history": formatted_chat_history})
        query_end = time.time()
        result = {
            'query_result': qa_response.get("answer"),
            'query_time': int((query_end - query_start) * 1000),
            'generated_question': qa_response.get('generated_question'),
            'source_documents': [document.__dict__ for document in qa_response.get("source_documents", [])],
        }
        self._publish_chat_history(result)
        return result

    def _get_qa_chain(self, callbacks: Callbacks = None) -> BaseConversationalRetrievalChain:
        collection_name = get_langchain_collection_name(self.model.client_id)
        connection_args = {"host": AppConfig.vector_db_host, "port": AppConfig.vector_db_port}
        embeddings = LLMSelector(self.model).get_embeddings()
        vector_store = Milvus(embeddings, collection_name=collection_name, connection_args=connection_args)
        expression = get_expression_to_fetch_db_text_from_ids(**self.model.model_dump())
        # Instance of ChatGoogleGenerativeAI
        qa_llm = LLMSelector(self.model).get_language_model(streaming=self.model.stream_response, callbacks=callbacks)
        condense_question_llm = LLMSelector(self.model).get_language_model()
        qa = ConversationalRetrievalChain.from_llm(
            llm=qa_llm,
            retriever=vector_store.as_retriever(search_type="similarity", search_kwargs={"k": self.model.similarity_top_k, 'expr': expression}),
            condense_question_llm=condense_question_llm,
            verbose=True
        )
        return qa
```
The code uses this prompt from the langchain code base:
```
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
Helpful Answer:"""
QA_PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
```
I see that, after fetching the context documents from the vector database, LangChain generates a `SystemMessage` along with a `HumanMessage` using the above prompt, and there is a validation in `ChatGoogleGenerativeAI` that rejects system messages.
This is the likely reason for the following error.
`langchain_google_genai.chat_models.ChatGoogleGenerativeAIError: Message of 'system' type not supported by Gemini. Please only provide it with Human or AI (user/assistant) messages.`
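As a possible stop-gap (an assumption on my part — I believe newer `langchain-google-genai` releases expose a flag that folds the system message into the human turn, but I have not pinned down the exact version):
```python
from langchain_google_genai import ChatGoogleGenerativeAI

qa_llm = ChatGoogleGenerativeAI(
    model="gemini-pro",
    convert_system_message_to_human=True,  # assumed flag that merges the SystemMessage into the first HumanMessage
)
```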
### Expected behavior
As I didn't provide any SystemMessage, ChatGoogleGenerativeAI should work without exception.
This bug is related to Langchain. | Raise "Message of 'system' type not supported by Gemini" exception by ChatGoogleGenerativeAI with ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/14710/comments | 7 | 2023-12-14T11:41:27Z | 2024-02-27T04:46:04Z | https://github.com/langchain-ai/langchain/issues/14710 | 2,041,522,040 | 14,710 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Below are the dependencies I have installed so far:
django==4.0
django-rest-framework
langchain==0.0.349
pdf2image
chromadb
unstructured
openai
pypdf
tiktoken
django-cors-headers
django-environ
pytesseract==0.3.10
beautifulsoup4==4.12.2
atlassian-python-api==3.38.0
tiktoken==0.4.0
lxml==4.9.2
What other dependencies do I need to install for fetching Confluence attachments?
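For reference, my understanding (unconfirmed — these package names are my assumption based on the attachment types the Confluence loader can parse) is that attachment processing may additionally need something like:
```
Pillow      # image attachments (OCR together with pytesseract)
svglib      # SVG attachments
reportlab   # SVG attachments
docx2txt    # Word attachments
xlrd        # legacy Excel attachments
```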
### Suggestion:
_No response_ | Issue: what are all dependencies need to install for fetching confluence attachments | https://api.github.com/repos/langchain-ai/langchain/issues/14706/comments | 1 | 2023-12-14T09:19:24Z | 2024-03-21T16:06:37Z | https://github.com/langchain-ai/langchain/issues/14706 | 2,041,277,150 | 14,706 |
[
"hwchase17",
"langchain"
] | Hi @dossubot.
I have a single document that contains 3 different titles (headers), and I want to create chunks based on each header. Please suggest how to modify my existing code accordingly; a possible approach and my current code are below.
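For reference, the approach I am considering (an assumption on my part — it presumes the three titles are markdown-style `#`/`##` headers in the document text) is `MarkdownHeaderTextSplitter`, which produces one chunk per header section:
```python
from langchain.text_splitter import MarkdownHeaderTextSplitter

headers_to_split_on = [("#", "title"), ("##", "section")]  # assumed header levels
header_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)

# split_text takes the raw text of one document and returns one Document per header section
header_chunks = header_splitter.split_text(docs[0].page_content)
```
My existing code follows: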
def load_docs_only(directory_path):
    docs, _, _, _ = load_documents(directory_path)
    return docs

def text_splitter_by_char():
    splitter = RecursiveCharacterTextSplitter(
        chunk_size=1000,
        chunk_overlap=100,
        length_function=len,
    )
    return splitter
# Load the content
docs = load_docs_only(cfg.directory_path)
# Split the content
splitter = text_splitter_by_char()
chunks = splitter.split_documents(docs)
chunks | Splitting the document based on headers using Recursive Character Text Splitter | https://api.github.com/repos/langchain-ai/langchain/issues/14705/comments | 1 | 2023-12-14T09:05:08Z | 2023-12-22T09:57:43Z | https://github.com/langchain-ai/langchain/issues/14705 | 2,041,252,793 | 14,705 |