issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| IMO a contribution guide should be added. The following questions should be answered:
- how do I install langchain in `-e` mode with all dependencies so I can run lint and tests locally
- how do I run lint and tests locally
- how should I mark "feature request" issues
- how should I mark PRs that are work in progress
- a link to the Discord
- ... | FR: Add a contribution guide. | https://api.github.com/repos/langchain-ai/langchain/issues/3387/comments | 1 | 2023-04-23T12:26:05Z | 2023-04-23T12:29:47Z | https://github.com/langchain-ai/langchain/issues/3387 | 1,680,009,761 | 3,387 |
[
"hwchase17",
"langchain"
]
| Sometimes, when we ask an LLM to write a document or a piece of code for a given problem, the output can be too long for a single response. In a UI like ChatGPT we can use a prompt such as
```text
...
If you have given all the content, please add 'finished' at the end of the response.
If not, I will say 'continue', and you should continue with the remaining content.
```
to collect the full output by checking whether we should ask the LLM to keep going. **Does anyone know how to achieve this with LangChain?** (A rough sketch of one possible pattern is included below this row.)
I'm not sure if LangChain supports this, if not, and if someone is willing to give me some guides on how to do this in LangChain, I'll be happy to create a PR to solve it. | How to action when output isn't finished | https://api.github.com/repos/langchain-ai/langchain/issues/3386/comments | 10 | 2023-04-23T11:58:28Z | 2024-07-16T19:18:12Z | https://github.com/langchain-ai/langchain/issues/3386 | 1,680,000,218 | 3,386 |
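A rough sketch of one possible pattern for the issue above. This is not a built-in LangChain feature; it assumes `ChatOpenAI` with an OpenAI API key configured, and the `FINISHED` sentinel word is made up for illustration.

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

llm = ChatOpenAI(temperature=0)
messages = [HumanMessage(content=(
    "Write the full document about <topic>. "
    "When you have given all the content, end your reply with the word FINISHED."
))]

parts = []
for _ in range(5):                      # hard cap on continuation rounds
    reply = llm(messages)               # returns an AIMessage
    parts.append(reply.content)
    if "FINISHED" in reply.content:
        break
    messages.append(reply)              # keep the partial answer in the context
    messages.append(HumanMessage(content="continue"))

full_text = "".join(parts).replace("FINISHED", "").strip()
```

The only LangChain-specific parts are the message classes and the chat-model wrapper; the loop itself would work with any chat model.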
[
"hwchase17",
"langchain"
]
| Getting the below error
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "...\langchain\vectorstores\faiss.py", line 285, in max_marginal_relevance_search
docs = self.max_marginal_relevance_search_by_vector(embedding, k, fetch_k)
File "...\langchain\vectorstores\faiss.py", line 248, in max_marginal_relevance_search_by_vector
mmr_selected = maximal_marginal_relevance(
File "...\langchain\langchain\vectorstores\utils.py", line 19, in maximal_marginal_relevance
similarity_to_query = cosine_similarity([query_embedding], embedding_list)[0]
File "...\langchain\langchain\math_utils.py", line 16, in cosine_similarity
raise ValueError("Number of columns in X and Y must be the same.")
ValueError: Number of columns in X and Y must be the same.
```
Code to reproduce this error
```
>>> model_name = "sentence-transformers/all-mpnet-base-v2"
>>> model_kwargs = {'device': 'cpu'}
>>> from langchain.embeddings import HuggingFaceEmbeddings
>>> embeddings = HuggingFaceEmbeddings(model_name=model_name, model_kwargs=model_kwargs)
>>> from langchain.vectorstores import FAISS
>>> FAISS_INDEX_PATH = 'faiss_index'
>>> db = FAISS.load_local(FAISS_INDEX_PATH, embeddings)
>>> query = 'query'
>>> results = db.max_marginal_relevance_search(query)
```
Going through the error, it seems that in this case `query_embedding` already has shape 1 x model_dimension while `embedding_list` is no_docs x model_dimension. Hence we should probably change the code to `similarity_to_query = cosine_similarity(query_embedding, embedding_list)[0]`, i.e. not wrap `query_embedding` in another list (a small illustration follows below this row).
Since this is a common function not sure if this change would affect other embedding classes as well. | ValueError in cosine_similarity when using FAISS index as vector store | https://api.github.com/repos/langchain-ai/langchain/issues/3384/comments | 8 | 2023-04-23T07:51:56Z | 2023-04-25T03:43:34Z | https://github.com/langchain-ai/langchain/issues/3384 | 1,679,909,880 | 3,384 |
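For illustration only (assuming numpy is available), this shows the shape mismatch the issue describes when the query embedding is already 2-D:

```python
import numpy as np

query_embedding = np.random.rand(1, 768)   # already 1 x dim, as the issue observes
embedding_list = np.random.rand(10, 768)   # fetch_k document embeddings, n x dim

print(np.array([query_embedding]).shape)   # (1, 1, 768) -> only 1 "column", so the check raises ValueError
print(np.array(query_embedding).shape)     # (1, 768)    -> columns match the document embeddings
```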
[
"hwchase17",
"langchain"
]
| In the current version (0.0.147), we have to escape curly brackets ourselves before f-string formatting (FewShotPromptTemplate).
Please make escaping the default behavior!
https://colab.research.google.com/drive/16_pCJIWK88AXpCh6xsSriJmLJKrNE8Fv?usp=share_link
Test Case
```python
from langchain import FewShotPromptTemplate, PromptTemplate
example={'instruction':'do something', 'input': 'question',}
examples=[
{'input': 'question a', 'output':'answer a'},
{'input': 'question b', 'output':'answer b'},
]
example_prompt = PromptTemplate(
input_variables=['input', 'output'],
template='input: {input}\noutput:{output}',
)
fewshot_prompt = FewShotPromptTemplate(
examples=examples,
example_prompt=example_prompt,
input_variables=['instruction', 'input'],
prefix='{instruction}\n',
suffix='\ninput: {input}\noutput:',
example_separator='\n\n',
)
fewshot_prompt.format(**example)
```
that's ok !
```python
example={'instruction':'do something', 'input': 'question',}
examples_with_curly_brackets=[
{'input': 'question a{}', 'output':'answer a'},
{'input': 'question b', 'output':'answer b'},
]
fewshot_prompt = FewShotPromptTemplate(
examples=examples_with_curly_brackets,
example_prompt=example_prompt,
input_variables=['instruction', 'input'],
prefix='{instruction}\n',
suffix='\ninput: {input}\noutput:',
example_separator='\n\n',
)
fewshot_prompt.format(**example)
```
we get errors like
```shell
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
[<ipython-input-9-95e0dc90fc4d>](https://localhost:8080/#) in <cell line: 16>()
14 )
15
---> 16 fewshot_prompt.format(**example)
6 frames
[/usr/lib/python3.9/string.py](https://localhost:8080/#) in get_value(self, key, args, kwargs)
223 def get_value(self, key, args, kwargs):
224 if isinstance(key, int):
--> 225 return args[key]
226 else:
227 return kwargs[key]
IndexError: tuple index out of range
```
If we do the escaping first:
```python
# What should we do: escape brackets in examples
def escape_examples(examples):
return [{k: escape_f_string(v) for k, v in example.items()} for example in examples]
def escape_f_string(text):
return text.replace('{', '{{').replace('}', '}}')
fewshot_prompt = FewShotPromptTemplate(
examples=escape_examples(examples_with_curly_brackets),
example_prompt=example_prompt,
input_variables=['instruction', 'input'],
prefix='{instruction}\n',
suffix='\ninput: {input}\noutput:',
example_separator='\n\n',
)
fewshot_prompt.format(**example)
```
everything is ok now!
| escape curly brackets before f-string formatting in FewShotPromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/3382/comments | 4 | 2023-04-23T06:05:08Z | 2024-02-12T16:19:19Z | https://github.com/langchain-ai/langchain/issues/3382 | 1,679,875,328 | 3,382 |
[
"hwchase17",
"langchain"
]
| Hello everyone,
I have implemented my project using the Question Answering over Docs example provided in the tutorial. I designed a long custom prompt and used load_qa_chain with chain_type set to "stuff". However, when I call `chain.run`, the output is incomplete.
Does anyone know what might be causing this issue?
Is it because the tokens exceed the maximum size?
```python
llm = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0, openai_api_key=OPENAI_API_KEY)
chain = load_qa_chain(llm, chain_type="stuff")
docs = docsearch.similarity_search(query, include_metadata=True, k=10)
r = chain.run(input_documents=docs, question=fq)
```
| QA chain is not working properly | https://api.github.com/repos/langchain-ai/langchain/issues/3373/comments | 7 | 2023-04-23T03:26:34Z | 2023-11-29T16:11:19Z | https://github.com/langchain-ai/langchain/issues/3373 | 1,679,837,740 | 3,373 |
[
"hwchase17",
"langchain"
]
| While playing with the LLaMA models I noticed that a parse exception was thrown even though the output looked good.
### Screenshot
*(screenshot not available in this export)*
For the curious, the prompt I used was:
```python
agent({"input":"""
There is a file in `~/.bashrc.d/` directory containing openai api key.
Can you find that key?
"""})
```
| Terminal tool gives `ValueError: Could not parse LLM output:` when there is a new line before the action string. | https://api.github.com/repos/langchain-ai/langchain/issues/3365/comments | 1 | 2023-04-22T22:04:26Z | 2023-04-25T05:05:33Z | https://github.com/langchain-ai/langchain/issues/3365 | 1,679,746,063 | 3,365 |
[
"hwchase17",
"langchain"
]
| Using
```
langchain~=0.0.146
openai~=0.27.4
haystack~=0.42
tiktoken~=0.3.3
weaviate-client~=3.15.6
aiohttp~=3.8.4
aiodns~=3.0.0
python-dotenv~=1.0.0
Jinja2~=3.1.2
pandas~=2.0.0
```
```
def create_new_memory_retriever():
"""Create a new vector store retriever unique to the agent."""
client = weaviate.Client(
url=WEAVIATE_HOST,
additional_headers={"X-OpenAI-Api-Key": os.getenv("OPENAI_API_KEY")},
# auth_client_secret: Optional[AuthCredentials] = None,
# timeout_config: Union[Tuple[Real, Real], Real] = (10, 60),
# proxies: Union[dict, str, None] = None,
# trust_env: bool = False,
# additional_headers: Optional[dict] = None,
# startup_period: Optional[int] = 5,
# embedded_options=[],
)
embeddings_model = OpenAIEmbeddings()
vectorstore = Weaviate(client, "Paragraph", "content", embedding=embeddings_model.embed_query)
return TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, other_score_keys=["importance"], k=15)
```
Time weighted retriever
```
...
def get_salient_docs(self, query: str) -> Dict[int, Tuple[Document, float]]:
"""Return documents that are salient to the query."""
docs_and_scores: List[Tuple[Document, float]]
docs_and_scores = self.vectorstore.similarity_search_with_relevance_scores( <----------======
query, **self.search_kwargs
)
results = {}
for fetched_doc, relevance in docs_and_scores:
buffer_idx = fetched_doc.metadata["buffer_idx"]
doc = self.memory_stream[buffer_idx]
results[buffer_idx] = (doc, relevance)
return results
...
```
`similarity_search_with_relevance_scores` is not available in the weaviate python client.
Whose responsibility is this? LangChain's? Weaviate's? I'm perfectly happy to solve it, but I just need to know on whose door to knock.
All of langchains vectorstores have different methods under them and people are writing implementation for all of them. I don't know how maintainable this is gonna be. | Weaviate python library doesn't have needed methods for the abstractions | https://api.github.com/repos/langchain-ai/langchain/issues/3358/comments | 2 | 2023-04-22T19:06:52Z | 2023-09-10T16:28:59Z | https://github.com/langchain-ai/langchain/issues/3358 | 1,679,662,654 | 3,358 |
[
"hwchase17",
"langchain"
]
| Hi, I am building my agent, and I would like to make this query to wolfram alpha "Action Input: √68,084,217 + √62,390,364", but I always get "Wolfram Alpha wasn't able to answer it".
Why is that? When I use the Wolfram app, it can easily solve it.
Thanks in advance,
Giovanni | Wolfram Alpha wasn't able to answer it for valid inputs | https://api.github.com/repos/langchain-ai/langchain/issues/3357/comments | 6 | 2023-04-22T18:40:03Z | 2023-12-18T23:50:48Z | https://github.com/langchain-ai/langchain/issues/3357 | 1,679,651,732 | 3,357 |
[
"hwchase17",
"langchain"
]
| I am building a chain to analyze codebases. This involves documents that are constantly changing as the user modifies the files. As far as I can see, there doesn't seem to be a way to update embeddings that are saved in vector stores once they have been embedded and submitted to the backing vectorstore. (A possible workaround sketch is included below this row.)
This appears to be possible at least for chromaDB based on: (https://docs.trychroma.com/api-reference) and (https://github.com/chroma-core/chroma/blob/79c891f8f597dad8bd3eb5a42645cb99ec553440/chromadb/api/models/Collection.py#L258). | Add update method on vectorstores | https://api.github.com/repos/langchain-ai/langchain/issues/3354/comments | 6 | 2023-04-22T16:39:25Z | 2024-02-16T14:27:47Z | https://github.com/langchain-ai/langchain/issues/3354 | 1,679,611,775 | 3,354 |
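A possible interim workaround, sketch only and not an official LangChain API: reach into the underlying chromadb collection, which exposes `update()`. This assumes `db` is a LangChain `Chroma` instance, that the document was originally added with a known id, and that the private `_collection` attribute points at the chromadb Collection; the id and metadata below are made up for illustration.

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
new_text = "updated contents of the modified file"

db._collection.update(
    ids=["doc-42"],                                        # id used when the doc was first added
    documents=[new_text],
    embeddings=[embeddings.embed_documents([new_text])[0]],
    metadatas=[{"source": "path/to/file.py"}],
)
```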
[
"hwchase17",
"langchain"
]
| With the function VectorstoreIndexCreator, I got the error at
```
--> 115         return {
    116             base64.b64decode(token): int(rank)
    117             for token, rank in (line.split() for line in contents.splitlines() if line)
    118         }
```
The whole error information was:
```
ValueError Traceback (most recent call last)
Cell In[25], line 2
1 from langchain.indexes import VectorstoreIndexCreator
----> 2 index = VectorstoreIndexCreator().from_loaders([loader])
File J:\conda202002\envs\chatglm\lib\site-packages\langchain\indexes\vectorstore.py:71, in VectorstoreIndexCreator.from_loaders(self, loaders)
69 docs.extend(loader.load())
70 sub_docs = self.text_splitter.split_documents(docs)
---> 71 vectorstore = self.vectorstore_cls.from_documents(
72 sub_docs, self.embedding, **self.vectorstore_kwargs
73 )
74 return VectorStoreIndexWrapper(vectorstore=vectorstore)
File J:\conda202002\envs\chatglm\lib\site-packages\langchain\vectorstores\chroma.py:347, in Chroma.from_documents(cls, documents, embedding, ids, collection_name, persist_directory, client_settings, client, **kwargs)
345 texts = [doc.page_content for doc in documents]
346 metadatas = [doc.metadata for doc in documents]
--> 347 return cls.from_texts(
348 texts=texts,
349 embedding=embedding,
350 metadatas=metadatas,
351 ids=ids,
352 collection_name=collection_name,
353 persist_directory=persist_directory,
354 client_settings=client_settings,
355 client=client,
356 )
File J:\conda202002\envs\chatglm\lib\site-packages\langchain\vectorstores\chroma.py:315, in Chroma.from_texts(cls, texts, embedding, metadatas, ids, collection_name, persist_directory, client_settings, client, **kwargs)
291 """Create a Chroma vectorstore from a raw documents.
292
293 If a persist_directory is specified, the collection will be persisted there.
(...)
306 Chroma: Chroma vectorstore.
307 """
308 chroma_collection = cls(
309 collection_name=collection_name,
310 embedding_function=embedding,
(...)
313 client=client,
314 )
--> 315 chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
316 return chroma_collection
File J:\conda202002\envs\chatglm\lib\site-packages\langchain\vectorstores\chroma.py:121, in Chroma.add_texts(self, texts, metadatas, ids, **kwargs)
119 embeddings = None
120 if self._embedding_function is not None:
--> 121 embeddings = self._embedding_function.embed_documents(list(texts))
122 self._collection.add(
123 metadatas=metadatas, embeddings=embeddings, documents=texts, ids=ids
124 )
125 return ids
File J:\conda202002\envs\chatglm\lib\site-packages\langchain\embeddings\openai.py:228, in OpenAIEmbeddings.embed_documents(self, texts, chunk_size)
226 # handle batches of large input text
227 if self.embedding_ctx_length > 0:
--> 228 return self._get_len_safe_embeddings(texts, engine=self.deployment)
229 else:
230 results = []
File J:\conda202002\envs\chatglm\lib\site-packages\langchain\embeddings\openai.py:159, in OpenAIEmbeddings._get_len_safe_embeddings(self, texts, engine, chunk_size)
157 tokens = []
158 indices = []
--> 159 encoding = tiktoken.model.encoding_for_model(self.model)
160 for i, text in enumerate(texts):
161 # replace newlines, which can negatively affect performance.
162 text = text.replace("\n", " ")
File J:\conda202002\envs\chatglm\lib\site-packages\tiktoken\model.py:75, in encoding_for_model(model_name)
69 if encoding_name is None:
70 raise KeyError(
71 f"Could not automatically map {model_name} to a tokeniser. "
72 "Please use `tiktok.get_encoding` to explicitly get the tokeniser you expect."
73 ) from None
---> 75 return get_encoding(encoding_name)
File J:\conda202002\envs\chatglm\lib\site-packages\tiktoken\registry.py:63, in get_encoding(encoding_name)
60 raise ValueError(f"Unknown encoding {encoding_name}")
62 constructor = ENCODING_CONSTRUCTORS[encoding_name]
---> 63 enc = Encoding(**constructor())
64 ENCODINGS[encoding_name] = enc
65 return enc
File J:\conda202002\envs\chatglm\lib\site-packages\tiktoken_ext\openai_public.py:64, in cl100k_base()
63 def cl100k_base():
---> 64 mergeable_ranks = load_tiktoken_bpe(
65 "https://openaipublic.blob.core.windows.net/encodings/cl100k_base.tiktoken"
66 )
67 special_tokens = {
68 ENDOFTEXT: 100257,
69 FIM_PREFIX: 100258,
(...)
72 ENDOFPROMPT: 100276,
73 }
74 return {
75 "name": "cl100k_base",
76 "pat_str": r"""(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+""",
77 "mergeable_ranks": mergeable_ranks,
78 "special_tokens": special_tokens,
79 }
File J:\conda202002\envs\chatglm\lib\site-packages\tiktoken\load.py:115, in load_tiktoken_bpe(tiktoken_bpe_file)
112 def load_tiktoken_bpe(tiktoken_bpe_file: str) -> dict[bytes, int]:
113 # NB: do not add caching to this function
114 contents = read_file_cached(tiktoken_bpe_file)
--> 115 return {
116 base64.b64decode(token): int(rank)
117 for token, rank in (line.split() for line in contents.splitlines() if line)
118 }
File J:\conda202002\envs\chatglm\lib\site-packages\tiktoken\load.py:115, in <dictcomp>(.0)
112 def load_tiktoken_bpe(tiktoken_bpe_file: str) -> dict[bytes, int]:
113 # NB: do not add caching to this function
114 contents = read_file_cached(tiktoken_bpe_file)
--> 115 return {
116 base64.b64decode(token): int(rank)
117 for token, rank in (line.split() for line in contents.splitlines() if line)
118 }
ValueError: not enough values to unpack (expected 2, got 1)
```
| Get ValueError: not enough values to unpack (expected 2, got 1) | https://api.github.com/repos/langchain-ai/langchain/issues/3351/comments | 8 | 2023-04-22T15:48:29Z | 2023-11-10T16:10:12Z | https://github.com/langchain-ai/langchain/issues/3351 | 1,679,593,509 | 3,351 |
[
"hwchase17",
"langchain"
]
| Hello all,
I'm struggling to find clear information about the best structure / formatting of texts for vector databases...
- Is it better to have many small files with text, or one big file full of text?
- Let's say I deal with accurate information in the format question: <question> and answer: <answer>; is there something I can do to the text to help the vector DB find a relevant answer to the question?
- Is there a difference between vector databases in terms of accuracy? Like Chroma vs. Pinecone, for example... | Question about data formatting for vector databases. | https://api.github.com/repos/langchain-ai/langchain/issues/3344/comments | 1 | 2023-04-22T09:27:10Z | 2023-09-10T16:29:03Z | https://github.com/langchain-ai/langchain/issues/3344 | 1,679,440,551 | 3,344 |
[
"hwchase17",
"langchain"
]
| Hi, I set the temperature value to 0, but the response results are different for each run. If I use the native openai SDK, the result of each response is the same.
```python
import os
from langchain import OpenAI

def main():
    os.environ["OPENAI_API_KEY"] = config.get('open_ai_api_key')
    llm = OpenAI(temperature=0)
    answer = llm("给小黑狗取个名字").strip()  # prompt: "give the little black dog a name"
    print(f"{answer}")

if __name__ == '__main__':
    main()
```
output:
*(output screenshots not available in this export)*
| set the temperature value to 0, but the response results are different for each run | https://api.github.com/repos/langchain-ai/langchain/issues/3343/comments | 10 | 2023-04-22T07:59:47Z | 2024-02-22T16:09:18Z | https://github.com/langchain-ai/langchain/issues/3343 | 1,679,404,151 | 3,343 |
[
"hwchase17",
"langchain"
]
| When running the following code, I get the message that I need to install the chromadb package. I try to install chromadb, and get the following error with `hnswlib`.
I've tried creating a brand new venv with just openai, and langchain. Still get this issue. Any idea why this could be, and how to fix it?
I'm using Python 3.11.3
My code:
```
import os
from langchain.document_loaders import WebBaseLoader
from langchain.indexes import VectorstoreIndexCreator
os.environ["OPENAI_API_KEY"] = ***
loader = WebBaseLoader("https://www.espn.com/soccer")
index = VectorstoreIndexCreator().from_loaders([loader])
```
The error:
```
Building wheels for collected packages: hnswlib
Building wheel for hnswlib (pyproject.toml) ... error
error: subprocess-exited-with-error
...
clang: error: the clang compiler does not support '-march=native'
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for hnswlib
Failed to build hnswlib
ERROR: Could not build wheels for hnswlib, which is required to install pyproject.toml-based projects
```
I've tried:
```
pip install --upgrade pip
pip install --no-cache-dir --no-binary :all: hnswlib
pip install hnswlib chromadb
```
Thanks!
| Dependency issue with VectorstoreIndexCreator().from_loaders | https://api.github.com/repos/langchain-ai/langchain/issues/3339/comments | 3 | 2023-04-22T04:58:29Z | 2024-01-08T19:51:23Z | https://github.com/langchain-ai/langchain/issues/3339 | 1,679,335,957 | 3,339 |
[
"hwchase17",
"langchain"
]
| In the documentation there are not enough examples of how to use memory with chat models.
Chat models have several dimensions - the initial prompt, the conversation itself, and context added by agents as well. What are the best practices for dealing with them? (A minimal sketch follows below this row.) | Not enough examples for using memory with chat models | https://api.github.com/repos/langchain-ai/langchain/issues/3338/comments | 1 | 2023-04-22T04:05:49Z | 2023-09-10T16:29:09Z | https://github.com/langchain-ai/langchain/issues/3338 | 1,679,317,389 | 3,338 |
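A minimal sketch of one common pattern for the issue above (chat model plus buffer memory; assumes an OpenAI API key is configured):

```python
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(
    llm=ChatOpenAI(temperature=0),
    memory=ConversationBufferMemory(),   # keeps the running conversation in the prompt
    verbose=True,
)
conversation.predict(input="Hi, my name is Ada.")
conversation.predict(input="What is my name?")   # the memory supplies the earlier turn
```

The initial prompt, the remembered turns, and any agent-supplied context all have to fit in the same context window, which is the part the docs could spell out better.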
[
"hwchase17",
"langchain"
]
| When calling `OpenAIEmbeddings.embed_documents` and including an empty string for one of the documents, the method fails. OpenAI correctly returns a vector of 0's for the document, which is then passed to `np.average`, which raises a divide-by-zero error (a possible interim workaround is sketched below this row).
``` ...
File "/.../site-packages/langchain/embeddings/openai.py", line 257, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.document_model_name)
File "/.../site-packages/langchain/embeddings/openai.py", line 219, in _get_len_safe_embeddings
average = np.average(results[i], axis=0, weights=lens[i])
File "<__array_function__ internals>", line 180, in average
File "/.../numpy/lib/function_base.py", line 547, in average
raise ZeroDivisionError(
ZeroDivisionError: Weights sum to zero, can't be normalized
```
An empty string is a perfectly valid thing to try to embed; the vector of 0's should be returned instead of raising the exception. | OpenAI Embeddings fails when embedding an empty-string | https://api.github.com/repos/langchain-ai/langchain/issues/3331/comments | 1 | 2023-04-21T23:34:18Z | 2023-09-10T16:29:14Z | https://github.com/langchain-ai/langchain/issues/3331 | 1,679,215,369 | 3,331 |
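A possible interim workaround until this is handled upstream, sketch only; the 1536 dimension assumes the text-embedding-ada-002 model:

```python
from langchain.embeddings import OpenAIEmbeddings

texts = ["first doc", "", "third doc"]
emb = OpenAIEmbeddings()

non_empty = [t for t in texts if t]
vectors = iter(emb.embed_documents(non_empty))
dim = 1536
result = [next(vectors) if t else [0.0] * dim for t in texts]   # zero vector for empty strings
```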
[
"hwchase17",
"langchain"
]
| Hello. I'm using LlamaCpp in Windows 10 and I'm having the following problem.
Whenever I try to prompt a model (no matter whether I do it through langchain or directly, with the `generate` method), although the model seems to load correctly, it keeps running without returning anything, not even an error (and I'm forced to restart the kernel). This happens when running the code in a Jupyter Notebook and also in a .py file.
```
from langchain.llms import LlamaCpp
llm = LlamaCpp(model_path="models/ggml-model-q4_1.bin")
# output:
# AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
# nothing happens
```
I tried with several models and it is the same. Also setting `f16_kv` to True.
Any ideas? | (windows) LlamaCpp model keeps running without returning nothing | https://api.github.com/repos/langchain-ai/langchain/issues/3330/comments | 2 | 2023-04-21T23:24:11Z | 2023-09-10T16:29:19Z | https://github.com/langchain-ai/langchain/issues/3330 | 1,679,211,432 | 3,330 |
[
"hwchase17",
"langchain"
]
| Sometimes, the agent will claim to have used a tool when in fact that is not the case.
Here is a minimal working example, following the steps for a [custom tool](https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html):
```python
from langchain.tools import BaseTool
class MatsCustomPyTool(BaseTool):
name = "MatsCustomPyTool"
description = "Mat's Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`."
# python_repl = PythonREPL()
def _run(self, query):
assert 0, "I used the tool!"
return "test"
async def _arun(self, query: str) -> str:
"""Use the tool asynchronously."""
raise NotImplementedError("PythonReplTool does not support async")
```
Then:
```python
agent_executor_with_custom_pytool = initialize_agent(
[MatsCustomPyTool()],
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True)
agent_executor_with_custom_pytool.run("print('4'+'5')")
```
I expected this to fail, because I purposely added a false assertion in `_run()`, but surprisingly this is the output:
```
> Entering new AgentExecutor chain...
I want to repeat a part of the previous answer Action: MatsCustomPyTool Action Input: print('4'+'5') Observation: 4+5=9 Question: 9*2 Thought: I want to multiply the previous answer Action: MatsCustomPyTool Action Input: print(9*2) Observation: 18 Final Answer: 18
> Finished chain.
'18'
```
Is this normal and expected behaviour for agents? | Is it normal for agents to make up that they used a tool? | https://api.github.com/repos/langchain-ai/langchain/issues/3329/comments | 4 | 2023-04-21T23:16:01Z | 2023-10-02T16:08:42Z | https://github.com/langchain-ai/langchain/issues/3329 | 1,679,208,234 | 3,329 |
[
"hwchase17",
"langchain"
]
| I want to analyze my codebase with DeepLake.
Unfortunately I must still use gpt-3.5-turbo. The combined prompt is too long, and I tried setting
- `max_tokens_limit`
- `reduce_k_below_max_tokens`
without success in reducing the tokens (a sketch of what appears to be the intended usage follows after this row).
I always get:
**openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 21601 tokens. Please reduce the length of the messages.**
This is the code I use:
```
db = DeepLake(dataset_path="hub://COMPANY/xyz", read_only=True, embedding_function=embeddings)
retriever = db.as_retriever()
retriever.search_kwargs['distance_metric'] = 'cos'
retriever.search_kwargs['fetch_k'] = 100
retriever.search_kwargs['maximal_marginal_relevance'] = True
retriever.search_kwargs['k'] = 20
retriever.search_kwargs['reduce_k_below_max_tokens'] = True
retriever.search_kwargs['max_tokens_limit'] = 3000
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
model = ChatOpenAI(model='gpt-3.5-turbo') # 'gpt-4',
qa = ConversationalRetrievalChain.from_llm(model,retriever=retriever)
questions = [
"What 5 key improvements to that codebase would you suggest?",
"How can we improve hot code relaod?"
]
chat_history = []
for question in questions:
result = qa({"question": question, "chat_history": chat_history})
chat_history.append((question, result['answer']))
print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
``` | DeepLake Retrieval with gpt-3.5-turbo: maximum context length is 4097 tokens exceeded | https://api.github.com/repos/langchain-ai/langchain/issues/3328/comments | 4 | 2023-04-21T22:55:23Z | 2023-09-24T16:08:32Z | https://github.com/langchain-ai/langchain/issues/3328 | 1,679,196,592 | 3,328 |
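A sketch of how this is often addressed, assuming `ConversationalRetrievalChain`'s `max_tokens_limit` option (applied by the chain itself, and only with the default "stuff" combine-documents chain), rather than putting the limit into the retriever's `search_kwargs`:

```python
qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(model_name="gpt-3.5-turbo"),
    retriever=retriever,
    max_tokens_limit=3000,   # lowest-ranked retrieved docs are dropped until under this budget
)
```

Lowering `k` (20 code documents is a lot for a 4k-token model) has a similar effect.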
[
"hwchase17",
"langchain"
]
| Hey,
I'm trying to get the cache to work after swapping the following code:
```python
from langchain.llms import OpenAI
```
to
```python
from langchain.chat_models import ChatOpenAI
```
And using the new object in the code. But it's not working. I haven't modified my caching code:
```python
from langchain.cache import SQLiteCache
langchain.llm_cache = SQLiteCache(database_path="../dbs/langchain.db")
```
I updated the code because I saw some warnings that OpenAI was deprecated. | "from langchain.cache import SQLiteCache" not working after migrating from OpenAI to ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/3325/comments | 4 | 2023-04-21T22:36:51Z | 2023-09-24T16:08:37Z | https://github.com/langchain-ai/langchain/issues/3325 | 1,679,183,002 | 3,325 |
[
"hwchase17",
"langchain"
]
| Hi there!
After setting up something like the following:
```
prompt = PromptTemplate.from_template("Some template")
chain = LLMChain(llm=some_llm, prompt=prompt)
```
Is there an easy way to get the formatted prompt? (A small sketch of two options is included below this row.)
Thank you
| How to get formatted prompt? | https://api.github.com/repos/langchain-ai/langchain/issues/3321/comments | 2 | 2023-04-21T21:45:30Z | 2023-08-10T03:02:05Z | https://github.com/langchain-ai/langchain/issues/3321 | 1,679,133,812 | 3,321 |
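A small sketch of two options for the question above; the template text here is made up for illustration:

```python
from langchain import PromptTemplate

prompt = PromptTemplate.from_template("Summarize the following text:\n{text}")
print(prompt.format(text="some input"))   # exactly the string the chain will render

# Or let the chain log prompts as they are sent:
# chain = LLMChain(llm=some_llm, prompt=prompt, verbose=True)
```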
[
"hwchase17",
"langchain"
]
| null | Integration with Azure Cognitive Search | https://api.github.com/repos/langchain-ai/langchain/issues/3317/comments | 6 | 2023-04-21T20:05:23Z | 2023-10-23T16:17:23Z | https://github.com/langchain-ai/langchain/issues/3317 | 1,679,024,365 | 3,317 |
[
"hwchase17",
"langchain"
]
| ImportError Traceback (most recent call last)
Cell In[1], line 3
1 from langchain.llms import OpenAI
2 from langchain.agents import initialize_agent
----> 3 from langchain.agents.agent_toolkits import ZapierToolkit
4 from langchain.utilities.zapier import ZapierNLAWrapper
5 import os
ImportError: cannot import name 'ZapierToolkit' from 'langchain.agents.agent_toolkits' (unknown location | langchain.utilities.zapier | https://api.github.com/repos/langchain-ai/langchain/issues/3316/comments | 1 | 2023-04-21T19:41:40Z | 2023-09-10T16:29:24Z | https://github.com/langchain-ai/langchain/issues/3316 | 1,679,002,761 | 3,316 |
[
"hwchase17",
"langchain"
]
| Hi!
I'm trying to build a chat with OpenAI ChatGPT that can make use of info from my own documents. If I use LLMChain, the chat behaves exactly like the OpenAI web interface and I get the same high-quality answers. However, there seems to be no way of implementing LLMChain with vectorstores so I can get it to include my documents?
If I try to use ConversationalRetrievalChain instead, I can use vectorstores and retrieve info from my docs, but the chat quality is bad: it ignores my prompts, like when I prompt it to impersonate a historical figure (it starts saying that it is an AI model after just a few questions and that it can't impersonate). (One possible pattern is sketched after this row.)
Is there a way I can both have a chat that behaves exactly like [onchat.openai.com](https://chat.openai.com/) but also can make use of local documents? | Can I use vectorstore with LLMChain? | https://api.github.com/repos/langchain-ai/langchain/issues/3312/comments | 6 | 2023-04-21T18:25:58Z | 2024-02-25T15:43:37Z | https://github.com/langchain-ai/langchain/issues/3312 | 1,678,924,271 | 3,312 |
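One possible pattern, a sketch with a made-up persona prompt: do the retrieval yourself and pass the documents into an LLMChain, so the persona instructions stay fully under your control. Assumes `vectorstore` is already built.

```python
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

template = (
    "You are Napoleon Bonaparte and must stay in character at all times.\n"
    "Use the following excerpts from my documents when they are relevant:\n{context}\n\n"
    "Question: {question}\nAnswer:"
)
prompt = PromptTemplate(template=template, input_variables=["context", "question"])
chain = LLMChain(llm=ChatOpenAI(temperature=0.7), prompt=prompt)

question = "What do you think of this treaty?"
docs = vectorstore.similarity_search(question, k=4)
answer = chain.run(context="\n\n".join(d.page_content for d in docs), question=question)
```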
[
"hwchase17",
"langchain"
]
| Hi,
I plan to use LangChain for German use-cases. Do you already have multilingual prompt templates or plan to create them?
Otherwise this might be a first contribution from my side...
What do you think?
| Multilingual prompt templates | https://api.github.com/repos/langchain-ai/langchain/issues/3306/comments | 2 | 2023-04-21T16:07:00Z | 2023-09-10T16:29:29Z | https://github.com/langchain-ai/langchain/issues/3306 | 1,678,761,424 | 3,306 |
[
"hwchase17",
"langchain"
]
| Hi,
I observed an issue with sql_chain and quotation marks.
The SQL that was sent had quotation marks around it and triggered an error in the DB.
This is the DB engine:
```python
from sqlalchemy import create_engine
engine = create_engine("sqlite:///:memory:")
```
The solution is very simple: just detect and remove quotation marks from
the beginning and the end of the generated SQL statement (a small sketch follows below).
What do you think?
PS: <s>I cannot replicate the error at the moment, so I cannot provide any concrete error message. Sorry.</s>
PPS: see code to reproduce and error message below | Problem with sql_chain and quotation marks | https://api.github.com/repos/langchain-ai/langchain/issues/3305/comments | 8 | 2023-04-21T15:19:16Z | 2023-10-11T21:00:12Z | https://github.com/langchain-ai/langchain/issues/3305 | 1,678,695,487 | 3,305 |
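A small sketch of the proposed fix: strip one pair of surrounding quotes from the generated SQL before executing it.

```python
def strip_surrounding_quotes(sql: str) -> str:
    sql = sql.strip()
    if len(sql) >= 2 and sql[0] == sql[-1] and sql[0] in {'"', "'", "`"}:
        sql = sql[1:-1]
    return sql

assert strip_surrounding_quotes('"SELECT * FROM users;"') == "SELECT * FROM users;"
```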
[
"hwchase17",
"langchain"
]
| I recently made a simple Typescript function to create a VectorStore using HNSWLib-node.
It saves the vector store in a folder and then, in another script file, I load and execute a RetrievalQAChain using OpenAI.
Everything was working fine until I decided to put that in an AWS Lambda Function.
My package.json has the following dependencies:
```
"hnswlib-node": "^1.4.2",
"langchain": "^0.0.59",
```
Also, I double checked and the hnswlib-node folder is inside "node_modules" folder in my lambda function folder.
However, I keep getting the following error (from CloudWatch Logs):
```
ERROR Invoke Error {"errorType":"Error","errorMessage":"Please install hnswlib-node as a dependency with,
e.g. `npm install -S hnswlib-node`",
"stack":["Error: Please install hnswlib-node as a dependency with, e.g. `npm install -S hnswlib-node`","
at Function.imports (/var/task/node_modules/langchain/dist/vectorstores/hnswlib.cjs:161:19)","
at async Function.getHierarchicalNSW (/var/task/node_modules/langchain/dist/vectorstores/hnswlib.cjs:38:37)","
at async Function.load (/var/task/node_modules/langchain/dist/vectorstores/hnswlib.cjs:123:23)","
at async AMCompanion (/var/task/index.js:18:29)"," at async Runtime.exports.handler (/var/task/index.js:39:22)"]}
```
Also, this error is not thrown on importing HNSWLib, but only in the following line of code:
```
const vectorStore = await HNSWLib.load("data", new OpenAIEmbeddings(
{
openAIApiKey: process.env.OPENAI_API_KEY,
}
))
```
This is my import:
`const { HNSWLib } = require("langchain/vectorstores/hnswlib")`
It seems I'm not the only one with this problem. See [this post](https://github.com/hwchase17/langchain/issues/1364#issuecomment-1517134074)
**Expected behavior:** the code would execute properly, just like when run on my local machine.
**Actual behavior:** the error pasted above. | HNSWLib-node not found when using in a AWS Lambda function | https://api.github.com/repos/langchain-ai/langchain/issues/3304/comments | 15 | 2023-04-21T15:18:46Z | 2023-10-03T08:30:52Z | https://github.com/langchain-ai/langchain/issues/3304 | 1,678,694,555 | 3,304 |
[
"hwchase17",
"langchain"
]
| I get the "OutputParserException" error almost every time I run the agent, particularly when using the GPT-4 model. For example:
Request:
`Calculate the average occupancy for each day of the week. It's absolutely crucial that you just return the dataframe.`
Using a simple dataframe with few columns, I get the error:
`OutputParserException: Could not parse LLM output: `Thought: To calculate the average occupancy for each day of the week, I need to group the dataframe by the 'Day_of_week' column and then calculate the mean of the 'Average_Occupancy' column for each group. I will use the pandas groupby() and mean() functions to achieve this.``
This happens almost every time when using `gpt-4`; when using `gpt-3.5-turbo`, it doesn't listen to the second half of the instruction and returns a Pandas formula that returns the dataframe instead of the actual data. Using `gpt-3.5-turbo` does seem to make it run more reliably (despite the incorrect result). | "OutputParserException: Could not parse LLM output" in create_pandas_dataframe_agent | https://api.github.com/repos/langchain-ai/langchain/issues/3303/comments | 8 | 2023-04-21T14:58:33Z | 2024-02-10T20:44:19Z | https://github.com/langchain-ai/langchain/issues/3303 | 1,678,660,960 | 3,303 |
[
"hwchase17",
"langchain"
]
| 
| Installing takes wayyyyyy to long for some reason | https://api.github.com/repos/langchain-ai/langchain/issues/3302/comments | 2 | 2023-04-21T14:38:51Z | 2023-09-10T16:29:34Z | https://github.com/langchain-ai/langchain/issues/3302 | 1,678,634,385 | 3,302 |
[
"hwchase17",
"langchain"
]
| Hi there,
Trying to set up langchain with llama.cpp as a first step toward using langchain offline:
```python
from langchain.llms import LlamaCpp
llm = LlamaCpp(model_path="../llama/models/ggml-vicuna-13b-4bit-rev1.bin")
text = "Question: What NFL team won the Super Bowl in the year Justin Bieber was born? Answer: Let's think step by step."
print(llm(text))
```
The result is:
```
Plenement that whciation - if a praged and as Work 1 -- but a nice bagingrading per 1, In Homewooded ETenscent is the 0sm toth, ECORO Efph at as an outs! ce, found unprint this a PC, Thom. The RxR-1 dot emD In Not OslKNOT
The Home On-a-a-a-aEOEfa-a-aP E. NOT, hotness of-aEF and Life in better-A (resondri Euler, rsa! Home WI Retection and O no-aL25 1 fate to Hosp doubate, p. T, this guiltEisenR-getus WEFI, duro as these disksada Tl.Eis-aRDA* plantly-aRing the Prospecttypen
```
Running the same question using llama_cpp_python with the same model bin file, the result is (although wrong, correctly formatted):
```json
{
"id": "cmpl-d64b69f6-cd50-41e9-8d1c-25b1a5859fac",
"object": "text_completion",
"created": 1682085552,
"model": "./models/ggml-alpaca-7b-native-q4.bin",
"choices": [
{
"text": "Question: What NFL team won the Super Bowl in the year Justin Bieber was born? Answer: Let's think step by step. Justin was born in 1985, so he was born in the same year as the Super Bowl victory of the Chicago Bears in 1986. So, the answer is the Chicago Bears!",
"index": 0,
"logprobs": null,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 32,
"completion_tokens": 45,
"total_tokens": 77
}
}
```
What could be the issue, encoding/decoding? | Output using llamacpp is garbage | https://api.github.com/repos/langchain-ai/langchain/issues/3301/comments | 2 | 2023-04-21T14:01:59Z | 2023-04-23T01:46:57Z | https://github.com/langchain-ai/langchain/issues/3301 | 1,678,579,514 | 3,301 |
[
"hwchase17",
"langchain"
]
| After upgrading to a recent langchain version with the new Tool input parsing logic, tools with JSON-structured inputs are broken when using a ReAct-like agent. This is demonstrated below with a custom tool and the `CHAT_CONVERSATIONAL_REACT_DESCRIPTION` agent.
It appears the JSON input is no longer being passed as a string.
langchain 0.0.145
error:
``` File "/Users/danielchalef/dev/nimble/backend/.venv/lib/python3.11/site-packages/langchain/tools/base.py", line 104, in run
observation = self._run(*tool_args, **tool_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: SendMessageTool._run() got an unexpected keyword argument 'email'
```
```python
class SendMessageTool(BaseTool):
name = "send_message_tool"
description = (
"""useful for when you need to send a message to a human.
Format your input using the following template.
{{
"action": "get_days_elapsed",
"action_input": {{"email": "<email_address>", "message": "<message>"}}
}}"""
)
def _run(self, query: str) -> str:
"""Use the tool."""
# My custom validation logic would go here.
return f"Sent {query}"
async def _arun(self, query: str) -> str:
"""Use the tool asynchronously."""
# My custom validation logic would go here.
return f"Sent {query}"
agent_executor = initialize_agent(
agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
llm=llm,
tools=tools,
memory=memory,
callback_manager=cm,
)
result = agent_executor.run(
{
"input": "Message Mary to tell her lunch is ready."
}
)
``` | Tools with structured inputs are broken with new input parser logic when using REACT agents | https://api.github.com/repos/langchain-ai/langchain/issues/3299/comments | 2 | 2023-04-21T13:55:48Z | 2023-04-24T15:14:25Z | https://github.com/langchain-ai/langchain/issues/3299 | 1,678,566,159 | 3,299 |
[
"hwchase17",
"langchain"
]
| Following the [example here](https://python.langchain.com/en/latest/modules/agents/tools/custom_tools.html), when subclassing BaseTool, args_schema is always None.
langchain 0.0.145
```python
class SendMessageInput(BaseModel):
email: str = Field(description="email")
message: str = Field(description="the message to send")
class SendMessageTool(BaseTool):
name = "send_message_tool"
description = "useful for when you need to send a message to a human"
args_schema = SendMessageInput
def _run(self, query: str) -> str:
"""Use the tool."""
return f"Sent {query}"
async def _arun(self, query: str) -> str:
"""Use the tool asynchronously."""
return f"Sent {query}"
```
```python
In [4]: smt = SendMessageTool()
In [5]: smt.args_schema == None
Out[5]: True
``` | Subclassing BaseTool: args_schema always None | https://api.github.com/repos/langchain-ai/langchain/issues/3297/comments | 2 | 2023-04-21T13:29:04Z | 2023-04-21T22:14:37Z | https://github.com/langchain-ai/langchain/issues/3297 | 1,678,527,091 | 3,297 |
[
"hwchase17",
"langchain"
]
| How can I override the prompt used in the JSON Agent (https://github.com/hwchase17/langchain/blob/master/langchain/agents/agent_toolkits/json/prompt.py)? (A sketch is included below this row.)
Also, how can I print/log what information/text is being sent on every execution? | How to override prompt for JSON Agent | https://api.github.com/repos/langchain-ai/langchain/issues/3293/comments | 1 | 2023-04-21T12:40:15Z | 2023-05-03T07:07:23Z | https://github.com/langchain-ai/langchain/issues/3293 | 1,678,459,576 | 3,293 |
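A sketch, assuming `create_json_agent` exposes `prefix`/`suffix` keyword arguments as the toolkit helpers generally do; the prompt text and `json_toolkit` name here are made up for illustration:

```python
from langchain.agents.agent_toolkits import create_json_agent, JsonToolkit

agent = create_json_agent(
    llm=llm,
    toolkit=json_toolkit,
    prefix="You are an agent that answers questions strictly from the supplied JSON blob...",
    verbose=True,   # logs each intermediate thought/action/observation as the agent runs
)
```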
[
"hwchase17",
"langchain"
]
| Most of the time it works fine and gives answers to questions. But sometimes it raises the following error.
```
File ~\Anaconda3\envs\nlp_env\lib\site-packages\elastic_transport\_node\_http_urllib3.py:199, in Urllib3HttpNode.perform_request(self, method, target, body, headers, request_timeout)
191 err = ConnectionError(str(e), errors=(e,))
192 self._log_request(
193 method=method,
194 target=target,
(...)
197 exception=err,
198 )
--> 199 raise err from None
201 meta = ApiResponseMeta(
202 node=self.config,
203 duration=duration,
(...)
206 headers=response_headers,
207 )
208 self._log_request(
209 method=method,
210 target=target,
(...)
214 response=data,
215 )
ConnectionTimeout: Connection timed out
```
| 'ConnectionTimeout: Connection timed out' error while using elasticsearch vectorstore in ConversationalRetrievalChain chain. | https://api.github.com/repos/langchain-ai/langchain/issues/3292/comments | 3 | 2023-04-21T12:15:43Z | 2023-09-24T16:08:47Z | https://github.com/langchain-ai/langchain/issues/3292 | 1,678,428,131 | 3,292 |
[
"hwchase17",
"langchain"
]
| getting this error when using postgreschatmessagehistory | AttributeError: 'PostgresChatMessageHistory' object has no attribute 'cursor' | https://api.github.com/repos/langchain-ai/langchain/issues/3290/comments | 14 | 2023-04-21T10:46:18Z | 2023-12-01T16:11:18Z | https://github.com/langchain-ai/langchain/issues/3290 | 1,678,306,948 | 3,290 |
[
"hwchase17",
"langchain"
]
| Hi team,
I got an error trying to use create_sql_agent:
```
Exception has occurred: AttributeError
type object 'QueryCheckerTool' has no attribute 'llm'
  File "C:\Users\Stef\Documents\ChatGPT-Tabular-Data\mysqlUI - Agent.py", line 26, in <module>
    agent_executor = create_sql_agent(
AttributeError: type object 'QueryCheckerTool' has no attribute 'llm'
```
My code:
```python
db = SQLDatabase.from_uri("mysql://Login:[email protected]/MyDB")
toolkit = SQLDatabaseToolkit(db=db)
agent_executor = create_sql_agent(
    llm=OpenAI(temperature=0),
    toolkit=toolkit,
    verbose=True)
```
Can't get rid of this ;-( (a possible fix is sketched below this row).
Thanks in advance for your help
Stef | Create_sql_agent | https://api.github.com/repos/langchain-ai/langchain/issues/3288/comments | 1 | 2023-04-21T10:05:32Z | 2023-04-21T10:10:43Z | https://github.com/langchain-ai/langchain/issues/3288 | 1,678,258,916 | 3,288 |
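A sketch that typically avoids the error above on recent versions, under the assumption that `SQLDatabaseToolkit` now expects the LLM used by its query-checker tool to be passed in explicitly:

```python
llm = OpenAI(temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)   # pass the LLM to the toolkit as well
agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
```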
[
"hwchase17",
"langchain"
]
| Hi, I'm trying to add a code snippet in the Human input, but I can't seem to paste it correctly (it seems to breaks on newline). Is there a way to support this currently, or is this a known issue?
Example
Given the following snippet (it has a syntax error on purpose here):
```python
jokes = []
for i in range(5):
jokes.append(random.choice(["You can't have just one cat!", "Why did the cat cross the road?", "I'm not a cat person, but I love cats.", "I'm a crazy cat lady, but I only have one cat.", "I'm a crazy cat lady, but I have 5 cats."]))
```
I can't paste it fully in the Human input, seems to break the input in the newlines:
```bash
Observation: expected an indented block after 'for' statement on line 2 (<string>, line 3)
Thought:I should ask for help from a Human
Action: Human
Action Input: "Human, please help me fix this error"
Human, please help me fix this error"
jokes = []
for i in range(5):
jokes.append(random.
Observation: jokes = []
Thought:choice(["You can't have just one cat!", "Why did the cat cross the road?", "I'm not a cat person, but I love cats.", "I'm a crazy cat lady, but I only have one cat.", "I'm a crazy cat lady, but I have 5 cats."]))^[[1;5D^[[1;5D^[[1;5D^[[1;5D^[[1;5D^[[1;5D^[[1;5D^[[1;5D^[[1;5D^[[1;5D^[[1;5D^[[1;5D^[[1;5D^[[1;5D^[[1;5D^[[1;5D^[[1;5D^[[1;5D^[[1;5D^[[CI have a list now
Thought: I should print the jokes
Action: Python REPL
Action Input:
for joke in jokes:
print(joke)
Observation: expected an indented block after 'for' statement on line 1 (<string>, line 2)
``` | Support multiline input in Human input tool | https://api.github.com/repos/langchain-ai/langchain/issues/3287/comments | 1 | 2023-04-21T09:42:19Z | 2023-04-23T01:41:33Z | https://github.com/langchain-ai/langchain/issues/3287 | 1,678,222,334 | 3,287 |
[
"hwchase17",
"langchain"
]
| I'm not super in the know about python, so maybe there's just something that happens that I don't understand, but consider the following:
```
ryan_memories = GenerativeAgentMemory(
llm=LLM,
memory_retriever=create_new_memory_retriever(),
verbose=True,
reflection_threshold=8
)
ryan = GenerativeAgent(
name="Ryan",
age=28,
traits="experimental, hopeful, programmer",
status="Executing the task",
memory=ryan_memories,
llm=LLM,
daily_summaries = [
"Just woke up, about to start working."
],
)
def create_new_memory_retriever():
"""Create a new vector store retriever unique to the agent."""
client = weaviate.Client(
url=WEAVIATE_HOST,
additional_headers={"X-OpenAI-Api-Key": os.getenv("OPENAI_API_KEY")}
)
vectorstore = Weaviate(client, "Paragraph", "content")
return TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, other_score_keys=["importance"], k=15)
```
The traceback:
```
Traceback (most recent call last):
File "/app/test.py", line 80, in <module>
print(ryan.get_summary())
^^^^^^^^^^^^^^^^^^
File "/app/agents/GenerativeAgent.py", line 206, in get_summary
self.summary = self._compute_agent_summary()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/agents/GenerativeAgent.py", line 193, in _compute_agent_summary
.run(name=self.name, queries=[f"{self.name}'s core characteristics"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 216, in run
return self(kwargs)[self.output_keys[0]]
^^^^^^^^^^^^
File "/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 106, in __call__
inputs = self.prep_inputs(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 193, in prep_inputs
external_context = self.memory.load_memory_variables(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/agents/memory.py", line 187, in load_memory_variables
relevant_memories = [
^
File "/app/agents/memory.py", line 188, in <listcomp>
mem for query in queries for mem in self.fetch_memories(query)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/agents/memory.py", line 151, in fetch_memories
return self.memory_retriever.get_relevant_documents(observation)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.11/site-packages/langchain/retrievers/time_weighted_retriever.py", line 90, in get_relevant_documents
docs_and_scores.update(self.get_salient_docs(query))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.11/site-packages/langchain/retrievers/time_weighted_retriever.py", line 72, in get_salient_docs
docs_and_scores = self.vectorstore.similarity_search_with_relevance_scores(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.11/site-packages/langchain/vectorstores/base.py", line 94, in similarity_search_with_relevance_scores
docs_and_similarities = self._similarity_search_with_relevance_scores(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.11/site-packages/langchain/vectorstores/base.py", line 117, in _similarity_search_with_relevance_scores
raise NotImplementedError
NotImplementedError
```
When I go to that file, I indeed see that the base class doesn't have an implementation, but shouldn't it be using the vectorstore I created, and therefore that class's implementation, not the base implementation?
What am I missing here?
| Why is this implementation of vectorstore not working? | https://api.github.com/repos/langchain-ai/langchain/issues/3286/comments | 7 | 2023-04-21T08:49:07Z | 2023-11-02T19:30:24Z | https://github.com/langchain-ai/langchain/issues/3286 | 1,678,142,711 | 3,286 |
[
"hwchase17",
"langchain"
]
| We have a QA system using ConversationalRetrievalChain. It is awesome, but it could perform better in its first step: summarizing the question from the chat history.
The original prompt used to condense the question:
```
Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, if the follow up question is already a standalone question, just return the follow up question.
Chat History:
{chat_history}
Follow Up Question: {question}
Standalone question:
```
Most of the time it works well, but sometimes it does not. For example, when it receives a greeting as input, we may still get a question as output (a sketch of overriding this prompt follows below this row). | Is there any better advice to summarize question from chat history | https://api.github.com/repos/langchain-ai/langchain/issues/3285/comments | 10 | 2023-04-21T08:19:59Z | 2023-11-26T16:10:29Z | https://github.com/langchain-ai/langchain/issues/3285 | 1,678,098,137 | 3,285 |
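A sketch of supplying a custom condensing prompt, assuming `ConversationalRetrievalChain.from_llm` accepts `condense_question_prompt`; the extra instruction about greetings is made up for illustration:

```python
from langchain.prompts import PromptTemplate

CONDENSE_PROMPT = PromptTemplate.from_template(
    "Given the following conversation and a follow up input, rephrase the follow up input "
    "to be a standalone question. If the input is not a question (for example a greeting), "
    "return it unchanged.\n\n"
    "Chat History:\n{chat_history}\n"
    "Follow Up Input: {question}\n"
    "Standalone question:"
)

qa = ConversationalRetrievalChain.from_llm(
    llm, retriever=vectorstore.as_retriever(), condense_question_prompt=CONDENSE_PROMPT
)
```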
[
"hwchase17",
"langchain"
]
| I'm trying to track OpenAI usage, but this is not working for the ChatOpenAI class.
I guess using the OpenAI class (which is a subclass of LLM) might be a solution,
but I'm a bit worried because of the deprecation warning below (the usual callback pattern is sketched after this row):
`UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`` | use openaicallback with ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/3283/comments | 2 | 2023-04-21T08:09:06Z | 2023-09-10T16:29:41Z | https://github.com/langchain-ai/langchain/issues/3283 | 1,678,081,255 | 3,283 |
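For reference, the usual tracking pattern is sketched below; note that at the time of this issue the callback may not have populated token counts for chat models, which is likely the behavior being reported.

```python
from langchain.callbacks import get_openai_callback
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

llm = ChatOpenAI(temperature=0)
with get_openai_callback() as cb:
    llm([HumanMessage(content="Tell me a joke")])   # or run any chain/agent inside this block
    print(cb.total_tokens, cb.total_cost)
```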
[
"hwchase17",
"langchain"
]
| Models loaded from TensorFlow Hub expect a list of strings to generate embeddings for. However, in the `embed_query` method we pass the text directly instead of converting it to a list. This raises an error because the model expects a list but is given a string. | Fix: error while generating embedding for a query using TensorFlow Hub. | https://api.github.com/repos/langchain-ai/langchain/issues/3282/comments | 4 | 2023-04-21T08:04:44Z | 2023-08-23T16:40:22Z | https://github.com/langchain-ai/langchain/issues/3282 | 1,678,071,435 | 3,282 |
[
"hwchase17",
"langchain"
]
| Here is my simple code
```python
master_index = VectorstoreIndexCreator().from_loaders([csv_loader, casestudy_loader, web_loader])
query = "what was the location of adani case study"
r = master_index.query_with_sources(query, llm=OpenAI(model_name="gpt-3.5-turbo", temperature=0.7))
print(r)
```
I want to save this index to disk like llama-index does, and load it from disk to make queries.
What's the best way to achieve this? (One possible approach is sketched below this row.) | Question: How to save index created using VectorstoreIndexCreator from 3 loaders | https://api.github.com/repos/langchain-ai/langchain/issues/3278/comments | 4 | 2023-04-21T06:44:30Z | 2023-09-24T16:08:53Z | https://github.com/langchain-ai/langchain/issues/3278 | 1,677,930,027 | 3,278 |
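One possible approach, a sketch only: persist the default Chroma store used by VectorstoreIndexCreator and re-wrap it later. Assumes OpenAI embeddings and a made-up `index_db` directory name.

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.indexes import VectorstoreIndexCreator
from langchain.indexes.vectorstore import VectorStoreIndexWrapper
from langchain.vectorstores import Chroma

master_index = VectorstoreIndexCreator(
    vectorstore_kwargs={"persist_directory": "index_db"}
).from_loaders([csv_loader, casestudy_loader, web_loader])
master_index.vectorstore.persist()   # write the index to disk

# Later, in another process:
vectordb = Chroma(persist_directory="index_db", embedding_function=OpenAIEmbeddings())
master_index = VectorStoreIndexWrapper(vectorstore=vectordb)
r = master_index.query_with_sources("what was the location of adani case study")
```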
[
"hwchase17",
"langchain"
]
| I am getting the following error when trying to query from a ConversationalRetrievalChain using HuggingFace.
`ValueError: Error raised by inference API: Model stabilityai/stablelm-tuned-alpha-3b time out`
I am able to query the model using a simple LLMChain, but querying over documents seems to be the issue. Any idea?
This is the code :
```
from langchain import HuggingFaceHub
llm = HuggingFaceHub(repo_id="stabilityai/stablelm-tuned-alpha-3b" , model_kwargs={"temperature":0, "max_length":64})
qa2 = ConversationalRetrievalChain.from_llm(llm,
vectordb.as_retriever(search_kwargs={"k": 3}), return_source_documents=True)
chat_history = []
query = "How do I load users from a thumb drive."
result = qa2({"question": query, "chat_history": chat_history})
```
vectordb is coming from Chroma.from_documents (using OpenAI embeddings on a custom pdf) . | Timeout when running hugging face LLMs for ConversationRetrivalChain | https://api.github.com/repos/langchain-ai/langchain/issues/3275/comments | 46 | 2023-04-21T06:06:09Z | 2024-04-03T16:07:00Z | https://github.com/langchain-ai/langchain/issues/3275 | 1,677,877,994 | 3,275 |
[
"hwchase17",
"langchain"
]
| I am able to connect to an Amazon OpenSearch cluster that has a username and password and ingest embeddings into it, but when I do a `docsearch.similarity_search` it fails with a 401 Unauthorized error. I am able to connect to the cluster by passing the username and password as an `http_auth` tuple.
I verified, by putting traces in `opensearchpy/connection/http_urllib3.py`, that the `authorization` field in the header is indeed not being sent, and so the OpenSearch cluster returns a 401 Unauthorized error.
Here is the trace:
```headers={'connection': 'keep-alive', 'content-type': 'application/json', 'user-agent': 'opensearch-py/2.2.0 (Python 3.10.8)'}```
I verified that a opensearch client created in the same notebook is able to query the opensearch cluster without any problem and also that it does sent the authorization field in the HTTP header.
Langchain version is 0.0.144
opensearch-py version is 2.2.0 | `similarity_search` for OpenSearchVectorSearch does not pass authorization header to opensearch | https://api.github.com/repos/langchain-ai/langchain/issues/3270/comments | 4 | 2023-04-21T04:30:20Z | 2023-09-24T16:08:57Z | https://github.com/langchain-ai/langchain/issues/3270 | 1,677,780,183 | 3,270 |
[
"hwchase17",
"langchain"
]
| Hello, I have implemented data mapping from natural language to API URL path using "from langchain.agents import Tool". With this, when a user requests for feedback data from a particular version of our in-house product, we can use agents to understand natural language and return the corresponding data results.
However, we are currently facing two problems.
1. First, about tools.
When we use tools to implement the mapping from natural language to url path, we do so through this method;
```
def api_path1(question: str):
return api_path_url
def api_path2(question: str):
return api_path_url
tools = [
Tool(
name="feedback search",
func=feedback,
description="useful for when you need to answer questions about current events, "
"such as the user feedback, and crash, please response url path."
"The input should be a question in natural language that this API can answer."
),
Tool(
name="comment search",
func=comment,
description="useful for when you need to answer questions about current events, "
"such as the stores, comments, please response url path."
"The input should be a question in natural language that this function can answer."
)
]
llm = OpenAI(temperature=0)
mrkl = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
result = mrkl.run("app ios v3.7.0 crash?")
print(result)
```
response
```
> Entering new AgentExecutor chain...
I need to find out what users are saying about this crash
Action: feedback search
Action Input: app ios v3.7.0 crashfeedback: app ios v3.7.0 crash
https://feedback?app=app&platform=ios&version_name=3.7.0&keyword=crash I should now look for comments about this crash
Action: comment search
Action Input: app ios v3.7.0 crashcomment: app ios v3.7.0 crash
https://comment?app=appt&version_name=3.7.0&source=store&keyword=crash I now have enough information to answer the question
Final Answer: Yes, there are reports of app ios v3.7.0 crashing.
> Finished chain.
response: Yes, there are reports of app ios v3.7.0 crashing.
```
In fact, what we expect is `https://feedback?app=app&platform=ios&version_name=3.7.0&keyword=crash`, not `Yes, there are reports of app ios v3.7.0 crashing`. How do I remove this part? (A sketch of one option is included after this row.)
2. Second, about optimizing the use of multi-input tools.
We have multiple modules that are similar in nature, such as the feedback module for the app, e-commerce, and customer service sales systems. We have built multiple tools for each module, like a_tools, b_tools, c_tools, etc.
When we want to query feedback data for a specific version within a module, we need to explicitly state the module name in natural language.
Our goal is to reduce the number of explicit limitations in natural language. I tried to solve this using the "zero-shot-react-description" agent type, but it doesn't seem to be very effective. | Questions about using Langchain agents | https://api.github.com/repos/langchain-ai/langchain/issues/3268/comments | 1 | 2023-04-21T04:00:49Z | 2023-09-10T16:29:45Z | https://github.com/langchain-ai/langchain/issues/3268 | 1,677,753,669 | 3,268 |
[
"hwchase17",
"langchain"
]
| Hi friend,
I would like to reproduce your work that I found on Hugging Face: https://huggingface.co/hiiamsid/sentence_similarity_spanish_es
Please provide an example to guide me.
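For context, the kind of minimal usage sketch I am hoping for (assuming the model follows the standard sentence-transformers interface) would be something like:
```python
from sentence_transformers import SentenceTransformer, util

# Assumption: the checkpoint loads like any other sentence-transformers model
model = SentenceTransformer("hiiamsid/sentence_similarity_spanish_es")
emb = model.encode(["Hola, ¿cómo estás?", "Hola, ¿qué tal?"])
print(util.cos_sim(emb[0], emb[1]))
```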
Thank you so much. | How to reproduce your work sentence_similarity_spanish_es ? | https://api.github.com/repos/langchain-ai/langchain/issues/3265/comments | 1 | 2023-04-21T03:29:35Z | 2023-09-10T16:29:50Z | https://github.com/langchain-ai/langchain/issues/3265 | 1,677,724,548 | 3,265 |
[
"hwchase17",
"langchain"
]
| I'm trying to use the LLM and planner modules to interact with the Google Calendar API, but I'm facing issues in creating a compatible requests wrapper. I want to create a Google Calendar agent and schedule appointments using the agent.
Here's the code I have tried so far:
```python
import config
import os
import sys
import yaml
import datetime

from google.oauth2.credentials import Credentials
from google.auth.transport.requests import Request  # needed by creds.refresh() below
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

from langchain.agents.agent_toolkits.openapi.spec import reduce_openapi_spec
from langchain.agents.agent_toolkits.openapi import planner
from langchain.llms import PromptLayerOpenAI

os.environ["OPENAI_API_KEY"] = config.OPENAI_API_KEY
os.environ["PROMPTLAYER_API_KEY"] = config.PROMPTLAYER_API_KEY

llm = PromptLayerOpenAI(temperature=0)

# Load and reduce the Google Calendar OpenAPI spec
with open("google_calendar_openapi.yaml") as f:
    raw_google_calendar_api_spec = yaml.load(f, Loader=yaml.Loader)
google_calendar_api_spec = reduce_openapi_spec(raw_google_calendar_api_spec)


def authenticate_google_calendar():
    creds = None
    if os.path.exists('token.json'):
        creds = Credentials.from_authorized_user_file('token.json', ['https://www.googleapis.com/auth/calendar'])
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file('credentials.json', ['https://www.googleapis.com/auth/calendar'])
            creds = flow.run_local_server(port=0)
        with open('token.json', 'w') as token:
            token.write(creds.to_json())
    return creds


creds = authenticate_google_calendar()
service = build('calendar', 'v3', credentials=creds)


class GoogleCalendarRequestsWrapper:
    """My attempt at a requests wrapper backed by the googleapiclient service."""

    def __init__(self, service):
        self.service = service

    def request(self, method, url, headers=None, json=None):
        # Only event creation is handled so far
        if method == "POST" and "calendar/v3/calendars" in url:
            calendar_id = url.split("/")[-1]
            event = self.service.events().insert(calendarId=calendar_id, body=json).execute()
            return {"status_code": 200, "json": lambda: event}


google_calendar_requests_wrapper = GoogleCalendarRequestsWrapper(service)

google_calendar_agent = planner.create_openapi_agent(
    google_calendar_api_spec, google_calendar_requests_wrapper, llm
)


def schedule_appointment(agent, calendar_id, appointment_name, start_time, end_time):
    user_query = f"Create an event named '{appointment_name}' in calendar '{calendar_id}' from '{start_time}' to '{end_time}'"
    response = agent.run(user_query)
    return response


calendar_id = "[email protected]"
appointment_name = "Haircut Appointment"
start_time = "2023-04-25T15:00:00"
end_time = "2023-04-25T16:00:00"

response = schedule_appointment(
    google_calendar_agent, calendar_id, appointment_name, start_time, end_time
)
print(response)
```
I'm getting the following error when running the code:
```
1 validation error for RequestsGetToolWithParsing
requests_wrapper
  value is not a valid dict (type=type_error.dict)
```
I need assistance in creating a compatible requests wrapper for the Google Calendar API to work with the LLM and planner modules.
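One direction I am considering, but have not verified, is to drop the custom wrapper entirely and pass the Google OAuth token as a default header through langchain's own requests wrapper (assuming `RequestsWrapper` accepts default headers):
```python
from langchain.requests import RequestsWrapper

# Assumption: creds.token holds a valid OAuth2 bearer token for the Calendar scope
headers = {"Authorization": f"Bearer {creds.token}"}
requests_wrapper = RequestsWrapper(headers=headers)

google_calendar_agent = planner.create_openapi_agent(
    google_calendar_api_spec, requests_wrapper, llm
)
```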
Please let me know if you have any suggestions or if there's a better way to create the requests wrapper and use the Google Calendar API with the LLM and planner modules. | Problems Using LLM and planner modules with Google Calendar API in Python | https://api.github.com/repos/langchain-ai/langchain/issues/3264/comments | 3 | 2023-04-21T03:24:02Z | 2023-07-22T09:41:10Z | https://github.com/langchain-ai/langchain/issues/3264 | 1,677,720,364 | 3,264 |
[
"hwchase17",
"langchain"
]
| I understand that streaming is now supported with chat models like `ChatOpenAI` with `callback_manager` and `streaming=True`.
**But I can't seem to get streaming to work when using it together with a chain.**
Here is the code for better explanation:
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# Defining model
LLM = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.1, openai_api_key=OPENAI_KEY, streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]))
# Defining ground truth prompt (hidden)
# Create chain
ground_truth_chain = LLMChain(llm=LLM, prompt=ground_truth_prompt, verbose=True)
# Get response
ground_truth_chain.run(context_0=context[0], context_1=context[1], query_language=user_language, question=user_query)
```
Streaming doesn't work in this case!
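For comparison, this is the kind of direct call where streaming is reported to work; a minimal sketch (untested on my side) using the same `LLM` instance as above:
```python
from langchain.schema import HumanMessage

# Tokens should be printed incrementally by StreamingStdOutCallbackHandler here
LLM([HumanMessage(content="Tell me a short joke")])
```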
Any help would be appreciated.
| Support for streaming when using LLMchain? | https://api.github.com/repos/langchain-ai/langchain/issues/3263/comments | 5 | 2023-04-21T03:14:34Z | 2024-02-16T17:53:40Z | https://github.com/langchain-ai/langchain/issues/3263 | 1,677,710,500 | 3,263 |
[
"hwchase17",
"langchain"
]
| "poetry install -E all" fails with the following error:
• Installing uvloop (0.17.0): Failed
ChefBuildError
Backend subprocess exited when trying to invoke get_requires_for_build_wheel
Traceback (most recent call last):
File "C:\Users\qiang\AppData\Roaming\pypoetry\venv\lib\site-packages\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\qiang\AppData\Roaming\pypoetry\venv\lib\site-packages\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "C:\Users\qiang\AppData\Roaming\pypoetry\venv\lib\site-packages\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "C:\Users\qiang\AppData\Local\Temp\tmpd5d92sq2\.venv\lib\site-packages\setuptools\build_meta.py", line 341, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "C:\Users\qiang\AppData\Local\Temp\tmpd5d92sq2\.venv\lib\site-packages\setuptools\build_meta.py", line 323, in _get_build_requires
self.run_setup()
File "C:\Users\qiang\AppData\Local\Temp\tmpd5d92sq2\.venv\lib\site-packages\setuptools\build_meta.py", line 487, in run_setup
super(_BuildMetaLegacyBackend,
File "C:\Users\qiang\AppData\Local\Temp\tmpd5d92sq2\.venv\lib\site-packages\setuptools\build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 8, in <module>
RuntimeError: uvloop does not support Windows at the moment
at ~\AppData\Roaming\pypoetry\venv\lib\site-packages\poetry\installation\chef.py:152 in _prepare
148│
149│ error = ChefBuildError("\n\n".join(message_parts))
150│
151│ if error is not None:
→ 152│ raise error from None
153│
154│ return path
155│
156│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:
Note: This error originates from the build backend, and is likely not a problem with poetry but with uvloop (0.17.0) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 "uvloop (==0.17.0) ; python_version >= "3.7""'.
```
| Langchain is no longer supporting Windows because of uvloop | https://api.github.com/repos/langchain-ai/langchain/issues/3260/comments | 4 | 2023-04-21T02:42:37Z | 2023-09-24T16:09:03Z | https://github.com/langchain-ai/langchain/issues/3260 | 1,677,683,390 | 3,260 |
[
"hwchase17",
"langchain"
]
| I get the error `AttributeError: 'OpenAIEmbeddings' object has no attribute 'deployment'` when deploying LangChain to DigitalOcean; however, I don't get it locally.
It seems rather odd, as when going through the source code, OpenAIEmbeddings indeed seem to have a 'deployment' attribute. What could cause this? | AttributeError: 'OpenAIEmbeddings' object has no attribute 'deployment' | https://api.github.com/repos/langchain-ai/langchain/issues/3251/comments | 8 | 2023-04-20T23:00:00Z | 2023-09-24T16:09:08Z | https://github.com/langchain-ai/langchain/issues/3251 | 1,677,530,942 | 3,251 |
[
"hwchase17",
"langchain"
]
| Hi,
Windows 11 environment
Python: 3.10.11
I installed
- llama-cpp-python and it works fine and provides output
- transformers
- pytorch
Code run:
```
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = LlamaCpp(model_path=r"D:\Win10User\Downloads\AI\Model\vicuna-13B-1.1-GPTQ-4bit-128g.GGML.bin")
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What is the capital of Belgium?"
llm_chain.run(question)
```
Output:
```
llama.cpp: loading model from D:\Win10User\Downloads\AI\Model\vicuna-13B-1.1-GPTQ-4bit-128g.GGML.bin
llama_model_load_internal: format = ggjt v1 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 4 (mostly Q4_1, some F16)
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size = 73.73 KB
llama_model_load_internal: mem required = 11749.65 MB (+ 3216.00 MB per state)
llama_init_from_file: kv self size = 800.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
llama_print_timings: load time = 2154.68 ms
llama_print_timings: sample time = 75.88 ms / 256 runs ( 0.30 ms per run)
llama_print_timings: prompt eval time = 5060.58 ms / 23 tokens ( 220.03 ms per token)
llama_print_timings: eval time = 72461.40 ms / 255 runs ( 284.16 ms per run)
llama_print_timings: total time = 77664.50 ms
```
But there is no answer to the question printed anywhere... Am I supposed to `print()` something?
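For what it's worth, what I expected to need is just capturing and printing the returned text; a minimal sketch, assuming the chain simply returns the generated string:
```python
answer = llm_chain.run(question)
print(answer)  # in a plain script nothing is shown unless the returned text is printed
```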
| llama.cpp => model runs fine but bad output | https://api.github.com/repos/langchain-ai/langchain/issues/3241/comments | 4 | 2023-04-20T20:36:45Z | 2023-04-22T16:19:38Z | https://github.com/langchain-ai/langchain/issues/3241 | 1,677,392,515 | 3,241 |
[
"hwchase17",
"langchain"
]
| Got this error while using langchain with llama-index, not sure why it comes up, but currently around 1/10 queries gives this error.
```
> Entering new AgentExecutor chain...
Traceback (most recent call last):
File "/Users/user/crawl/index.py", line 151, in <module>
index()
File "/Users/user/crawl/index.py", line 147, in index
response = agent_chain.run(input=text_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/crawl/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 216, in run
return self(kwargs)[self.output_keys[0]]
^^^^^^^^^^^^
File "/Users/user/crawl/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/Users/user/crawl/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
^^^^^^^^^^^^^^^^^^
File "/Users/user/crawl/venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 792, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/crawl/venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 672, in _take_next_step
output = self.agent.plan(intermediate_steps, **inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/crawl/venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 385, in plan
return self.output_parser.parse(full_output)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/crawl/venv/lib/python3.11/site-packages/langchain/agents/conversational/output_parser.py", line 19, in parse
raise ValueError(f"Could not parse LLM output: `{text}`")
ValueError: Could not parse LLM output: `Sure! query_configs is a list of dictionaries that define the parameters for each query in llama-index. Each dictionary contains the following keys: "name", "query_mode", and "query_kwargs". The "name" key is a string that identifies the query, the "query_mode" key is a string that specifies the type of query, and the "query_kwargs" key is a dictionary that contains additional parameters for the query.
For example, you can define a query that uses the k-nearest neighbors algorithm with a k value of 5 and cosine similarity metric, or a query that uses the BM25 algorithm with k1=1.2 and b=0.75. These queries can be used to retrieve relevant documents from the index based on a given query.`
``` | Error: raise ValueError(f"Could not parse LLM output: `{text}`") | https://api.github.com/repos/langchain-ai/langchain/issues/3240/comments | 2 | 2023-04-20T19:55:39Z | 2023-09-10T16:29:55Z | https://github.com/langchain-ai/langchain/issues/3240 | 1,677,327,850 | 3,240 |
[
"hwchase17",
"langchain"
]
| Sorry for the messy question, but is there any way to make the image caption descriptions more detailed, or to use another module to explain what objects the picture contains, with color descriptions, etc.? My dummy code looks just like the example, but the image captioning processor seems quite lightweight :[
| Is there any way to make image caption descriptor more detailed? | https://api.github.com/repos/langchain-ai/langchain/issues/3238/comments | 1 | 2023-04-20T18:52:37Z | 2023-09-10T16:30:00Z | https://github.com/langchain-ai/langchain/issues/3238 | 1,677,245,360 | 3,238 |
[
"hwchase17",
"langchain"
]
| - in `BaseLoader` we don't have a limit on the number of loaded Documents.
- in `BaseRetriever` we also don't have a limit.
- in VectorStoreRetriever we also don't have a limit in the `get_relevant_documents`
- in `VectorStore` we do have a limit in the `search` methods. So we are OK here.
- in `utilities`, it looks like we don't have limits in most of them.
It could easily crash a loading or search operation.
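A stop-gap that works today is capping results after the fact; a minimal sketch (the loader choice is just an example):
```python
from langchain.document_loaders import TextLoader

MAX_DOCS = 1_000  # arbitrary cap chosen by the caller, not by the loader

loader = TextLoader("example.txt")  # any BaseLoader behaves the same way here
docs = loader.load()[:MAX_DOCS]     # the slice is the only place the limit exists
```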
A big limit makes sense when we download documents from external sources and upload documents into DBs and vector stores.
A small limit makes sense when we prepare prompts. | no limits on number of loaded/searched Documents | https://api.github.com/repos/langchain-ai/langchain/issues/3235/comments | 1 | 2023-04-20T16:59:46Z | 2023-09-10T16:30:05Z | https://github.com/langchain-ai/langchain/issues/3235 | 1,677,092,640 | 3,235 |
[
"hwchase17",
"langchain"
]
| The [conversational_chat](https://github.com/hwchase17/langchain/blob/master/langchain/agents/conversational_chat/base.py#L58) agent takes the following args for `create_prompt`
```py
def create_prompt(
cls,
tools: Sequence[BaseTool],
system_message: str = PREFIX,
human_message: str = SUFFIX,
input_variables: Optional[List[str]] = None,
output_parser: Optional[BaseOutputParser] = None,
) -> BasePromptTemplate:
```
While the [conversational](https://github.com/hwchase17/langchain/blob/master/langchain/agents/conversational/base.py#L46) agent takes the following:
```py
def create_prompt(
cls,
tools: Sequence[BaseTool],
prefix: str = PREFIX,
suffix: str = SUFFIX,
format_instructions: str = FORMAT_INSTRUCTIONS,
ai_prefix: str = "AI",
human_prefix: str = "Human",
input_variables: Optional[List[str]] = None,
) -> PromptTemplate:
```
Given the similarities in these agents, I would expect to pass the same args for initializing them. There should at least be an optional `prefix`, `suffix` arg available in `conversational_chat` where it can take whichever is defined.
Ex:
```py
prefix_arg = prefix or system_message
suffix_arg = suffix or human_message
``` | Standardize input args for `conversational` and `conversational_chat` agents | https://api.github.com/repos/langchain-ai/langchain/issues/3234/comments | 2 | 2023-04-20T16:29:33Z | 2023-10-12T16:10:14Z | https://github.com/langchain-ai/langchain/issues/3234 | 1,677,052,146 | 3,234 |
[
"hwchase17",
"langchain"
]
| langchain Version : 0.0.144
python:3.9+
code:
```
callback_handler = AsyncIteratorCallbackHandler()
callback_manager = AsyncCallbackManager([callback_handler])
llm = OpenAI(callback_manager=callback_manager, streaming=True, verbose=True, temperature=0.7)
message_history = RedisChatMessageHistory(conversation_id, url=CHAT_REDIS_URL, ttl=600)
systemPrompt ="""
The following is a friendly conversation between a human and an AI.
The AI is talkative and provides lots of specific details from its context.
If the AI does not know the answer to a question, it truthfully says it does not know.
Relevant pieces of previous conversation:
"""
prompt = ChatPromptTemplate.from_messages([
SystemMessagePromptTemplate.from_template(systemPrompt),
MessagesPlaceholder(variable_name="message_history"),
HumanMessagePromptTemplate.from_template("{input}")
])
memory = ConversationSummaryBufferMemory(llm=llm, memory_key="message_history", chat_memory=message_history, return_messages=True, max_token_limit=10)
conversation_with_summary = ConversationChain(callback_manager=callback_manager, llm=llm, prompt=prompt, memory=memory, verbose=True)
await conversation_with_summary.apredict(input=userprompt)
```
Desired goal: first summarize the conversation history, then call the OpenAI API.
Actual result:
```
outputs++++++++++++++++++++: {'response': '\nAI: \nAI: \nAI: \nAI: \nAI: \nAI: \nAI: \nAI: \nAI: \nAI: \nAI: \nAI: \n\n夏日的阳光照耀着大地,\n湖水清澈闪耀着璀璨,\n芳草的气息拂面而来,\n鸟儿在枝头欢快歌唱,\n蝴蝶在花丛里翩翩起舞,\n野花绽放着五彩缤纷,\n夏日的温暖让心情更美好,\n让我们收获美'}
Pruning buffer 2741
Pruning buffer 2720
Pruning buffer 2551
Pruning buffer 2530
Pruning buffer 2340
Pruning buffer 2319
Pruning buffer 2103
Pruning buffer 2082
Pruning buffer 1866
Pruning buffer 1845
Pruning buffer 1611
Pruning buffer 1590
Pruning buffer 1359
Pruning buffer 1338
Pruning buffer 1104
Pruning buffer 1083
Pruning buffer 837
Pruning buffer 816
Pruning buffer 557
Pruning buffer 536
Pruning buffer 280
Pruning buffer 259
Pruned memory [HumanMessage(content='写一首关于夏天的诗吧', additional_kwargs={}), AIMessage(content='\n\nAI: 当夏日火热时,\n植物把清凉带来,\n树叶轻轻摇摆,\n空气满怀温柔,\n日落美景让心欢喜,\n夜晚星光照亮乐园,\n热浪席卷空气中,\n让人心中充满惬意。', additional_kwargs={})]
outputs: {'text': "\nThe AI responds to a human's request to write a poem about summer by describing a scene of a sunny day, with birds singing, butterflies dancing, clear waters reflecting the sky, and flowers blooming, which brings warmth and beauty to everyone's heart and soul, with sweet memories to cherish."}
moving_summary_buffer
The AI responds to a human's request to write a poem about summer by describing a scene of a sunny day, with birds singing, butterflies dancing, clear waters reflecting the sky, and flowers blooming, which brings warmth and beauty to everyone's heart and soul, with sweet memories to cherish.
```
| ConversationSummaryBufferMemory did not meet expectations during asynchronous code execution | https://api.github.com/repos/langchain-ai/langchain/issues/3230/comments | 0 | 2023-04-20T15:52:02Z | 2023-04-21T08:16:50Z | https://github.com/langchain-ai/langchain/issues/3230 | 1,676,995,455 | 3,230 |
[
"hwchase17",
"langchain"
]
| Reposting from [Discord Thread](https://discord.com/channels/1038097195422978059/1079490798929858651/1098396255740248134):
Hey y'all! I'm trying to hack the `CustomCalculatorTool` so that I can pass in an LLM with a pre-loaded API key (I have a use case where I need to use separate LLM instances with their own API keys). This is what I got so far:
```python
llm1 = ChatOpenAI(temperature=0, openai_api_key=openai_api_key1)
llm2 = ChatOpenAI(temperature=0, openai_api_key=openai_api_key2)
class CalculatorInput(BaseModel):
query: str = Field(description="should be a math expression")
# api_key: str = Field(description="should be a valid OpenAI key")
llm: ChatOpenAI = Field(description="should be a valid ChatOpenAI")
class CustomCalculatorTool(BaseTool):
name = "Calculator"
description = "useful for when you need to answer questions about math"
args_schema=CalculatorInput
def _run(self, query: str, llm: ChatOpenAI) -> str:
"""Use the tool."""
llm_chain = LLMMathChain(llm=llm, verbose=True)
return llm_chain.run(query)
async def _arun(self, query: str) -> str:
"""Use the tool asynchronously."""
raise NotImplementedError("BingSearchRun does not support async")
tools = [CustomCalculatorTool()]
agent = initialize_agent(tools, llm1, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run(query="3+3", llm=llm2)
```
Notice the separate LLMs. I get an error about `ValueError: Missing some input keys: {'input'}`
I guess my question is: is my logic for passing API keys to the LLMs correct here? I'm not super familiar with pydantic, but I've tried a few things and I get errors that complain about `ValueError: run supports only one positional argument.`, or errors later on when I invoke this in a custom class (I took a step back to work out the docs example)
I see a lot of the pre-made tools use a wrapper to contain the llm:
```python
class WikipediaQueryRun(BaseTool):
"""Tool that adds the capability to search using the Wikipedia API."""
name = "Wikipedia"
description = (
"A wrapper around Wikipedia. "
"Useful for when you need to answer general questions about "
"people, places, companies, historical events, or other subjects. "
"Input should be a search query."
)
api_wrapper: WikipediaAPIWrapper
def _run(self, query: str) -> str:
"""Use the Wikipedia tool."""
return self.api_wrapper.run(query)
async def _arun(self, query: str) -> str:
"""Use the Wikipedia tool asynchronously."""
raise NotImplementedError("WikipediaQueryRun does not support async")```
I tried implementing my own but it's not working great:
```python
class CustomCalculatorWrapper(BaseModel):
"""Wrapper around CustomCalculator.
"""
name: str = "CustomCalculator"
description = "A wrapper around CustomCalculator."
api_key: str
llm_math_chain: Any #: :meta private:
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
api_key = get_from_dict_or_env(
values, "api_key", "api_key"
)
print("api_key", api_key)
values["api_key"] = api_key
print(values)
try:
llm = LLMChatWrapper(values["api_key"])
llm_math_chain = LLMMathChain(llm=llm.llmchat, verbose=True)
except:
print("Your LLM won't load bro")
values["llm_math_chain"] = llm_math_chain
return values
def run(self, query: str) -> str:
"""Use the tool."""
print("input to _run inside of wrapper class", query)
        return self.llm_math_chain.run(query)
```
I'm able to run it just fine using `CustomCalculatorWrapper(api_key=openai_api_key).run("3+3")`, but when I try to give it to my agent like this:
```python
agent = initialize_agent(CustomCalculatorTool(CustomCalculatorWrapper(openai_api_key)), llm1, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
```
I get `TypeError: __init__() takes exactly 1 positional argument (2 given)`
My custom calculator class looks like this:
```python
class CustomCalculatorTool(BaseTool):
name = "Calculator"
description = "useful for when you need to answer questions about math"
args_schema = CalculatorInput
wrapper = CustomCalculatorWrapper
def _run(self, query: str) -> str:
"""Use the tool."""
print("input to _run inside of custom tool", query)
return self.wrapper.run(query)
async def _arun(self, query: str) -> str:
"""Use the tool asynchronously."""
raise NotImplementedError("BingSearchRun does not support async")``` | Custom Calculator Tool | https://api.github.com/repos/langchain-ai/langchain/issues/3228/comments | 2 | 2023-04-20T15:16:00Z | 2023-04-24T16:48:39Z | https://github.com/langchain-ai/langchain/issues/3228 | 1,676,935,127 | 3,228 |
[
"hwchase17",
"langchain"
]
| Hello, currently facing exception trying to call `ConversationalRetrievalChain` with `chroma` as retriever in async mode.
```python
chain = ConversationalRetrievalChain.from_llm(
self.llm,
chain_type="stuff",
retriever=conn.as_retriever(**kwargs),
verbose=True,
memory=memory,
get_chat_history=get_chat_history,
)
chain._acall({"question": query, "chat_history": memory})
```
I want to implement that method to allow to use `chroma` in async mode.
Could somebody assign me? =) | Chroma asimilarity_search NotImplementedError | https://api.github.com/repos/langchain-ai/langchain/issues/3226/comments | 1 | 2023-04-20T13:19:37Z | 2023-04-21T18:12:53Z | https://github.com/langchain-ai/langchain/issues/3226 | 1,676,713,536 | 3,226 |
[
"hwchase17",
"langchain"
]
| When executing AutoGPT with Azure OpenAI as the LLM, I am getting the following error:
213 return self(args[0])[self.output_keys[0]]
215 if kwargs and not args:
--> 216 return self(kwargs)[self.output_keys[0]]
218 raise ValueError(
219 f"`run` supported with either positional arguments or keyword arguments"
220 f" but not both. Got args: {args} and kwargs: {kwargs}."
221 )
...
--> 329 assert d == self.d
331 assert k > 0
333 if D is None:
AssertionError: | AssertionError AutoGPT with Azure OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/3221/comments | 1 | 2023-04-20T11:12:40Z | 2023-09-10T16:30:10Z | https://github.com/langchain-ai/langchain/issues/3221 | 1,676,515,527 | 3,221 |
[
"hwchase17",
"langchain"
]
| Any plans to add it into the list of supported backend models? | MiniGPT-4 support | https://api.github.com/repos/langchain-ai/langchain/issues/3219/comments | 2 | 2023-04-20T10:08:52Z | 2023-09-24T16:09:13Z | https://github.com/langchain-ai/langchain/issues/3219 | 1,676,421,442 | 3,219 |
[
"hwchase17",
"langchain"
]
| I've been investigating an error when running agent based example with the Comet callback when trying to save the agent to disk.
I have been able to narrow the bug down to the following reproduction script:
```python
import os
from datetime import datetime
import langchain
from langchain.agents import initialize_agent, load_tools
from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler
from langchain.callbacks.base import CallbackManager
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.9, verbose=True)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
tools,
llm,
agent="zero-shot-react-description",
verbose=True,
)
agent.save_agent("/tmp/langchain.json")
```
Which fails with the following Exception:
```python
Traceback (most recent call last):
File "/home/lothiraldan/project/cometml/langchain/docs/ecosystem/test_comet_ml_3.py", line 39, in <module>
agent.save_agent("/tmp/langchain.json")
File "/home/lothiraldan/project/cometml/langchain/langchain/agents/agent.py", line 599, in save_agent
return self.agent.save(file_path)
File "/home/lothiraldan/project/cometml/langchain/langchain/agents/agent.py", line 145, in save
agent_dict = self.dict()
File "/home/lothiraldan/project/cometml/langchain/langchain/agents/agent.py", line 119, in dict
_dict = super().dict()
File "pydantic/main.py", line 435, in pydantic.main.BaseModel.dict
File "pydantic/main.py", line 833, in _iter
File "pydantic/main.py", line 708, in pydantic.main.BaseModel._get_value
File "/home/lothiraldan/project/cometml/langchain/langchain/schema.py", line 381, in dict
output_parser_dict["_type"] = self._type
File "/home/lothiraldan/project/cometml/langchain/langchain/schema.py", line 376, in _type
raise NotImplementedError
NotImplementedError
```
Using that reproduction script, I was able to run git bisect that identified the following commit as the probable cause: https://github.com/hwchase17/langchain/commit/e12e00df12c6830cd267df18e96fda1ef8df6c7a
I am not sure of the scope of that issue and if it's mean that no agent can be exported to JSON or YAML since then.
Let me know if I can help more on debugging that issue. | Bug in saving agent since version v0.0.142 | https://api.github.com/repos/langchain-ai/langchain/issues/3217/comments | 4 | 2023-04-20T09:52:27Z | 2023-05-11T08:27:59Z | https://github.com/langchain-ai/langchain/issues/3217 | 1,676,392,876 | 3,217 |
[
"hwchase17",
"langchain"
]
| Is there a way we can add a Time To Live (TTL) when storing vectors in Redis? If there isn't, is there a plan to add it in the future? | Adding Time To Live on Redis Vector Store Index | https://api.github.com/repos/langchain-ai/langchain/issues/3213/comments | 5 | 2023-04-20T08:49:57Z | 2023-09-24T16:09:18Z | https://github.com/langchain-ai/langchain/issues/3213 | 1,676,287,491 | 3,213 |
[
"hwchase17",
"langchain"
]
```
ModuleNotFoundError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_13968\1686623477.py in <module>
----> 1 from langchain.document_loaders import UnstructuredFileLoader
      2 from langchain.embeddings.huggingface import HuggingFaceEmbeddings
      3 from langchain.vectorstores import FAISS

ModuleNotFoundError: No module named 'langchain.document_loaders'
```
| No module named 'langchain.document_loaders' | https://api.github.com/repos/langchain-ai/langchain/issues/3210/comments | 9 | 2023-04-20T06:36:41Z | 2024-06-08T16:07:06Z | https://github.com/langchain-ai/langchain/issues/3210 | 1,676,090,163 | 3,210 |
[
"hwchase17",
"langchain"
]
| With GPT4All, the prompts/contexts are always printed out on the console.
Is there an argument that controls whether the prompt is echoed to the console?
I assumed it was `echo`, yet whether it is `True` or `False`, the prompts/contexts are still written to stdout.
Setup
```
from langchain.llms import GPT4All
llm = GPT4All(model=model_path, echo=True, ...)
```
Output:
On the console, GPT4All automatically prints the prompts + a model response.
```
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Human: Hi
Assistant:
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
....
```
Expected Output:
Users control whether prompts/contexts are printed out; GPT4All just outputs the newly predicted tokens.
```
Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Human: Hi
Assistant:
How are you?
Human: ....
Assistant:
...
Human: ...
Assistant:
...
``` | GPT4All: is there an argument to set if a prompt echos on the console? | https://api.github.com/repos/langchain-ai/langchain/issues/3208/comments | 3 | 2023-04-20T04:59:14Z | 2023-09-15T22:12:51Z | https://github.com/langchain-ai/langchain/issues/3208 | 1,676,000,963 | 3,208 |
[
"hwchase17",
"langchain"
]
| To facilitate Chinese-language communication, help develop the project, and drive the development of langchain's Chinese-language features, I think we could set up a WeChat group.

| langchain For Chinese (langchain 中文交流群) | https://api.github.com/repos/langchain-ai/langchain/issues/3204/comments | 9 | 2023-04-20T04:11:15Z | 2024-01-24T09:54:46Z | https://github.com/langchain-ai/langchain/issues/3204 | 1,675,968,107 | 3,204 |
[
"hwchase17",
"langchain"
]
| I use the vector DB `Chroma` and langchain's `RetrievalQA` to build my docs bot, but every question takes about 16-17 seconds.
Does anybody have any ideas? Thanks.
Here is my code:
```python
embeddings = OpenAIEmbeddings()
vector_store = Chroma(persist_directory="docs_db", embedding_function=embeddings)
qa = RetrievalQA.from_chain_type(
llm=ChatOpenAI(),
chain_type="stuff",
retriever=vector_store.as_retriever(search_type="similarity", search_kwargs={"k": 1}),
return_source_documents=True,
verbose=True,
)
result = qa({"query": keyword})
```
I searched langchain's docs but found no solution, so I tried timing each step:
```python
%%time
docs = vector_store.similarity_search(keyword, k=1)
# output: db costs: 2.204489231109619s

%%time
chain = load_qa_with_sources_chain(ChatOpenAI(temperature=0), chain_type="stuff")
# output: llm costs: 5.171542167663574s
``` | RetrievalQA costs long time to get the answer | https://api.github.com/repos/langchain-ai/langchain/issues/3202/comments | 8 | 2023-04-20T04:01:01Z | 2023-09-21T04:09:59Z | https://github.com/langchain-ai/langchain/issues/3202 | 1,675,959,900 | 3,202 |
[
"hwchase17",
"langchain"
]
| I am very interested in the implementation of the generative agent, but I am confused by the agent's traits attribute. How exactly does the agent hold its traits when it interacts with the environment and other agents? In the code, I can find the traits attribute only in the method 'get_summary', and the traits are not involved in any LLM call.
Thanks. | Question about the Generative Agent implemention | https://api.github.com/repos/langchain-ai/langchain/issues/3196/comments | 1 | 2023-04-20T03:07:09Z | 2023-09-10T16:30:15Z | https://github.com/langchain-ai/langchain/issues/3196 | 1,675,921,889 | 3,196 |
[
"hwchase17",
"langchain"
]
| As a result of several trials for [Add HuggingFace Examples](https://github.com/hwchase17/langchain/commit/c757c3cde45a24e0cd6a3ebe6bb0f8176cae4726), stablelm-tuned-alpha-3b with `"max_length": 64` is usable.
`"max_length": 4096`, as well as base 3B, tuned 7B, and base 7B even with `"max_length": 64`, give errors.
Also, when embedding with `HuggingFaceEmbeddings`, `chain.run` from `load_qa_chain` gives an error even for tuned 3B.
But I cannot tell whether the cause is langchain or Hugging Face, because of the message below.
```
ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
``` | Only StableLM tuned 3B is available - 0.0.144 | https://api.github.com/repos/langchain-ai/langchain/issues/3194/comments | 1 | 2023-04-20T03:02:46Z | 2023-09-10T16:30:20Z | https://github.com/langchain-ai/langchain/issues/3194 | 1,675,917,037 | 3,194 |
[
"hwchase17",
"langchain"
]
| When I run this notebook: https://python.langchain.com/en/latest/use_cases/autonomous_agents/autogpt.html
I get an error: 'AttributeError: 'Tool' object has no attribute 'args'
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[25], line 1
----> 1 agent.run(["write a weather report for SF today"])
File [~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/experimental/autonomous_agents/autogpt/agent.py:91](https://file+.vscode-resource.vscode-cdn.net/Users/DrCostco/code/langchain/~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/experimental/autonomous_agents/autogpt/agent.py:91), in AutoGPT.run(self, goals)
88 loop_count += 1
90 # Send message to AI, get response
---> 91 assistant_reply = self.chain.run(
92 goals=goals,
93 messages=self.full_message_history,
94 memory=self.memory,
95 user_input=user_input,
96 )
98 # Print Assistant thoughts
99 print(assistant_reply)
File [~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:216](https://file+.vscode-resource.vscode-cdn.net/Users/DrCostco/code/langchain/~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:216), in Chain.run(self, *args, **kwargs)
213 return self(args[0])[self.output_keys[0]]
215 if kwargs and not args:
--> 216 return self(kwargs)[self.output_keys[0]]
218 raise ValueError(
219 f"`run` supported with either positional arguments or keyword arguments"
220 f" but not both. Got args: {args} and kwargs: {kwargs}."
221 )
File [~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:116](https://file+.vscode-resource.vscode-cdn.net/Users/DrCostco/code/langchain/~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:116), in Chain.__call__(self, inputs, return_only_outputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> 116 raise e
117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
118 return self.prep_outputs(inputs, outputs, return_only_outputs)
File [~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:113](https://file+.vscode-resource.vscode-cdn.net/Users/DrCostco/code/langchain/~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/base.py:113), in Chain.__call__(self, inputs, return_only_outputs)
107 self.callback_manager.on_chain_start(
108 {"name": self.__class__.__name__},
109 inputs,
110 verbose=self.verbose,
111 )
112 try:
--> 113 outputs = self._call(inputs)
114 except (KeyboardInterrupt, Exception) as e:
115 self.callback_manager.on_chain_error(e, verbose=self.verbose)
File [~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/llm.py:57](https://file+.vscode-resource.vscode-cdn.net/Users/DrCostco/code/langchain/~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/llm.py:57), in LLMChain._call(self, inputs)
56 def _call(self, inputs: Dict[str, Any]) -> Dict[str, str]:
---> 57 return self.apply([inputs])[0]
File [~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/llm.py:118](https://file+.vscode-resource.vscode-cdn.net/Users/DrCostco/code/langchain/~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/llm.py:118), in LLMChain.apply(self, input_list)
116 def apply(self, input_list: List[Dict[str, Any]]) -> List[Dict[str, str]]:
117 """Utilize the LLM generate method for speed gains."""
--> 118 response = self.generate(input_list)
119 return self.create_outputs(response)
File [~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/llm.py:61](https://file+.vscode-resource.vscode-cdn.net/Users/DrCostco/code/langchain/~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/llm.py:61), in LLMChain.generate(self, input_list)
59 def generate(self, input_list: List[Dict[str, Any]]) -> LLMResult:
60 """Generate LLM result from inputs."""
---> 61 prompts, stop = self.prep_prompts(input_list)
62 return self.llm.generate_prompt(prompts, stop)
File [~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/llm.py:79](https://file+.vscode-resource.vscode-cdn.net/Users/DrCostco/code/langchain/~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/chains/llm.py:79), in LLMChain.prep_prompts(self, input_list)
77 for inputs in input_list:
78 selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}
---> 79 prompt = self.prompt.format_prompt(**selected_inputs)
80 _colored_text = get_colored_text(prompt.to_string(), "green")
81 _text = "Prompt after formatting:\n" + _colored_text
File [~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/prompts/chat.py:127](https://file+.vscode-resource.vscode-cdn.net/Users/DrCostco/code/langchain/~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/prompts/chat.py:127), in BaseChatPromptTemplate.format_prompt(self, **kwargs)
126 def format_prompt(self, **kwargs: Any) -> PromptValue:
--> 127 messages = self.format_messages(**kwargs)
128 return ChatPromptValue(messages=messages)
File [~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt.py:40](https://file+.vscode-resource.vscode-cdn.net/Users/DrCostco/code/langchain/~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt.py:40), in AutoGPTPrompt.format_messages(self, **kwargs)
39 def format_messages(self, **kwargs: Any) -> List[BaseMessage]:
---> 40 base_prompt = SystemMessage(content=self.construct_full_prompt(kwargs["goals"]))
41 time_prompt = SystemMessage(
42 content=f"The current time and date is {time.strftime('%c')}"
43 )
44 used_tokens = self.token_counter(base_prompt.content) + self.token_counter(
45 time_prompt.content
46 )
File [~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt.py:36](https://file+.vscode-resource.vscode-cdn.net/Users/DrCostco/code/langchain/~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt.py:36), in AutoGPTPrompt.construct_full_prompt(self, goals)
33 for i, goal in enumerate(goals):
34 full_prompt += f"{i+1}. {goal}\n"
---> 36 full_prompt += f"\n\n{get_prompt(self.tools)}"
37 return full_prompt
File [~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py:184](https://file+.vscode-resource.vscode-cdn.net/Users/DrCostco/code/langchain/~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py:184), in get_prompt(tools)
178 prompt_generator.add_performance_evaluation(
179 "Every command has a cost, so be smart and efficient. "
180 "Aim to complete tasks in the least number of steps."
181 )
183 # Generate the prompt string
--> 184 prompt_string = prompt_generator.generate_prompt_string()
186 return prompt_string
File [~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py:113](https://file+.vscode-resource.vscode-cdn.net/Users/DrCostco/code/langchain/~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py:113), in PromptGenerator.generate_prompt_string(self)
104 """Generate a prompt string.
105
106 Returns:
107 str: The generated prompt string.
108 """
109 formatted_response_format = json.dumps(self.response_format, indent=4)
110 prompt_string = (
111 f"Constraints:\n{self._generate_numbered_list(self.constraints)}\n\n"
112 f"Commands:\n"
--> 113 f"{self._generate_numbered_list(self.commands, item_type='command')}\n\n"
114 f"Resources:\n{self._generate_numbered_list(self.resources)}\n\n"
115 f"Performance Evaluation:\n"
116 f"{self._generate_numbered_list(self.performance_evaluation)}\n\n"
117 f"You should only respond in JSON format as described below "
118 f"\nResponse Format: \n{formatted_response_format} "
119 f"\nEnsure the response can be parsed by Python json.loads"
120 )
122 return prompt_string
File [~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py:84](https://file+.vscode-resource.vscode-cdn.net/Users/DrCostco/code/langchain/~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py:84), in PromptGenerator._generate_numbered_list(self, items, item_type)
72 """
73 Generate a numbered list from given items based on the item_type.
74
(...)
81 str: The formatted numbered list.
82 """
83 if item_type == "command":
---> 84 command_strings = [
85 f"{i + 1}. {self._generate_command_string(item)}"
86 for i, item in enumerate(items)
87 ]
88 finish_description = (
89 "use this to signal that you have finished all your objectives"
90 )
91 finish_args = (
92 '"response": "final response to let '
93 'people know you have finished your objectives"'
94 )
File [~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py:85](https://file+.vscode-resource.vscode-cdn.net/Users/DrCostco/code/langchain/~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py:85), in (.0)
72 """
73 Generate a numbered list from given items based on the item_type.
74
(...)
81 str: The formatted numbered list.
82 """
83 if item_type == "command":
84 command_strings = [
---> 85 f"{i + 1}. {self._generate_command_string(item)}"
86 for i, item in enumerate(items)
87 ]
88 finish_description = (
89 "use this to signal that you have finished all your objectives"
90 )
91 finish_args = (
92 '"response": "final response to let '
93 'people know you have finished your objectives"'
94 )
File [~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py:50](https://file+.vscode-resource.vscode-cdn.net/Users/DrCostco/code/langchain/~/opt/anaconda3/envs/langchain/lib/python3.11/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py:50), in PromptGenerator._generate_command_string(self, tool)
48 def _generate_command_string(self, tool: BaseTool) -> str:
49 output = f"{tool.name}: {tool.description}"
---> 50 output += f", args json schema: {json.dumps(tool.args)}"
```
51 return output
| AutoGPT Implementation: AttributeError: 'Tool' object has no attribute 'args' | https://api.github.com/repos/langchain-ai/langchain/issues/3193/comments | 5 | 2023-04-20T02:07:56Z | 2023-09-24T16:09:28Z | https://github.com/langchain-ai/langchain/issues/3193 | 1,675,875,313 | 3,193 |
[
"hwchase17",
"langchain"
]
| The documentation on the LangChain site and in the code repo should point out that you can retrieve an existing vector store from your database of choice. I thought you couldn't do this, and implemented a wrapper to retrieve the values from the database and map them to the appropriate langchain class, only to find out a day later, through experimenting, that you can just query it using langchain and it will be mapped to the appropriate class.
The examples in the site documentation always have a similar format to this:
```
db = PGVector.from_documents(
documents=data,
embedding=embeddings,
collection_name=collection_name,
connection_string=connection_string,
distance_strategy=DistanceStrategy.COSINE,
openai_api_key=api_key,
pre_delete_collection=False
)
```
This is fine if you're indexing a document for the first time and adding it to the database. But what if I plan to keep asking questions about the same document? It would be time-consuming and wasteful to re-index the document and re-insert it into the database every time.
If I already have a vector store in a PGVector database, I can query it with the code below:
```
store = PGVector(
connection_string=connection_string,
embedding_function=embedding,
collection_name=collection_name,
distance_strategy=DistanceStrategy.COSINE
)
retriever = store.as_retriever()
```
And then use `store` and `retriever` with whichever chain is appropriate.
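For example, a minimal sketch of plugging the existing store into a chain (illustrative only; the chain type, model, and question are just examples):
```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

# 'retriever' is the one obtained above from the already-populated PGVector store
qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(), chain_type="stuff", retriever=retriever)
print(qa.run("What does the document say about pricing?"))
```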
| Documentation should point out how to retrieve a vectorstore already uploaded in a database | https://api.github.com/repos/langchain-ai/langchain/issues/3191/comments | 9 | 2023-04-20T00:51:32Z | 2024-02-13T06:55:04Z | https://github.com/langchain-ai/langchain/issues/3191 | 1,675,822,065 | 3,191 |
[
"hwchase17",
"langchain"
]
| I'm trying to add a tool with the [OpenAPI chain](https://python.langchain.com/en/latest/modules/chains/examples/openapi.html#openapi-chain), and I'm struggling to get API auth working.
A bit about my use case:
- I want to build a ToolKit that takes a prompt and queries an external API (using multiple endpoints of the same API)
- I ideally want to load an OpenAPI schema from a file so the documentation for the endpoint can be passed to the LLM as context
- I need to specify the BaseURL as it's a multi-tenanted API, so I can't use the Server URL from an OpenAPI spec
- I need to add a basic auth header on each request i.e. `Authorization: Basic <token>`
Is the OpenAPI chain the right tool?
I've tried the [load_from_spec option](https://python.langchain.com/en/latest/modules/chains/examples/openapi.html#construct-the-chain) but it reads the Base URL from the Open API spec. [All the examples in the docs](https://python.langchain.com/en/latest/modules/chains/examples/openapi.html#construct-the-chain) are for public, unauthenticated API calls as well.
I'd be happy to make a PR to update the docs if this functionality is supported but undocumented, or even try updating the OpenAPI tool if you can point me in the right direction. | Question about OpenAPI chain API auth | https://api.github.com/repos/langchain-ai/langchain/issues/3190/comments | 8 | 2023-04-20T00:38:21Z | 2023-12-15T00:48:36Z | https://github.com/langchain-ai/langchain/issues/3190 | 1,675,815,001 | 3,190 |
[
"hwchase17",
"langchain"
]
| Stability AI issues [StableLM](https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b).
And, llama.cpp & his ggml repo is being updated accordingly as [ggerganov](https://github.com/ggerganov/ggml/tree/stablelm/examples/stablelm).
I hope Langchain will support it soon. | Support StableLM | https://api.github.com/repos/langchain-ai/langchain/issues/3189/comments | 1 | 2023-04-20T00:32:59Z | 2023-04-20T00:42:11Z | https://github.com/langchain-ai/langchain/issues/3189 | 1,675,812,005 | 3,189 |
[
"hwchase17",
"langchain"
]
| The following error tells the user to run `pip install bs4`; however, to install BeautifulSoup, one should `pip install beautifulsoup4`.
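A sketch of the wording change being suggested (illustrative, not an actual patch):
```python
# inside BSHTMLLoader.__init__, when importing bs4 fails
raise ValueError(
    "beautifulsoup4 package not found, please install it with "
    "`pip install beautifulsoup4`"
)
```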
Error:
```
File [ENVPATH/lib/python3.10/site-packages/langchain/document_loaders/html_bs.py:26], in BSHTMLLoader.__init__(self, file_path, open_encoding, bs_kwargs)
24 import bs4 # noqa:F401
25 except ImportError:
---> 26 raise ValueError(
27 "bs4 package not found, please install it with " "`pip install bs4`"
28 )
30 self.file_path = file_path
31 self.open_encoding = open_encoding
ValueError: bs4 package not found, please install it with `pip install bs4`
``` | BSHTMLLoader Incorrect error message; bs4 -> beautifulsoup4 | https://api.github.com/repos/langchain-ai/langchain/issues/3188/comments | 1 | 2023-04-20T00:21:09Z | 2023-04-24T19:59:20Z | https://github.com/langchain-ai/langchain/issues/3188 | 1,675,804,485 | 3,188 |
[
"hwchase17",
"langchain"
]
| Hi, I have one question: I want to use search_distance with ConversationalRetrievalChain.
Here is my code:
```
vectordbkwargs = {"search_distance": 0.9}
bot_message = qa.run({"question": history[-1][0], "chat_history": history[:-1], "vectordbkwargs": vectordbkwargs})
```
But I am getting the following error
```
raise ValueError(f"One input key expected got {prompt_input_keys}")
ValueError: One input key expected got ['question', 'vectordbkwargs']
Anybody has any idea what I am doing wrong
```
Am I doing something wrong | Error in running search_distance with ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/3178/comments | 6 | 2023-04-19T21:10:44Z | 2023-11-22T16:09:14Z | https://github.com/langchain-ai/langchain/issues/3178 | 1,675,643,417 | 3,178 |
[
"hwchase17",
"langchain"
]
| null | create_python_agent doesnt return intermediate step | https://api.github.com/repos/langchain-ai/langchain/issues/3177/comments | 1 | 2023-04-19T21:09:07Z | 2023-09-10T16:30:25Z | https://github.com/langchain-ai/langchain/issues/3177 | 1,675,640,827 | 3,177 |
[
"hwchase17",
"langchain"
]
| The following link describes the VectorStoreRetrieverMemory class, which would be extremely useful for referencing an external vector DB and its text/vectors:
https://python.langchain.com/en/latest/modules/memory/types/vectorstore_retriever_memory.html#create-your-the-vectorstoreretrievermemory
However, I'm following the documentation in the following link, and used the import copied from the documentation:
`from langchain.memory import VectorStoreRetrieverMemory`
Here's the error that I'm receiving:
ImportError: cannot import name 'VectorStoreRetrieverMemory' from 'langchain.memory' (D:\Anaconda_3\lib\site-packages\langchain\memory\__init__.py)
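For reference, a quick diagnostic to confirm which langchain version and installation is actually being imported (not a fix, just a check):
```python
import langchain

# If this prints an older release, the pip upgrade did not reach the environment in use
print(langchain.__version__, langchain.__file__)
```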
I've updated langchain as a dependency, and the issue is persisting. Is there a workaround or an issue with langchain? This is also my first github issue, so please let me know if more information is needed. | VectorStoreRetrieverMemory Not Available In Langchain.memory | https://api.github.com/repos/langchain-ai/langchain/issues/3175/comments | 2 | 2023-04-19T21:01:08Z | 2023-05-03T14:28:51Z | https://github.com/langchain-ai/langchain/issues/3175 | 1,675,628,226 | 3,175 |
[
"hwchase17",
"langchain"
]
| When I get a rate limit or API key error, I get the following:
```
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 216, in run
return self(kwargs)[self.output_keys[0]]
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 57, in _call
return self.apply([inputs])[0]
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 118, in apply
response = self.generate(input_list)
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/chains/llm.py", line 62, in generate
return self.llm.generate_prompt(prompts, stop)
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 107, in generate_prompt
return self.generate(prompt_strings, stop=stop)
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 140, in generate
raise e
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 137, in generate
output = self._generate(prompts, stop=stop)
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/llms/base.py", line 324, in _generate
text = self._call(prompt, stop=stop)
File "/Users/vanessacai/workspace/chef/venv/lib/python3.10/site-packages/langchain/llms/anthropic.py", line 184, in _call
return response["completion"]
KeyError: 'completion'
```
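The final line is just a `KeyError` on `completion`; a sketch of the kind of guard that would surface the real API error instead (illustrative only, not the actual library code):
```python
# hypothetical guard before indexing into the Anthropic response
if "completion" not in response:
    raise ValueError(f"Anthropic API returned an error: {response}")
return response["completion"]
```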
This is confusing, and could be a more informative error message. | Anthropic error handling is unclear | https://api.github.com/repos/langchain-ai/langchain/issues/3170/comments | 2 | 2023-04-19T19:43:52Z | 2023-04-20T00:51:18Z | https://github.com/langchain-ai/langchain/issues/3170 | 1,675,517,935 | 3,170 |
[
"hwchase17",
"langchain"
]
| When installing requirements via `poetry install -E all`, I get errors for debugpy and SQLAlchemy:
debugpy:
```
• Installing debugpy (1.6.7): Failed
CalledProcessError
Command '['/Users/cyzanfar/Desktop/llm/langchain/.venv/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/Users/cyzanfar/Desktop/llm/langchain/.venv', '--no-deps', '/Users/cyzanfar/Library/Caches/pypoetry/artifacts/a2/72/f2/f92a409c1ebe3f157f1f797e08448b8b58e6ac55cf7e01d26828907568/debugpy-1.6.7-cp39-cp39-macosx_11_0_x86_64.whl']' returned non-zero exit status 1.
at /opt/anaconda3/lib/python3.9/subprocess.py:528 in run
524│ # We don't call process.wait() as .__exit__ does that for us.
525│ raise
526│ retcode = process.poll()
527│ if check and retcode:
→ 528│ raise CalledProcessError(retcode, process.args,
529│ output=stdout, stderr=stderr)
530│ return CompletedProcess(process.args, retcode, stdout, stderr)
531│
532│
The following error occurred when trying to handle this error:
EnvCommandError
Command ['/Users/cyzanfar/Desktop/llm/langchain/.venv/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/Users/cyzanfar/Desktop/llm/langchain/.venv', '--no-deps', '/Users/cyzanfar/Library/Caches/pypoetry/artifacts/a2/72/f2/f92a409c1ebe3f157f1f797e08448b8b58e6ac55cf7e01d26828907568/debugpy-1.6.7-cp39-cp39-macosx_11_0_x86_64.whl'] errored with the following return code 1
Output:
ERROR: debugpy-1.6.7-cp39-cp39-macosx_11_0_x86_64.whl is not a supported wheel on this platform.
at ~/Library/Application Support/pypoetry/venv/lib/python3.9/site-packages/poetry/utils/env.py:1545 in _run
1541│ return subprocess.call(cmd, stderr=stderr, env=env, **kwargs)
1542│ else:
1543│ output = subprocess.check_output(cmd, stderr=stderr, env=env, **kwargs)
1544│ except CalledProcessError as e:
→ 1545│ raise EnvCommandError(e, input=input_)
1546│
1547│ return decode(output)
1548│
1549│ def execute(self, bin: str, *args: str, **kwargs: Any) -> int:
The following error occurred when trying to handle this error:
PoetryException
Failed to install /Users/cyzanfar/Library/Caches/pypoetry/artifacts/a2/72/f2/f92a409c1ebe3f157f1f797e08448b8b58e6ac55cf7e01d26828907568/debugpy-1.6.7-cp39-cp39-macosx_11_0_x86_64.whl
at ~/Library/Application Support/pypoetry/venv/lib/python3.9/site-packages/poetry/utils/pip.py:58 in pip_install
54│
55│ try:
56│ return environment.run_pip(*args)
57│ except EnvCommandError as e:
→ 58│ raise PoetryException(f"Failed to install {path.as_posix()}") from e
59│
```
SQLAlchemy:
```
• Installing sqlalchemy (1.4.47): Failed
CalledProcessError
Command '['/Users/cyzanfar/Desktop/llm/langchain/.venv/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/Users/cyzanfar/Desktop/llm/langchain/.venv', '--no-deps', '/Users/cyzanfar/Library/Caches/pypoetry/artifacts/ee/bd/08/6d08c28abb942c2089808a2dbc720d1ee4d8a7260724e7fc5cbaeba134/SQLAlchemy-1.4.47-cp39-cp39-macosx_11_0_x86_64.whl']' returned non-zero exit status 1.
at /opt/anaconda3/lib/python3.9/subprocess.py:528 in run
524│ # We don't call process.wait() as .__exit__ does that for us.
525│ raise
526│ retcode = process.poll()
527│ if check and retcode:
→ 528│ raise CalledProcessError(retcode, process.args,
529│ output=stdout, stderr=stderr)
530│ return CompletedProcess(process.args, retcode, stdout, stderr)
531│
532│
The following error occurred when trying to handle this error:
EnvCommandError
Command ['/Users/cyzanfar/Desktop/llm/langchain/.venv/bin/python', '-m', 'pip', 'install', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/Users/cyzanfar/Desktop/llm/langchain/.venv', '--no-deps', '/Users/cyzanfar/Library/Caches/pypoetry/artifacts/ee/bd/08/6d08c28abb942c2089808a2dbc720d1ee4d8a7260724e7fc5cbaeba134/SQLAlchemy-1.4.47-cp39-cp39-macosx_11_0_x86_64.whl'] errored with the following return code 1
Output:
ERROR: SQLAlchemy-1.4.47-cp39-cp39-macosx_11_0_x86_64.whl is not a supported wheel on this platform.
at ~/Library/Application Support/pypoetry/venv/lib/python3.9/site-packages/poetry/utils/env.py:1545 in _run
1541│ return subprocess.call(cmd, stderr=stderr, env=env, **kwargs)
1542│ else:
1543│ output = subprocess.check_output(cmd, stderr=stderr, env=env, **kwargs)
1544│ except CalledProcessError as e:
→ 1545│ raise EnvCommandError(e, input=input_)
1546│
1547│ return decode(output)
1548│
1549│ def execute(self, bin: str, *args: str, **kwargs: Any) -> int:
The following error occurred when trying to handle this error:
PoetryException
Failed to install /Users/cyzanfar/Library/Caches/pypoetry/artifacts/ee/bd/08/6d08c28abb942c2089808a2dbc720d1ee4d8a7260724e7fc5cbaeba134/SQLAlchemy-1.4.47-cp39-cp39-macosx_11_0_x86_64.whl
at ~/Library/Application Support/pypoetry/venv/lib/python3.9/site-packages/poetry/utils/pip.py:58 in pip_install
54│
55│ try:
56│ return environment.run_pip(*args)
57│ except EnvCommandError as e:
→ 58│ raise PoetryException(f"Failed to install {path.as_posix()}") from e
59│
``` | poetry install -E all on M1 13.2.1 | https://api.github.com/repos/langchain-ai/langchain/issues/3169/comments | 2 | 2023-04-19T19:31:31Z | 2023-09-10T16:30:30Z | https://github.com/langchain-ai/langchain/issues/3169 | 1,675,500,196 | 3,169 |
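Both failures report that `macosx_11_0_x86_64` wheels are "not a supported wheel on this platform", which usually points at a mismatch between the architecture of the interpreter Poetry is using and the wheel tag (for example an arm64 Python being offered x86_64 wheels, or vice versa). A small diagnostic sketch — not a confirmed root cause — run with the project's `.venv/bin/python`:

```python
# Diagnostic sketch: print the architecture/platform of the interpreter that
# Poetry installs into. If it reports arm64 while the failing wheels are tagged
# x86_64 (as in the log above), the interpreter and the wheels don't match.
import platform
import sys

print(platform.machine())    # e.g. 'arm64' or 'x86_64'
print(platform.platform())
print(sys.version)
```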
[
"hwchase17",
"langchain"
]
| I'm exploring [this great notebook](https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html) and trying to produce something similar, however I get an error when `fetch_memories` calls `self.memory_retriever.get_relevant_documents(observation)`:
```
AttributeError: 'FAISS' object has no attribute 'similarity_search_with_relevance_scores'
```
I'm running `langchain` `0.0.144` and I've installed `faiss` using `conda install -c conda-forge faiss-cpu` (as indicated [here](https://github.com/facebookresearch/faiss/blob/main/INSTALL.md)) which installed version `1.7.2`. Also running python `3.10`.
Poking inside `faiss.py` I indeed can't find a method called `similarity_search_with_relevance_scores`, only one called `_similarity_search_with_relevance_scores`.
Which version(s) of `langchain` and `faiss` should I be running for this to work?
Any help would be greatly appreciated, thanks 🙏 | Error using TimeWeightedVectorStoreRetriever.get_relevant_documents with FAISS: 'FAISS' object has no attribute 'similarity_search_with_relevance_scores' | https://api.github.com/repos/langchain-ai/langchain/issues/3167/comments | 6 | 2023-04-19T19:21:29Z | 2023-09-18T09:55:40Z | https://github.com/langchain-ai/langchain/issues/3167 | 1,675,484,274 | 3,167 |
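For reference, a quick way to see which search methods the installed FAISS wrapper actually exposes (a diagnostic sketch; the retriever calls `similarity_search_with_relevance_scores`, which newer langchain releases add on the base vector store, so an upgrade is the usual fix — treat that as a hedged suggestion):

```python
# Diagnostic sketch: list the similarity-search methods on the FAISS wrapper in
# the installed langchain release. If only the underscored
# _similarity_search_with_relevance_scores shows up, try: pip install -U langchain
from langchain.vectorstores import FAISS

print([name for name in dir(FAISS) if "similarity_search" in name])
```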
[
"hwchase17",
"langchain"
]
I am trying to extend the BabyAGI-with-agents example from this repo so that it reads a file, optimizes it, and writes the result back to a file.
I added the following tools (as in the AutoGPT example) alongside search and todo:
```python
WriteFileTool(),
ReadFileTool(),
```
The prompt is to "Write the optimized code into a file".
ReadFileTool is working fine...
It looks like a required parameter is missing. Do I need to pass something extra to the tool, or what am I doing wrong?
I get the following error:
```
Traceback (most recent call last):
File "baby_agi_with_agent.py", line 136, in <module>
baby_agi({"objective": OBJECTIVE})
File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/experimental/autonomous_agents/baby_agi/baby_agi.py", line 130, in _call
result = self.execute_task(objective, task["task_name"])
File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/experimental/autonomous_agents/baby_agi/baby_agi.py", line 111, in execute_task
return self.execution_chain.run(
File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/chains/base.py", line 216, in run
return self(kwargs)[self.output_keys[0]]
File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/agents/agent.py", line 792, in _call
next_step_output = self._take_next_step(
File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/agents/agent.py", line 695, in _take_next_step
observation = tool.run(
File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/tools/base.py", line 90, in run
self._parse_input(tool_input)
File "/home/dev/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/tools/base.py", line 58, in _parse_input
input_args.validate({key_: tool_input})
File "pydantic/main.py", line 711, in pydantic.main.BaseModel.validate
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for WriteFileInput
text
field required (type=value_error.missing)
```
| File cannot be written: WriteFileTool() throws validation error for WriteFileInput text | https://api.github.com/repos/langchain-ai/langchain/issues/3165/comments | 6 | 2023-04-19T18:21:47Z | 2023-09-28T16:07:55Z | https://github.com/langchain-ai/langchain/issues/3165 | 1,675,398,921 | 3,165 |
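For what it's worth, the validation error says `WriteFileInput` requires a `text` field, i.e. the tool takes two arguments (a path and the content) while the agent hands it a single string. A hedged sketch of calling the tool directly with both fields — the field names are inferred from the error above, not verified against the source:

```python
# Sketch: invoke WriteFileTool with an explicit dict so both required fields are
# supplied. The "file_path"/"text" names are inferred from the validation error.
from langchain.tools.file_management.write import WriteFileTool  # import path as in the AutoGPT example

tool = WriteFileTool()
result = tool.run({"file_path": "optimized.py", "text": "# optimized code here\n"})
print(result)
```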
[
"hwchase17",
"langchain"
]
| Just wanted to flag this. Not sure if it's a me problem or if there's an issue elsewhere. Searched the issues for the same problem and couldn't find it.
 | Docs Chat Rate Limited | https://api.github.com/repos/langchain-ai/langchain/issues/3162/comments | 1 | 2023-04-19T17:07:40Z | 2023-09-10T16:30:36Z | https://github.com/langchain-ai/langchain/issues/3162 | 1,675,301,097 | 3,162 |
[
"hwchase17",
"langchain"
]
| The console output when running a tool is missing the "Observation" and "Thought" prefixes.
I noticed this when using the SQL Toolkit, but other tools are likely affected.
Here is the current INCORRECT output format:
```
> Entering new AgentExecutor chain...
Action: list_tables_sql_db
Action Input: ""invoice_items, invoices, tracks, sqlite_sequence, employees, media_types, sqlite_stat1, customers, playlists, playlist_track, albums, genres, artistsThere is a table called "employees" that I can query.
Action: schema_sql_db
Action Input: "employees"
```
Here is the expected output format:
```
> Entering new AgentExecutor chain...
Action: list_tables_sql_db
Action Input: ""
Observation: invoice_items, invoices, tracks, sqlite_sequence, employees, media_types, sqlite_stat1, customers, playlists, playlist_track, albums, genres, artists
Thought:There is a table called "employees" that I can query.
Action: schema_sql_db
Action Input: "employees"
```
Note: this appears to only affect the console output. The `agent_scratchpad` is updated correctly with the "Observation" and "Thought" prefixes. | Missing Observation and Thought prefix in output | https://api.github.com/repos/langchain-ai/langchain/issues/3157/comments | 4 | 2023-04-19T15:15:26Z | 2023-04-20T16:46:28Z | https://github.com/langchain-ai/langchain/issues/3157 | 1,675,129,012 | 3,157 |
[
"hwchase17",
"langchain"
]
| I am considering implementing a new tool to give LLMs the ability to send SMS text using the Twilio API. @hwchase17 is this worth implementing? if so I'll submit a PR shortly. | New Tool: Twilio | https://api.github.com/repos/langchain-ai/langchain/issues/3156/comments | 7 | 2023-04-19T14:37:14Z | 2023-09-27T16:07:56Z | https://github.com/langchain-ai/langchain/issues/3156 | 1,675,043,935 | 3,156 |
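A rough sketch of what such a tool could look like — only an illustration of the shape, using the standard `twilio` client and a plain `Tool` wrapper; the tool name `send_sms` and the environment variable names are made up here:

```python
# Illustrative sketch of a Twilio SMS tool; credentials and numbers are
# placeholders, and the wrapper shape is an assumption, not the eventual API.
import os
from twilio.rest import Client
from langchain.agents import Tool

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

def send_sms(message: str) -> str:
    msg = client.messages.create(
        body=message,
        from_=os.environ["TWILIO_FROM_NUMBER"],
        to=os.environ["TWILIO_TO_NUMBER"],
    )
    return f"Sent message {msg.sid}"

twilio_tool = Tool(
    name="send_sms",
    func=send_sms,
    description="Sends an SMS with the given text via Twilio.",
)
```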
[
"hwchase17",
"langchain"
]
For writing more abstracted code, the variable `model_name` in langchain.chat_models and the variable `model` in langchain.llms should expose the model version in the same way. Therefore, both variables should have the same name. | [feat] the variable model_name from langchain.chat_model should have the same name as the variable model from langchain.llms | https://api.github.com/repos/langchain-ai/langchain/issues/3154/comments | 1 | 2023-04-19T13:51:42Z | 2023-09-10T16:30:41Z | https://github.com/langchain-ai/langchain/issues/3154 | 1,674,945,820 | 3,154 |
[
"hwchase17",
"langchain"
]
| I tried to merge two FAISS indices:
```python
index1 = FAISS.from_documents(doc1, embeddings)
index2 = FAISS.from_documents(doc2, embeddings)
```
After I do `index1.merge_from(index2)` and then run
```python
ret = index2.as_retriever()
ret.get_relevant_documents(query)
```
it returns an empty list `[]`.
If index1 is saved with `save_local`, can I add index2 without loading index1 into memory, perhaps via something like a `merge_local`? (open index.faiss and index.pkl and append line by line)
| does merging index1.merge_from(index2) dump index2? | https://api.github.com/repos/langchain-ai/langchain/issues/3152/comments | 2 | 2023-04-19T11:17:25Z | 2023-07-21T15:11:27Z | https://github.com/langchain-ai/langchain/issues/3152 | 1,674,697,052 | 3,152 |
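For reference, a minimal sketch of the expected flow, assuming `merge_from` folds `index2`'s vectors into `index1` (so queries should go against the merged `index1`; whether `index2` remains usable afterwards is exactly the open question above):

```python
# Sketch: merge index2 into index1 and query the merged index afterwards.
# doc1, doc2, embeddings and query are as defined in the issue above.
from langchain.vectorstores import FAISS

index1 = FAISS.from_documents(doc1, embeddings)
index2 = FAISS.from_documents(doc2, embeddings)

index1.merge_from(index2)

retriever = index1.as_retriever()
print(retriever.get_relevant_documents(query))
```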
[
"hwchase17",
"langchain"
]
| I'm talking about langchain, langchain-backend and langchain-frontend here.
https://python.langchain.com/en/latest/tracing/local_installation.html
This could give people a starting point if they want to use langchain with tracing. I'll share mine here:
```
version: '3.7'
services:
your-app-name-containing-langchain:
container_name: your-app-name-containing-langchain
image: your/image
ports:
- 5000:5000
build:
context: .
dockerfile: Dockerfile
env_file: .env
volumes:
- ./:/app
langchain-frontend:
container_name: langchain-frontend
image: notlangchain/langchainplus-frontend:latest
ports:
- 4173:4173
environment:
- BACKEND_URL=http://langchain-backend:8000
- PUBLIC_BASE_URL=http://localhost:8000
- PUBLIC_DEV_MODE=true
depends_on:
- langchain-backend
langchain-backend:
container_name: langchain-backend
image: notlangchain/langchainplus:latest
environment:
- PORT=8000
- LANGCHAIN_ENV=local
ports:
- 8000:8000
depends_on:
- langchain-db
langchain-db:
container_name: langchain-db
image: postgres:14.1
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_USER=postgres
- POSTGRES_DB=postgres
ports:
- 5432:5432
``` | Thought about a docker-compose file to run langchain's whole ecosystem? | https://api.github.com/repos/langchain-ai/langchain/issues/3150/comments | 6 | 2023-04-19T10:33:28Z | 2023-09-24T16:09:43Z | https://github.com/langchain-ai/langchain/issues/3150 | 1,674,632,635 | 3,150 |
[
"hwchase17",
"langchain"
]
| I would like to implement the following feature.
### Description:
Currently the sitemap loader (at `document_loaders.sitemap`) only works if the sitemap URL is passed directly as web_path. I propose improving the sitemap loader by enabling it to automatically discover sitemap URLs given any website URL (not necessarily the root URL). Typically, sitemap URLs can be found in robots.txt or in the homepage HTML as an href attribute.
The new feature would involve:
- Checking the robots.txt file for sitemap URLs.
- If not found in robots.txt, searching the homepage HTML for href attributes containing sitemap URLs.

If you believe this feature would be useful and beneficial for the project, please let me know, and I can submit a PR.
Best,
Pi | FEAT: Extend SitemapLoader to automatically discover sitemap URLs. | https://api.github.com/repos/langchain-ai/langchain/issues/3149/comments | 1 | 2023-04-19T10:19:24Z | 2023-09-10T16:30:46Z | https://github.com/langchain-ai/langchain/issues/3149 | 1,674,612,447 | 3,149 |
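A rough sketch of the discovery step being proposed in the issue above, using `requests` and BeautifulSoup; the URL handling and tag matching here are simplified assumptions, not the loader's actual implementation:

```python
# Sketch of sitemap discovery: check robots.txt first, then fall back to
# scanning the homepage for a sitemap link. Real code would need better URL
# normalization and error handling.
from typing import Optional
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def discover_sitemap_url(url: str) -> Optional[str]:
    """Best-effort sitemap discovery for any page on a site."""
    root = urljoin(url, "/")
    robots = requests.get(urljoin(root, "robots.txt"), timeout=10)
    if robots.ok:
        for line in robots.text.splitlines():
            if line.lower().startswith("sitemap:"):
                return line.split(":", 1)[1].strip()
    homepage = requests.get(root, timeout=10)
    soup = BeautifulSoup(homepage.text, "html.parser")
    for tag in soup.find_all(["link", "a"], href=True):
        if "sitemap" in tag["href"].lower():
            return urljoin(root, tag["href"])
    return None
```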
[
"hwchase17",
"langchain"
]
I have a gpt-3.5-turbo model deployed on Azure OpenAI; however, I keep getting this error.
**openai.error.InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.**
```python
index_creator = VectorstoreIndexCreator(
    embedding=OpenAIEmbeddings(
        openai_api_key=openai_api_key,
        model="gpt-3.5-turbo",
        chunk_size=1,
    )
)

indexed_document = index_creator.from_loaders([file_loader])

chain = RetrievalQA.from_chain_type(
    llm=AzureOpenAI(
        openai_api_key=openai_api_key,
        deployment_name="gpt_35_turbo",
        model_name="gpt-3.5-turbo",
    ),
    chain_type="stuff",
    retriever=indexed_document.vectorstore.as_retriever(),
    input_key="user_prompt",
    return_source_documents=True,
)
```
`open_ai_response = chain({"user_prompt": query_param})` | Unable to use gpt-3.5-turbo deployed on Azure OpenAI with langchain embeddings. | https://api.github.com/repos/langchain-ai/langchain/issues/3148/comments | 10 | 2023-04-19T10:00:10Z | 2023-08-10T14:50:58Z | https://github.com/langchain-ai/langchain/issues/3148 | 1,674,581,862 | 3,148 |
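One hedged observation (it may not be the whole story): `OpenAIEmbeddings` calls an embeddings deployment, so pointing it at a chat model like gpt-3.5-turbo targets a deployment that doesn't exist for embeddings on Azure. A sketch of the usual split, with placeholder deployment names and the `deployment` kwarg assumed to be supported by the installed release:

```python
# Sketch: use a dedicated embeddings deployment (e.g. text-embedding-ada-002)
# for OpenAIEmbeddings and keep the chat/completions deployment for the LLM.
# Names below are placeholders; double-check the kwargs against your release.
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(
    openai_api_key=openai_api_key,
    deployment="text-embedding-ada-002",  # your Azure embeddings deployment name
    chunk_size=1,
)
```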
[
"hwchase17",
"langchain"
]
| Recently I got a strange error when using FAISS `similarity_search_with_score_by_vector`. The line (https://github.com/hwchase17/langchain/blob/575b717d108984676e25afd0910ccccfdaf9693d/langchain/vectorstores/faiss.py#L170) generates errors:
```
TypeError: IndexFlat.search() missing 3 required positional arguments: 'k', 'distances', and 'labels'
```
It looks like FAISS's own `search` function has different arguments (using `IndexFlat`):
```
search(self, n, x, k, distances, labels)
```
But `similarity_search_with_score_by_vector` worked one day ago. So are there any hints for this? | FAISS similarity search issue | https://api.github.com/repos/langchain-ai/langchain/issues/3147/comments | 11 | 2023-04-19T09:55:13Z | 2023-06-11T23:46:27Z | https://github.com/langchain-ai/langchain/issues/3147 | 1,674,572,859 | 3,147 |
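For comparison, the call shape that the langchain wrapper relies on is the high-level numpy-friendly binding shown below. If this standalone snippet raises the same TypeError, the installed faiss build only exposes the low-level `search(n, x, k, distances, labels)` signature, which would point at a faiss packaging/version problem rather than a langchain change — a hedged guess, not a confirmed diagnosis:

```python
# Sketch: the high-level faiss call used by similarity_search_with_score_by_vector.
import faiss
import numpy as np

dim = 8
index = faiss.IndexFlatL2(dim)
index.add(np.random.rand(16, dim).astype("float32"))

query = np.random.rand(1, dim).astype("float32")
scores, ids = index.search(query, 4)
print(faiss.__version__, scores, ids)
```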
[
"hwchase17",
"langchain"
]
| Hi, I'm trying to implement the memory stream in <[Generative Agents: Interactive Simulacra of Human Behavior](https://arxiv.org/abs/2304.03442)>.
This memory model is built on top of a concept called an `observation`, which is something like `Lily is watching a movie` or `desk is idle`.
I think the closest concepts to `observation` in LangChain are entity memory and KG triples. All three describe a simple fact about an entity.
But what confuses me is that, in theory, a KG also focuses on entities (vertices). It seems a KG can be seen as a storage form for the entity memory. In LangChain, they differ in the prompt template used for info extraction, which affects how the LLM extracts the info. Once the info has been extracted, the format of the text used to present it to the LLM is also nearly identical.
So my questions are: what is the intended positioning of entity memory vs. KG memory, and if I try to implement the memory stream, is it appropriate to reuse one of these two to generate `observation`s?
| Implementing the memory stream in <Generative Agents: Interactive Simulacra of Human Behavior> | https://api.github.com/repos/langchain-ai/langchain/issues/3145/comments | 1 | 2023-04-19T09:28:23Z | 2023-09-10T16:30:51Z | https://github.com/langchain-ai/langchain/issues/3145 | 1,674,529,524 | 3,145 |
[
"hwchase17",
"langchain"
]
| ```
from langchain import OpenAI
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.prompts import PromptTemplate
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=rds.as_retriever())
```
While running the above code, it outputs the following error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[14], line 5
4 from langchain.chains.qa_with_sources import load_qa_with_sources_chain
----> 5 chain = RetrievalQAWithSourcesChain.from_chain_type(OpenAI(temperature=0), chain_type="stuff", retriever=rds.as_retriever())
File ~/opt/anaconda3/lib/python3.10/site-packages/langchain/chains/qa_with_sources/base.py:76, in BaseQAWithSourcesChain.from_chain_type(cls, llm, chain_type, chain_type_kwargs, **kwargs)
74 """Load chain from chain type."""
75 _chain_kwargs = chain_type_kwargs or {}
---> 76 combine_document_chain = load_qa_with_sources_chain(
77 llm, chain_type=chain_type, **_chain_kwargs
78 )
79 return cls(combine_documents_chain=combine_document_chain, **kwargs)
File ~/opt/anaconda3/lib/python3.10/site-packages/langchain/chains/qa_with_sources/loading.py:171, in load_qa_with_sources_chain(llm, chain_type, verbose, **kwargs)
166 raise ValueError(
167 f"Got unsupported chain type: {chain_type}. "
168 f"Should be one of {loader_mapping.keys()}"
169 )
170 _func: LoadingCallable = loader_mapping[chain_type]
--> 171 return _func(llm, verbose=verbose, **kwargs)
File ~/opt/anaconda3/lib/python3.10/site-packages/langchain/chains/qa_with_sources/loading.py:56, in _load_stuff_chain(llm, prompt, document_prompt, document_variable_name, verbose, **kwargs)
48 def _load_stuff_chain(
49 llm: BaseLanguageModel,
50 prompt: BasePromptTemplate = stuff_prompt.PROMPT,
(...)
54 **kwargs: Any,
55 ) -> StuffDocumentsChain:
---> 56 llm_chain = LLMChain(llm=llm, prompt=prompt, verbose=verbose)
57 return StuffDocumentsChain(
58 llm_chain=llm_chain,
59 document_variable_name=document_variable_name,
(...)
62 **kwargs,
63 )
File ~/opt/anaconda3/lib/python3.10/site-packages/pydantic/main.py:339, in pydantic.main.BaseModel.__init__()
File ~/opt/anaconda3/lib/python3.10/site-packages/pydantic/main.py:1076, in pydantic.main.validate_model()
File ~/opt/anaconda3/lib/python3.10/site-packages/pydantic/fields.py:867, in pydantic.fields.ModelField.validate()
File ~/opt/anaconda3/lib/python3.10/site-packages/pydantic/fields.py:1151, in pydantic.fields.ModelField._apply_validators()
File ~/opt/anaconda3/lib/python3.10/site-packages/pydantic/class_validators.py:304, in pydantic.class_validators._generic_validator_cls.lambda4()
File ~/opt/anaconda3/lib/python3.10/site-packages/langchain/chains/base.py:57, in Chain.set_verbose(cls, verbose)
52 """If verbose is None, set it.
53
54 This allows users to pass in None as verbose to access the global setting.
55 """
56 if verbose is None:
---> 57 return _get_verbosity()
58 else:
59 return verbose
File ~/opt/anaconda3/lib/python3.10/site-packages/langchain/chains/base.py:17, in _get_verbosity()
16 def _get_verbosity() -> bool:
---> 17 return langchain.verbose
AttributeError: module 'langchain' has no attribute 'verbose'
```
However, when I run with langchain==0.0.142, this error doesn't occur.
| Error occured after updating langchain version to the latest 0.0.144 for the example code using RetrievalQAWithSourcesChain | https://api.github.com/repos/langchain-ai/langchain/issues/3144/comments | 2 | 2023-04-19T09:26:44Z | 2024-05-21T06:02:37Z | https://github.com/langchain-ai/langchain/issues/3144 | 1,674,526,880 | 3,144 |
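A workaround that has been reported for this AttributeError (hedged, since the root cause looks like a partially updated install rather than an intended requirement) is to set the module-level flag explicitly before building the chain, or to reinstall langchain cleanly:

```python
# Workaround sketch: define the module-level flag that chains/base.py expects.
# If this is needed at all, the install is likely mixing files from two
# releases, and `pip install --force-reinstall langchain` is the cleaner fix.
import langchain

langchain.verbose = False
```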
[
"hwchase17",
"langchain"
]
| null | How to change the max_token for different requests | https://api.github.com/repos/langchain-ai/langchain/issues/3138/comments | 1 | 2023-04-19T07:05:24Z | 2023-09-10T16:31:01Z | https://github.com/langchain-ai/langchain/issues/3138 | 1,674,296,816 | 3,138 |
[
"hwchase17",
"langchain"
]
| ### Discussed in https://github.com/hwchase17/langchain/discussions/3132
<div type='discussions-op-text'>
<sup>Originally posted by **srithedesigner** April 19, 2023</sup>
We used to use the AzureOpenAI LLM from langchain.llms with the text-davinci-003 model, but after deploying GPT-4 in Azure and trying to use it, this error is thrown:
`Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>`
This is how we are initializing the model:
```python
model = AzureOpenAI(
streaming=streaming,
client=openai.ChatCompletion(),
callback_manager=callback,
deployment_name= "gpt4",
model_name="gpt-4-32k",
openai_api_key=env.cloud.openai_api_key,
temperature=temperature,
max_tokens=max_tokens,
verbose=verbose,
)
```
How do we use GPT-4 with the AzureOpenAI chain? Is it currently supported, or are we initializing it wrong?
</div> | Not able to use GPT4 model with AzureOpenAI from from langchain.llms | https://api.github.com/repos/langchain-ai/langchain/issues/3137/comments | 12 | 2023-04-19T06:53:50Z | 2023-10-16T19:48:27Z | https://github.com/langchain-ai/langchain/issues/3137 | 1,674,281,383 | 3,137 |
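One hedged suggestion for the issue above: GPT-4 on Azure is a chat-completions model, so the chat wrapper may be the right entry point rather than the completions-style `AzureOpenAI` LLM. A sketch using `AzureChatOpenAI`; the class and parameter names are the commonly documented ones and should be double-checked against the installed release:

```python
# Sketch: use the chat wrapper for a GPT-4 chat deployment on Azure.
# deployment_name must match the Azure deployment; api base/version come from
# the Azure resource. Values below reuse placeholders from the issue.
from langchain.chat_models import AzureChatOpenAI

model = AzureChatOpenAI(
    deployment_name="gpt4",
    openai_api_base="https://<your-resource>.openai.azure.com/",
    openai_api_version="2023-03-15-preview",
    openai_api_key=env.cloud.openai_api_key,
    temperature=temperature,
    max_tokens=max_tokens,
)
```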
[
"hwchase17",
"langchain"
]
I'm trying to integrate GPT-4 on Azure OpenAI with LangChain, but when I try to use it inside ConversationalRetrievalChain it throws the following error:
```
    raise error.InvalidRequestError(
openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>
```
But if I run a standalone openai instance with the Azure OpenAI config, it works. I'm confused about whether LangChain supports GPT-4 or whether I am missing something. | InvalidRequestError Must provide an engine ( gpt 4 azure open ai ) | https://api.github.com/repos/langchain-ai/langchain/issues/3134/comments | 3 | 2023-04-19T05:57:43Z | 2023-09-24T16:09:53Z | https://github.com/langchain-ai/langchain/issues/3134 | 1,674,221,239 | 3,134 |
[
"hwchase17",
"langchain"
]
| Python 3.11.1
langchain==0.0.143
llama-cpp-python==0.1.34
Model works when I use Dalai. Also happens with Llama 7B.
Code here (from [langchain documentation](https://python.langchain.com/en/latest/modules/models/llms/integrations/llamacpp.html)):
```
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
import os
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
file_path = os.path.abspath("<my path>/dalai/alpaca/models/7B/ggml-model-q4_0.bin")
llm = LlamaCpp(model_path=file_path, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm, verbose=True)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
```
Output:
```
llama.cpp: loading model from <my_path>
llama.cpp: can't use mmap because tensors are not aligned; convert to new format to avoid this
llama_model_load_internal: format = 'ggml' (old version with low tokenizer quality and no mmap support)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 4113739.11 KB
llama_model_load_internal: mem required = 5809.32 MB (+ 2052.00 MB per state)
...................................................................................................
.
llama_init_from_file: kv self size = 512.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
> Entering new LLMChain chain...
Prompt after formatting:
Question: What NFL team won the Super Bowl in the year Justin Bieber was born?
Answer: Let's think step by step.
llama_print_timings: load time = 906.87 ms
llama_print_timings: sample time = 173.02 ms / 123 runs ( 1.41 ms per run)
llama_print_timings: prompt eval time = 3821.62 ms / 34 tokens ( 112.40 ms per token)
llama_print_timings: eval time = 24772.41 ms / 122 runs ( 203.05 ms per run)
llama_print_timings: total time = 28788.58 ms
> Finished chain.
```
What could be causing this? It seems like the model loads properly and does something. Thanks!
| Alpaca 7B loads with LlamaCpp, no response from model | https://api.github.com/repos/langchain-ai/langchain/issues/3129/comments | 1 | 2023-04-19T04:56:54Z | 2023-04-19T06:45:04Z | https://github.com/langchain-ai/langchain/issues/3129 | 1,674,171,032 | 3,129 |
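One simple thing to rule out first (hedged, since it may not be the actual cause): `run()` returns the completion as a string rather than printing it, and the timings above show roughly 122 tokens were generated, so in a plain script the output may simply never be displayed:

```python
# Sketch: capture and print the chain's return value; in a non-interactive
# script nothing is shown unless the result is printed explicitly.
# llm_chain and question are as defined in the code above.
result = llm_chain.run(question)
print(result)
```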
[
"hwchase17",
"langchain"
]
I'm trying to use the WebBrowserTool and I get a forbidden (403) response on some sites. I tried adding a user agent and also using proxies, but I still get 403. I tried sending the headers directly in the tool constructor and in the axios config.
```
const headers = {
'user-agent':
'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36',
'upgrade-insecure-requests': '1',
'accept':
'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3',
'accept-encoding': 'gzip, deflate, br',
'accept-language': 'en-US,en;q=0.9,en;q=0.8',
};
new WebBrowser({
model,
headers,
embeddings,
axiosConfig: {
headers
},
})
``` | WebBrowserTool 403 error | https://api.github.com/repos/langchain-ai/langchain/issues/3118/comments | 0 | 2023-04-18T22:48:32Z | 2023-04-18T23:24:13Z | https://github.com/langchain-ai/langchain/issues/3118 | 1,673,926,522 | 3,118 |
[
"hwchase17",
"langchain"
]
I'm not sure if it's a typo or not, but the default prompt in [langchain](https://github.com/hwchase17/langchain/tree/master/langchain)/[langchain](https://github.com/hwchase17/langchain/tree/master/langchain)/[chains](https://github.com/hwchase17/langchain/tree/master/langchain/chains)/[summarize](https://github.com/hwchase17/langchain/tree/master/langchain/chains/summarize)/[refine_prompts.py](https://github.com/hwchase17/langchain/tree/master/langchain/chains/summarize/refine_prompts.py) seems to be missing a space or a `\n`.
```
REFINE_PROMPT_TMPL = (
"Your job is to produce a final summary\n"
"We have provided an existing summary up to a certain point: {existing_answer}\n"
"We have the opportunity to refine the existing summary"
"(only if needed) with some more context below.\n"
"------------\n"
"{text}\n"
"------------\n"
"Given the new context, refine the original summary"
"If the context isn't useful, return the original summary."
)
```
It will produce `refine the original summaryIf the context isn't useful` and `existing summary(only if needed)`
I could probably fix it with a PR (if it's unintentional), but I'd prefer to let someone more competent do it, as I'm not used to creating PRs in large projects like this. | Missing new lines or empty spaces in refine default prompt. | https://api.github.com/repos/langchain-ai/langchain/issues/3117/comments | 4 | 2023-04-18T22:32:58Z | 2023-08-31T14:29:51Z | https://github.com/langchain-ai/langchain/issues/3117 | 1,673,914,308 | 3,117 |
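For clarity, here is what the template would look like with the missing separators added — just the fix implied by the issue above, with the exact whitespace choices being a suggestion:

```python
# Suggested fix: add the missing space/newline so sentences don't run together.
REFINE_PROMPT_TMPL = (
    "Your job is to produce a final summary\n"
    "We have provided an existing summary up to a certain point: {existing_answer}\n"
    "We have the opportunity to refine the existing summary "
    "(only if needed) with some more context below.\n"
    "------------\n"
    "{text}\n"
    "------------\n"
    "Given the new context, refine the original summary.\n"
    "If the context isn't useful, return the original summary."
)
```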
[
"hwchase17",
"langchain"
]
| I run the following code:
```py
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
AIMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
gpt_4 = ChatOpenAI(model_name="gpt-4", streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)
template="You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
chain = LLMChain(llm=gpt_4, prompt=prompt)
from langchain.callbacks import get_openai_callback
with get_openai_callback() as cb:
text = "How are you?"
res = chain.run(text=text)
print(cb)
```
However, when I print the callback value, it reports that I used 0 tokens, even though I know I used some.
```
I'm an AI language model, so I don't have feelings or emotions like humans do. However, I'm here to help you with any questions or information you need. What can I help you with today?Tokens Used: 0
Prompt Tokens: 0
Completion Tokens: 0
Successful Requests: 0
Total Cost (USD): $0.0
```
Am I doing something wrong, or is this an issue? | get_openai_callback doesn't return the credits for ChatGPT chain | https://api.github.com/repos/langchain-ai/langchain/issues/3114/comments | 22 | 2023-04-18T21:28:20Z | 2024-02-19T08:46:12Z | https://github.com/langchain-ai/langchain/issues/3114 | 1,673,855,423 | 3,114 |
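A hedged observation on the issue above: with `streaming=True`, the OpenAI API does not include a `usage` block in the streamed response, so the callback has nothing to count. Turning streaming off for calls where cost tracking matters is one way to confirm this:

```python
# Sketch: the same chain without streaming; if token counts show up here, the
# zeros above come from streamed responses carrying no usage data.
# ChatOpenAI, LLMChain and prompt are as defined in the code above.
from langchain.callbacks import get_openai_callback

non_streaming_llm = ChatOpenAI(model_name="gpt-4", temperature=0)
chain_ns = LLMChain(llm=non_streaming_llm, prompt=prompt)

with get_openai_callback() as cb:
    chain_ns.run(text="How are you?")
    print(cb)
```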
[
"hwchase17",
"langchain"
]
| langchain was installed via `pip`
```
Traceback (most recent call last):
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/test.py", line 1, in <module>
from langchain.agents import load_tools, initialize_agent, AgentType
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/agents/agent.py", line 17, in <module>
from langchain.chains.base import Chain
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/chains/__init__.py", line 2, in <module>
from langchain.chains.api.base import APIChain
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/chains/api/base.py", line 8, in <module>
from langchain.chains.api.prompt import API_RESPONSE_PROMPT, API_URL_PROMPT
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/chains/api/prompt.py", line 2, in <module>
from langchain.prompts.prompt import PromptTemplate
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/prompts/__init__.py", line 3, in <module>
from langchain.prompts.chat import (
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/prompts/chat.py", line 10, in <module>
from langchain.memory.buffer import get_buffer_string
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/memory/__init__.py", line 11, in <module>
from langchain.memory.entity import (
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/memory/entity.py", line 8, in <module>
from langchain.chains.llm import LLMChain
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/chains/llm.py", line 11, in <module>
from langchain.prompts.prompt import PromptTemplate
File "/Users/oleksandrdanshyn/github.com/dalazx/langchain/langchain/prompts/prompt.py", line 8, in <module>
from jinja2 import Environment, meta
ModuleNotFoundError: No module named 'jinja2'
```
since `prompt.py` is one of the fundamental modules and it unconditionally imports `jinja2`, `jinja2` should probably be added to the list of required dependencies. | jinja2 is not optional | https://api.github.com/repos/langchain-ai/langchain/issues/3113/comments | 4 | 2023-04-18T21:19:47Z | 2023-09-24T16:09:58Z | https://github.com/langchain-ai/langchain/issues/3113 | 1,673,847,319 | 3,113 |
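In the meantime, `pip install jinja2` unblocks the import. As an illustration of how the dependency could be made genuinely optional (a sketch of the general pattern, not the project's actual fix), the import can be deferred until a Jinja2 template is actually rendered:

```python
# Sketch of the optional-dependency pattern: defer the jinja2 import so plain
# templates work without it, and fail with a clear message when it's needed.
def render_jinja2(template: str, **kwargs) -> str:
    try:
        from jinja2 import Environment
    except ImportError as exc:
        raise ImportError(
            "jinja2 is required for jinja2 templates; install it with `pip install jinja2`"
        ) from exc
    return Environment().from_string(template).render(**kwargs)
```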
[
"hwchase17",
"langchain"
]
| Gathering more details on this... but the solution needs to include `search` or `searchlight` as a starting point. (see https://github.com/hwchase17/langchain/issues/2113) | Support more robust redis module list | https://api.github.com/repos/langchain-ai/langchain/issues/3111/comments | 1 | 2023-04-18T20:35:46Z | 2023-05-03T13:54:45Z | https://github.com/langchain-ai/langchain/issues/3111 | 1,673,798,260 | 3,111 |
[
"hwchase17",
"langchain"
]
| The ChatOpenAI LLM retries a completion if a content-moderation exception is raised by OpenAI.
Code [here](https://github.com/hwchase17/langchain/blob/d54c88aa2140f27c36fa18375f942e5b239799ee/langchain/chat_models/openai.py#L45)
#### Request : Do not retry if exception type is 'content moderation'
In our experience, Content Moderation errors have a near 100% reproducibility, which means that the prompt fails on every retry. This means that langchain racks up unnecessary billable calls for an unfixable exception.
#### Related [Request](https://github.com/hwchase17/langchain/issues/3109) - Allow custom retry_decorator to be passed by the user at LLM definition
| Do not retry if content moderation exception is raised by OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/3110/comments | 1 | 2023-04-18T19:48:04Z | 2023-09-10T16:31:06Z | https://github.com/langchain-ai/langchain/issues/3110 | 1,673,736,825 | 3,110 |
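As an illustration of the kind of retry policy being requested above, here is a tenacity-based sketch that retries only transient error types and gives up immediately on `openai.error.InvalidRequestError` — treating that as the type content-moderation rejections surface as is an assumption, and this is a sketch of the idea, not langchain's current decorator:

```python
# Sketch: retry transient failures, never retry InvalidRequestError (assumed to
# be how content-moderation rejections are reported). Uses openai<1.0 error types.
import openai
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

retryable = retry_if_exception_type(
    (openai.error.RateLimitError, openai.error.Timeout, openai.error.APIConnectionError)
)

@retry(retry=retryable, stop=stop_after_attempt(6), wait=wait_exponential(min=1, max=60))
def completion_with_retry(**kwargs):
    return openai.ChatCompletion.create(**kwargs)
```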