issue_owner_repo (list, length 2) | issue_body (string, 0-261k chars, nullable ⌀) | issue_title (string, 1-925 chars) | issue_comments_url (string, 56-81 chars) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37-62 chars) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### Feature request
Is there any way to store word2vec/glove/fasttext-based embeddings in the vector database using langchain?
```
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

pages = [...]  # placeholder: a list of Document objects loaded from the source files
embeddings = OpenAIEmbeddings()
persist_directory = 'db'
vectordb = Chroma.from_documents(documents=pages, embedding=embeddings, persist_directory=persist_directory)
```
Can I use **word2vec/glove/fasttext** embeddings instead of **OpenAIEmbeddings()** in the above code?
If that is possible, what is the syntax for it?
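A minimal sketch of one possible approach, assuming a gensim KeyedVectors model and simple mean pooling (neither is an official langchain integration; the model path and pooling choice are illustrative). Any object implementing the `Embeddings` interface (`embed_documents`/`embed_query`) can be passed to `Chroma.from_documents` in place of `OpenAIEmbeddings()`:
```
from typing import List

import numpy as np
from gensim.models import KeyedVectors
from langchain.embeddings.base import Embeddings


class Word2VecEmbeddings(Embeddings):
    """Wrap a word2vec/glove/fasttext KeyedVectors model as a langchain embedding."""

    def __init__(self, model_path: str):
        self.model = KeyedVectors.load(model_path)

    def _embed(self, text: str) -> List[float]:
        # mean-pool the vectors of in-vocabulary tokens; zeros if nothing matches
        vectors = [self.model[w] for w in text.split() if w in self.model]
        if not vectors:
            return [0.0] * self.model.vector_size
        return np.mean(vectors, axis=0).tolist()

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return [self._embed(t) for t in texts]

    def embed_query(self, text: str) -> List[float]:
        return self._embed(text)


# usage sketch: Chroma.from_documents(documents=pages, embedding=Word2VecEmbeddings("w2v.kv"), persist_directory='db')
```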
### Motivation
For using native embedding formats
### Your contribution
For using native embedding formats like word2vec/glove/fasttext in langchain | Word2vec/Glove/FastText embedding support | https://api.github.com/repos/langchain-ai/langchain/issues/6868/comments | 2 | 2023-06-28T12:26:22Z | 2024-01-30T00:45:35Z | https://github.com/langchain-ai/langchain/issues/6868 | 1,778,835,710 | 6,868 |
[
"hwchase17",
"langchain"
]
| ### Feature request
PGVector lacks Upsert and deletion capabilities.
I have to implement this functionality myself.
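Until that lands, a hedged workaround sketch (not part of the langchain API): it assumes the default `langchain_pg_embedding` table that the PGVector integration creates and that documents were added with explicit ids (stored in the `custom_id` column); the connection string and id value are placeholders. Upsert can then be emulated by deleting the old row and calling `add_documents` again.
```
import sqlalchemy

# placeholder connection string; reuse the one given to PGVector
engine = sqlalchemy.create_engine("postgresql+psycopg2://user:pass@localhost:5432/mydb")

with engine.begin() as conn:
    # delete a previously added document by the id passed to add_documents(ids=...)
    conn.execute(
        sqlalchemy.text("DELETE FROM langchain_pg_embedding WHERE custom_id = :doc_id"),
        {"doc_id": "doc-123"},
    )
```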
### Motivation
I want to use PGVector because it is easy to set up and I don't have to deal with dedicated vector DB providers.
### Your contribution
If you deem this useful, I will try to propose a pull request. | PGVector is Lacking Basic Features | https://api.github.com/repos/langchain-ai/langchain/issues/6866/comments | 7 | 2023-06-28T11:32:05Z | 2024-01-25T11:42:00Z | https://github.com/langchain-ai/langchain/issues/6866 | 1,778,756,225 | 6,866 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
db_chain = SQLDatabaseChain(llm=llm, database=db, prompt=PROMPT, verbose=True, return_intermediate_steps=True, top_k=3)
result = db_chain("Make a list of those taking the exam")
result['result'] is incomplete. Why? Is it hitting a token limit?
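A hedged guess rather than a confirmed fix: if the answer is cut off mid-sentence, the completion may simply be hitting the LLM's `max_tokens` limit rather than a chain bug. Raising it on the LLM is one thing to try (the value below is illustrative):
```
from langchain.llms import OpenAI

llm = OpenAI(temperature=0, max_tokens=1024)  # larger completion budget
db_chain = SQLDatabaseChain(llm=llm, database=db, prompt=PROMPT, verbose=True,
                            return_intermediate_steps=True, top_k=3)
```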
### Suggestion:
_No response_ | sqldatabasechain result incomplete | https://api.github.com/repos/langchain-ai/langchain/issues/6861/comments | 8 | 2023-06-28T08:26:32Z | 2023-12-08T16:06:35Z | https://github.com/langchain-ai/langchain/issues/6861 | 1,778,460,142 | 6,861 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.217
python=3.10
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi! I am working on a 'question answering' use case. I am loading PDF docs which I am storing in the Chroma vectorDB along with the instructor embeddings. I use the following command to do that:
```
vectordb = Chroma.from_documents(documents=texts,
embedding=embedding,
persist_directory='db')
```
Here I am storing the vectordb on my local machine in the 'db' folder.
When I use this vectordb as retriever and then use RetrievalQA to ask questions I get 'X' answers.
After storing the vectordb locally, the next time I load the db directly from that directory:
`vectordb = Chroma(persist_directory='db', embedding_function=embedding)`
When I use this vectordb as retriever and then use RetrievalQA to ask the same questions, I get different answers.
I hope I was able to explain the issue properly.
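One hedged thing to check, assuming the Chroma version in use persists lazily: explicitly flush the index with `persist()` after building it, so the reloaded store queries exactly the same stored vectors as the first run.
```
vectordb = Chroma.from_documents(documents=texts,
                                 embedding=embedding,
                                 persist_directory='db')
vectordb.persist()  # flush the collection to the 'db' folder before reusing it

# later, in a new session
vectordb = Chroma(persist_directory='db', embedding_function=embedding)
```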
### Expected behavior
My understanding is that I should get the same answer after loading the vectordb from my local. Why am I getting different answers? Is this an issue with langchain or am I doing this incorrectly? Can you please help me understand? | Different results when loading Chroma() vs Chroma.from_documents | https://api.github.com/repos/langchain-ai/langchain/issues/6854/comments | 6 | 2023-06-28T04:06:56Z | 2023-12-14T09:25:45Z | https://github.com/langchain-ai/langchain/issues/6854 | 1,778,137,863 | 6,854 |
[
"hwchase17",
"langchain"
]
| ### System Info
TL;DR
The error is reported in the error reproduction section.
Here's a guess at the solution:
HuggingFaceTextGenInference [docs](https://python.langchain.com/docs/modules/model_io/models/llms/integrations/huggingface_textgen_inference) and [code](https://github.com/hwchase17/langchain/blob/master/langchain/llms/huggingface_text_gen_inference.py#L77-L90) don't yet support [huggingface's native max_length generation kwarg](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig.max_length)
I'm guessing adding max_length below max_new_tokens in places like [here](https://github.com/hwchase17/langchain/blob/master/langchain/llms/huggingface_text_gen_inference.py#L142)
would provide the desired behavior? Ctrl-F for max_length shows other places the addition may be required
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The code snippet below works for local models
```
pipe = pipeline("text-generation", model=hf_llm, tokenizer=tokenizer, max_new_tokens=200)
llm = HuggingFacePipeline(pipeline=pipe)
chain = RetrievalQAWithSourcesChain.from_chain_type(
llm, chain_type="stuff", retriever=db.as_retriever()
)
chain(
{"question": "What did the president say about Justice Breyer"},
return_only_outputs=True,
)
```
However, when replacing the llm definition with this snippet
```
llm = HuggingFaceTextGenInference(
inference_server_url="http://hf-inference-server:80/",
max_new_tokens=256,
top_k=10,
top_p=0.95,
typical_p=0.95,
temperature=0.01,
repetition_penalty=1.03,
)
```
Yields this error
```
ValidationError: Input validation error: `inputs` tokens + `max_new_tokens` must be <= 1512. Given: 2342 `inputs`
tokens and 512 `max_new_tokens`
```
The code snippet that fails here works on its own when used like this:
`generated_text = llm("<|prompter|>What is the capital of Hungary?<|endoftext|><|assistant|>")`
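A hedged workaround until a `max_length`/truncation option is supported: keep prompt tokens plus `max_new_tokens` under the server limit (1512 here) by retrieving fewer documents and lowering `max_new_tokens`. The values below are illustrative, not tuned.
```
llm = HuggingFaceTextGenInference(
    inference_server_url="http://hf-inference-server:80/",
    max_new_tokens=128,          # smaller generation budget
    temperature=0.01,
)
chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm,
    chain_type="stuff",
    retriever=db.as_retriever(search_kwargs={"k": 2}),  # fewer chunks -> fewer prompt tokens
)
```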
### Expected behavior
Expecting a text based answer with no error. | max_length support for HuggingFaceTextGenInference | https://api.github.com/repos/langchain-ai/langchain/issues/6851/comments | 6 | 2023-06-28T01:07:18Z | 2023-12-13T16:38:12Z | https://github.com/langchain-ai/langchain/issues/6851 | 1,777,979,636 | 6,851 |
[
"hwchase17",
"langchain"
]
| ### System Info
❯ pip list |grep unstructured
unstructured 0.7.9
❯ pip list |grep langchain
langchain 0.0.215
langchainplus-sdk 0.0.17
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.document_loaders import UnstructuredFileLoader
loader = UnstructuredFileLoader("../modules/tk.txt")
document = loader.load()
```
errors:
```
UnpicklingError Traceback (most recent call last)
Cell In[11], line 3
1 from langchain.document_loaders import UnstructuredFileLoader
2 loader = UnstructuredFileLoader("../modules/tk.txt")
----> 3 document = loader.load()
File ~/micromamba/envs/openai/lib/python3.11/site-packages/langchain/document_loaders/unstructured.py:71, in UnstructuredBaseLoader.load(self)
69 def load(self) -> List[Document]:
70 """Load file."""
---> 71 elements = self._get_elements()
72 if self.mode == "elements":
73 docs: List[Document] = list()
File ~/micromamba/envs/openai/lib/python3.11/site-packages/langchain/document_loaders/unstructured.py:133, in UnstructuredFileLoader._get_elements(self)
130 def _get_elements(self) -> List:
131 from unstructured.partition.auto import partition
--> 133 return partition(filename=self.file_path, **self.unstructured_kwargs)
File ~/micromamba/envs/openai/lib/python3.11/site-packages/unstructured/partition/auto.py:193, in partition(filename, content_type, file, file_filename, url, include_page_breaks, strategy, encoding, paragraph_grouper, headers, ssl_verify, ocr_languages, pdf_infer_table_structure, xml_keep_tags, data_source_metadata, **kwargs)
183 elements = partition_image(
184 filename=filename, # type: ignore
185 file=file, # type: ignore
(...)
190 **kwargs,
191 )
192 elif filetype == FileType.TXT:
--> 193 elements = partition_text(
194 filename=filename,
195 file=file,
196 encoding=encoding,
197 paragraph_grouper=paragraph_grouper,
198 **kwargs,
199 )
200 elif filetype == FileType.RTF:
201 elements = partition_rtf(
202 filename=filename,
203 file=file,
204 include_page_breaks=include_page_breaks,
205 **kwargs,
206 )
File ~/micromamba/envs/openai/lib/python3.11/site-packages/unstructured/documents/elements.py:118, in process_metadata..decorator..wrapper(*args, **kwargs)
116 @wraps(func)
117 def wrapper(*args, **kwargs):
--> 118 elements = func(*args, **kwargs)
119 sig = inspect.signature(func)
120 params = dict(**dict(zip(sig.parameters, args)), **kwargs)
File ~/micromamba/envs/openai/lib/python3.11/site-packages/unstructured/file_utils/filetype.py:493, in add_metadata_with_filetype..decorator..wrapper(*args, **kwargs)
491 @wraps(func)
492 def wrapper(*args, **kwargs):
--> 493 elements = func(*args, **kwargs)
494 sig = inspect.signature(func)
495 params = dict(**dict(zip(sig.parameters, args)), **kwargs)
File ~/micromamba/envs/openai/lib/python3.11/site-packages/unstructured/partition/text.py:92, in partition_text(filename, file, text, encoding, paragraph_grouper, metadata_filename, include_metadata, **kwargs)
89 ctext = ctext.strip()
91 if ctext:
---> 92 element = element_from_text(ctext)
93 element.metadata = metadata
94 elements.append(element)
File ~/micromamba/envs/openai/lib/python3.11/site-packages/unstructured/partition/text.py:104, in element_from_text(text)
102 elif is_us_city_state_zip(text):
103 return Address(text=text)
--> 104 elif is_possible_narrative_text(text):
105 return NarrativeText(text=text)
106 elif is_possible_title(text):
File ~/micromamba/envs/openai/lib/python3.11/site-packages/unstructured/partition/text_type.py:86, in is_possible_narrative_text(text, cap_threshold, non_alpha_threshold, language, language_checks)
83 if under_non_alpha_ratio(text, threshold=non_alpha_threshold):
84 return False
---> 86 if (sentence_count(text, 3) < 2) and (not contains_verb(text)) and language == "en":
87 trace_logger.detail(f"Not narrative. Text does not contain a verb:\n\n{text}") # type: ignore # noqa: E501
88 return False
File ~/micromamba/envs/openai/lib/python3.11/site-packages/unstructured/partition/text_type.py:189, in contains_verb(text)
186 if text.isupper():
187 text = text.lower()
--> 189 pos_tags = pos_tag(text)
190 return any(tag in POS_VERB_TAGS for _, tag in pos_tags)
File ~/micromamba/envs/openai/lib/python3.11/site-packages/unstructured/nlp/tokenize.py:57, in pos_tag(text)
55 for sentence in sentences:
56 tokens = _word_tokenize(sentence)
---> 57 parts_of_speech.extend(_pos_tag(tokens))
58 return parts_of_speech
File ~/micromamba/envs/openai/lib/python3.11/site-packages/nltk/tag/__init__.py:165, in pos_tag(tokens, tagset, lang)
140 def pos_tag(tokens, tagset=None, lang="eng"):
141 """
142 Use NLTK's currently recommended part of speech tagger to
143 tag the given list of tokens.
(...)
163 :rtype: list(tuple(str, str))
164 """
--> 165 tagger = _get_tagger(lang)
166 return _pos_tag(tokens, tagset, tagger, lang)
File ~/micromamba/envs/openai/lib/python3.11/site-packages/nltk/tag/__init__.py:107, in _get_tagger(lang)
105 tagger.load(ap_russian_model_loc)
106 else:
--> 107 tagger = PerceptronTagger()
108 return tagger
File ~/micromamba/envs/openai/lib/python3.11/site-packages/nltk/tag/perceptron.py:169, in PerceptronTagger.__init__(self, load)
165 if load:
166 AP_MODEL_LOC = "file:" + str(
167 find("taggers/averaged_perceptron_tagger/" + PICKLE)
168 )
--> 169 self.load(AP_MODEL_LOC)
File ~/micromamba/envs/openai/lib/python3.11/site-packages/nltk/tag/perceptron.py:252, in PerceptronTagger.load(self, loc)
246 def load(self, loc):
247 """
248 :param loc: Load a pickled model at location.
249 :type loc: str
250 """
--> 252 self.model.weights, self.tagdict, self.classes = load(loc)
253 self.model.classes = self.classes
File ~/micromamba/envs/openai/lib/python3.11/site-packages/nltk/data.py:755, in load(resource_url, format, cache, verbose, logic_parser, fstruct_reader, encoding)
753 resource_val = opened_resource.read()
754 elif format == "pickle":
--> 755 resource_val = pickle.load(opened_resource)
756 elif format == "json":
757 import json
UnpicklingError: pickle data was truncated
```
How can I fix this?
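A hedged fix that is not langchain-specific: the traceback ends in a truncated NLTK pickle (the `averaged_perceptron_tagger` model), which usually means an interrupted data download. Removing the cached tagger from `~/nltk_data/taggers` and re-downloading it typically clears the `UnpicklingError`:
```
import nltk

# re-download the corpora that unstructured's text partitioning relies on
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")
```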
### Expected behavior
no | UnpicklingError: pickle data was truncated | https://api.github.com/repos/langchain-ai/langchain/issues/6850/comments | 2 | 2023-06-27T23:10:44Z | 2023-10-05T16:07:05Z | https://github.com/langchain-ai/langchain/issues/6850 | 1,777,879,256 | 6,850 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I tried to install langchain[llms] with pip on Windows 11. The installation did not throw any errors. But trying to import langchain in a python script gives the following error:
from numexpr.interpreter import MAX_THREADS, use_vml, __BLOCK_SIZE1__
ImportError: DLL load failed while importing interpreter: The specified module could not be found.
pip list shows that numexpr=2.8.4 is installed.
Uninstalling and reinstalling numexpr did not help.
Since I had another machine where langchain was working, I could determine the difference: that machine had Visual Studio installed with C compilation capabilities.
Uninstalling and reinstalling numexpr fixed it after installing Visual Studio on the second machine.
But since the documentation does not explain that Visual Studio is required on Windows, I was confused and it took me a while to figure this out. It is also odd that the installation reported no errors but still did not work. Maybe someone can look into that.
Please delete the issue if this is not the right place.
### Suggestion:
_No response_ | Issue: Installing langchain[llms] is really difficult | https://api.github.com/repos/langchain-ai/langchain/issues/6848/comments | 2 | 2023-06-27T23:01:43Z | 2023-10-05T16:07:28Z | https://github.com/langchain-ai/langchain/issues/6848 | 1,777,867,466 | 6,848 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add callbacks as an argument to the BaseConversationalRetrievalChain._get_docs method and to BaseRetriever.get_relevant_documents.
### Motivation
I am using a custom retriever which has multiple intermediate steps, and I would like to store info from some of these steps for debugging and subsequent analyses. This information is specific to the request, and it cannot be stored at an individual document level.
### Your contribution
I think this can be addressed by having an option to pass callbacks to the BaseConversationalRetrievalChain._get_docs method and BaseRetriever.get_relevant_documents. BaseCallbackHandler may have to be modified too to extend a new class RetrieverManagerMixin which can contain methods like on_retriever_start and on_retriever_end. | Callbacks for retriever | https://api.github.com/repos/langchain-ai/langchain/issues/6846/comments | 1 | 2023-06-27T22:15:16Z | 2023-10-05T16:08:00Z | https://github.com/langchain-ai/langchain/issues/6846 | 1,777,829,041 | 6,846 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Currently, `langchain 0.0.217 depends on pydantic<2 and >=1`. Pydantic v2 is re-written in Rust and is between 5-50x faster than v1 depending on the use case. Given how much LangChain relies on Pydantic for both modeling and functional components, and given that FastAPI is now supporting (in beta) Pydantic v2, it'd be great to see LangChain handle a user-specified installation of Pydantic above v2.
The following is an example of what happens when a user specifies installing Pydantic above v2.
```bash
The conflict is caused by:
The user requested pydantic==2.0b2
fastapi 0.100.0b1 depends on pydantic!=1.8, !=1.8.1, <3.0.0 and >=1.7.4
inflect 6.0.4 depends on pydantic>=1.9.1
langchain 0.0.217 depends on pydantic<2 and >=1
```
### Motivation
Pydantic v2 is re-written in Rust and is between 5-50x faster than v1 depending on the use case. Given how much LangChain relies on Pydantic for both modeling and functional components, and given that FastAPI is now supporting (in beta) Pydantic v2, it'd be great to see LangChain handle a user-specified installation of Pydantic above v2.
### Your contribution
Yes! I'm currently opening just an issue to document my request, and because I'm fairly backlogged. But I have contributed to LangChain in the past and would love to write a pull request to facilitate this in full. | Support for Pydantic v2 | https://api.github.com/repos/langchain-ai/langchain/issues/6841/comments | 30 | 2023-06-27T20:24:25Z | 2023-08-17T21:20:44Z | https://github.com/langchain-ai/langchain/issues/6841 | 1,777,698,608 | 6,841 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain git+https://github.com/hwchase17/langchain@8392ca602c03d3ae660d05981154f17ee0ad438e
Archcraft x86_64
Python 3.11.3
### Who can help?
@eyurtsev @dev2049
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Export a WhatsApp chat for a conversation that contains media and deleted messages.
2. The exported chat contains deleted-message and omitted-media placeholders. For example: `6/29/23, 12:16 am - User 4: This message was deleted` and `4/20/23, 9:42 am - User 3: <Media omitted>`.
3. Currently these messages are also processed and stored in the index.
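A hedged post-processing sketch for the behavior described under Expected behavior below. It assumes the loader returns the chat as concatenated text, so filtering is done line by line; the marker strings are the ones WhatsApp writes into the export, and the file path is a placeholder.
```
from langchain.document_loaders import WhatsAppChatLoader

skip_markers = ("This message was deleted", "<Media omitted>")

docs = WhatsAppChatLoader("whatsapp_chat.txt").load()
for doc in docs:
    # drop any line that is only a deleted-message or omitted-media placeholder
    doc.page_content = "\n".join(
        line for line in doc.page_content.split("\n")
        if not any(marker in line for marker in skip_markers)
    )
```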
### Expected behavior
We can avoid embedding these messages in the index. | WhatsappChatLoader doesn't ignore deleted messages and omitted media | https://api.github.com/repos/langchain-ai/langchain/issues/6838/comments | 1 | 2023-06-27T19:11:54Z | 2023-06-28T02:21:59Z | https://github.com/langchain-ai/langchain/issues/6838 | 1,777,599,060 | 6,838 |
[
"hwchase17",
"langchain"
]
| ### System Info
python - 3.9
langchain - 0.0.213
OS - Mac Monterey
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code Snippet
```
from langchain import (
LLMMathChain,
OpenAI,
SerpAPIWrapper,
SQLDatabase,
SQLDatabaseChain,
)
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA, ConversationChain
from langchain.agents import ZeroShotAgent, Tool, AgentExecutor
from langchain import OpenAI, LLMChain
from langchain.utilities import GoogleSearchAPIWrapper
from langchain import OpenAI, LLMMathChain, SerpAPIWrapper
from langchain.agents import initialize_agent, Tool
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory, ConversationSummaryMemory, ConversationBufferWindowMemory
import time
from chainlit import AskUserMessage, Message, on_chat_start
import random
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter
from langchain.vectorstores import Milvus
from langchain.docstore.document import Document
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.document_loaders import WebBaseLoader
from langchain.document_loaders import UnstructuredHTMLLoader
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from pathlib import Path
from langchain.document_loaders import WebBaseLoader
from langchain.prompts import PromptTemplate
import json
import os
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo", openai_api_key=openai_api_key)
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)
conversational_memory = ConversationBufferWindowMemory(
memory_key='chat_history',
k=50,
return_messages=True,
input_key="question",
output_key='output'
)
# load will vector db
relevant_parts = []
for p in Path(".").absolute().parts:
    relevant_parts.append(p)
    if relevant_parts[-3:] == ["langchain", "docs", "modules"]:
        break
def get_meta(data: str):
    split_data = [item.split(":") for item in data.split("\n")]
    # Creating a dictionary from the split data
    result = {}
    for item in split_data:
        if len(item) > 1:
            key = item[0].strip()
            value = item[1].strip()
            result[key] = value
    # Converting the dictionary to JSON format
    json_data = json.dumps(result)
    return json_data
template = """You're an customer care representative
You have the following products in your store based on the customer question. Answer politely to customer questions on any questions on product sold in the wesbite and provide product details. Send the product webpage links for each recommendation
{context}
"chat_history": {chat_history}
Question: {question}
Answer:"""
products_path = "ecommerce__20200101_20200131__10k_data_10.csv"
loader = CSVLoader(products_path, csv_args={
'delimiter': ',',
'quotechar': '"',
})
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=0, length_function = len)
sources = text_splitter.split_documents(documents)
source_chunks = []
splitter = RecursiveCharacterTextSplitter(chunk_size=3000, chunk_overlap=100, length_function = len)
for source in sources:
    for chunk in splitter.split_text(source.page_content):
        chunk_metadata = json_meta = json.loads(get_meta(chunk))
        source_chunks.append(Document(page_content=chunk, metadata=chunk_metadata))
embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key, model="ada")
products_db = Chroma.from_documents(source_chunks, embeddings, collection_name="products")
prompt = PromptTemplate(template=template, input_variables=["context", "question", "chat_history"])
product_inquery_agent = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=products_db.as_retriever(search_kwargs={"k": 4}),
chain_type_kwargs = {"prompt": prompt, "verbose": True, "memory": conversational_memory},
)
tools = [
Tool(
name="Product inquiries System",
func=product_inquery_agent.run,
description="useful for getting information about products, features, and specifications to make informed purchase decisions. Input should be a fully formed question.",
return_direct=True,
),
]
prefix = """Have a conversation with a human as a customer representative agent, answering the following questions as best you can in a polite and friendly manner. You have access to the following tools:"""
suffix = """Begin!"
{chat_history}
Question: {input}
{agent_scratchpad}"""
prompt = ZeroShotAgent.create_prompt(
tools,
prefix=prefix,
suffix=suffix,
input_variables=["input", "chat_history", "agent_scratchpad"],
)
llm_chain = LLMChain(llm=OpenAI(temperature=0, openai_api_key=openai_api_key), prompt=prompt)
agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent_chain = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=conversational_memory)
```
### Expected behavior
query = "what products are available in your website ?"
response = agent_chain.run(query)
print(response)
The response should print the captured output, but instead it throws an error about a missing key.
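A hedged reading of the traceback rather than a confirmed fix: the memory passed through `chain_type_kwargs` ends up on the inner combine-documents chain, which emits its answer under a key like `output_text`, while the shared `ConversationBufferWindowMemory` is configured with `output_key='output'`, so `save_context` raises `KeyError: 'output'`. One sketch is to give the inner chain its own memory whose `output_key` matches, and keep a separate memory for the `AgentExecutor`:
```
# memory used inside chain_type_kwargs for the RetrievalQA prompt
inner_memory = ConversationBufferWindowMemory(
    memory_key='chat_history',
    k=50,
    return_messages=True,
    input_key="question",
    output_key="output_text",  # assumed key emitted by the inner stuff-documents chain
)

# separate memory passed to AgentExecutor.from_agent_and_tools
agent_memory = ConversationBufferWindowMemory(
    memory_key='chat_history',
    k=50,
    return_messages=True,
)
```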
``` > Entering new chain...
Thought: I need to find out what products are available
Action: Product inquiries System
Action Input: What products are available in your website?
> Entering new chain...
> Entering new chain...
Prompt after formatting:
You're an customer care representative
You have the following products in your store based on the customer question. Answer politely to customer questions on any questions on product sold in the wesbite and provide product details. Send the product webpage links for each recommendation
Uniq Id: cc2083338a16c3fe2f7895289d2e98fe
Product Name: ARTSCAPE Etched Glass 24" x 36" Window Film, 24-by-36-Inch
Brand Name:
Asin:
Category: Home & Kitchen | Home Décor | Window Treatments | Window Stickers & Films | Window Films
Upc Ean Code:
List Price:
Selling Price: $12.99
Quantity:
Model Number: 01-0121
About Product: Make sure this fits by entering your model number. | The visual effect of textured glass and stained glass | Creates privacy / Provides UV protection | No adhesives / Applies easily | Patterns repeat to cover any size window | Made in USA
Product Specification: ProductDimensions:72x36x0inches|ItemWeight:11.2ounces|ShippingWeight:12.2ounces(Viewshippingratesandpolicies)|Manufacturer:Artscape|ASIN:B000Q3PRYA|DomesticShipping:ItemcanbeshippedwithinU.S.|InternationalShipping:ThisitemcanbeshippedtoselectcountriesoutsideoftheU.S.LearnMore|ShippingAdvisory:Thisitemmustbeshippedseparatelyfromotheritemsinyourorder.Additionalshippingchargeswillnotapply.|Itemmodelnumber:01-0121
Technical Details: show up to 2 reviews by default Product Description Artscape window films create the look of stained and etched glass. These thin, translucent films provide privacy while still allowing natural light to enter the room. Artscape films are easily applied to any smooth glass surface. They do not use adhesives and are easily removed if needed. They can be trimmed or combined to fit any size window. The images have a repeating pattern left to right and top to bottom and can be used either vertically or horizontally. These films provide UV protection and are the perfect decorative accent for windows that require continued privacy. Artscape patented products are all made in the USA. From the Manufacturer Artscape window films create the look of stained and etched glass. These thin, translucent films provide privacy while still allowing natural light to enter the room. Artscape films are easily applied to any smooth glass surface. They do not use adhesives and are easily removed if needed. They can be trimmed or combined to fit any size window. The images have a repeating pattern left to right and top to bottom and can be used either vertically or horizontally. These films provide UV protection and are the perfect decorative accent for windows that require continued privacy. Artscape patented products are all made in the USA. | 12.2 ounces (View shipping rates and policies)
Shipping Weight: 12.2 ounces
Product Dimensions:
Image: [https://images-na.ssl-images-amazon.com/images/I/51iAe3LF7FL.jpg|https://images-na.ssl-images-amazon.com/images/I/417B20oEehL.jpg|https://images-na.ssl-images-amazon.com/images/I/41J3bDg633L.jpg|https://images-na.ssl-images-amazon.com/images/I/51hhnuGWwdL.jpg|https://images-na.ssl-images-amazon.com/images/G/01/x-locale/common/transparent-pixel.jpg](https://images-na.ssl-images-amazon.com/images/I/51iAe3LF7FL.jpg%7Chttps://images-na.ssl-images-amazon.com/images/I/417B20oEehL.jpg%7Chttps://images-na.ssl-images-amazon.com/images/I/41J3bDg633L.jpg%7Chttps://images-na.ssl-images-amazon.com/images/I/51hhnuGWwdL.jpg%7Chttps://images-na.ssl-images-amazon.com/images/G/01/x-locale/common/transparent-pixel.jpg)
Variants:
Sku:
Product Url: https://www.amazon.com/ARTSCAPE-Etched-Glass-Window-Film/dp/B000Q3PRYA
Stock:
Product Details:
Dimensions:
Color:
Ingredients:
Stock:
Product Details:
Dimensions:
Color:
Ingredients:
Direction To Use:
Is Amazon Seller: Y
Size Quantity Variant:
Product Description:
Uniq Id: f8c32a45e507a177992973cf0d46d20c
Product Name: Terra by Battat – 4 Dinosaur Toys, Medium – Dinosaurs for Kids & Collectors, Scientifically Accurate & Designed by A Paleo-Artist; Age 3+ (4 Pc)
Brand Name:
Asin:
Category:
Upc Ean Code:
List Price:
Selling Price: $18.66
Quantity:
Model Number: AN4054Z
About Product: Make sure this fits by entering your model number. | 4 medium-sized dinosaurs for kids, with lifelike pose, accurate ratio, and exquisitely detailed paint | Includes: Parasaurolophus walkeri, Stegosaurus ungulatus, Pachyrhinosaurus, and euoplocephalus tutus toy dinos | Museum Quality: classic toy dinosaurs designed by an internationally renowned paleo-artist | Educational toy: dinosaur toys for kids spark curiosity about paleontology, science, and natural History | Dimensions: Dimensions: miniature figurines measure 6.25-8.5 (L) 1.5-2.25 (W) 2.25-3.5 (H) inches approximately toys 6.25-8.5 (L) 1.5-2.25 (W) 2.25-3.5 (H) inches approximately | No batteries required, just imagination! | Earth-friendly recyclable packaging | Age: suggested for ages 3+ | Collect them all! Discover the entire Terra by Battat family of animal toy figurines and dinosaur playsets!
Product Specification: ProductDimensions:8.7x3.9x3.4inches|ItemWeight:15.8ounces|ShippingWeight:1.4pounds(Viewshippingratesandpolicies)|ASIN:B07PF1R8LS|Itemmodelnumber:AN4054Z|Manufacturerrecommendedage:36months-10years
Technical Details: Go to your orders and start the return Select the ship method Ship it! | Go to your orders and start the return Select the ship method Ship it! | show up to 2 reviews by default Look out! Here come 4 incredible dinosaur toys that look like the real thing! Sought after by both kids and collectors, These scientifically precise and very collectable dinosaur replicas also feature bright, beautiful colors and look amazing in any dinosaur playset. And, they're made with high quality material, so these dinos will never go extinct! The intimidating-looking, and instantly recognizable Stegosaurus was the largest of plate-backed prehistoric herbivores. Scientists have suggested that the Stegosaurus’ iconic plates were for defense against predators, but this is unlikely as the plates were quite thin. Others have said the Stegosaurus’ spiked tail may have been used as a weapon. Terra’s Stegosaurus toy accurately recreates the formidable size and terrifying presence of this iconic predator. In particular, the large, heavily built frame and rounded backs of the mighty Stegosaurus are faithfully depicted in the dinosaur replica. This highly collectable playset also includes a toy Parasaurolophus, Stegosaurus, Pachyrhinosaurus, and euoplocephalus tutus. Collect your own miniature versions of 4 powerful and iconic dinosaurs from Terra by Battat! | 1.4 pounds (View shipping rates and policies)
Shipping Weight: 1.4 pounds
Product Dimensions:
Uniq Id: e04b990e95bf73bbe6a3fa09785d7cd0
Product Name: Woodstock- Collage 500 pc Puzzle
Brand Name:
Asin:
Category: Toys & Games | Puzzles | Jigsaw Puzzles
Upc Ean Code:
List Price:
Selling Price: $17.49
Quantity:
Model Number: 62151
About Product: Make sure this fits by entering your model number. | Puzzle has 500 pieces | Completed puzzle measure 14 x 19 | 100% officially licensed merchandise | Great for fans & puzzlers alike
Product Specification: ProductDimensions:1.9x8x10inches|ItemWeight:13.4ounces|ShippingWeight:13.4ounces(Viewshippingratesandpolicies)|ASIN:B07MX21WWX|Itemmodelnumber:62151|Manufacturerrecommendedage:14yearsandup
Technical Details: show up to 2 reviews by default 100% Officially licensed merchandise; complete puzzle measures 14 x 19 in. | 13.4 ounces (View shipping rates and policies)
Shipping Weight: 13.4 ounces
Product Dimensions:
Image: [https://images-na.ssl-images-amazon.com/images/I/61plo8Xv4vL.jpg|https://images-na.ssl-images-amazon.com/images/G/01/x-locale/common/transparent-pixel.jpg](https://images-na.ssl-images-amazon.com/images/I/61plo8Xv4vL.jpg%7Chttps://images-na.ssl-images-amazon.com/images/G/01/x-locale/common/transparent-pixel.jpg)
Variants:
Sku:
Product Url: https://www.amazon.com/Woodstock-Collage-500-pc-Puzzle/dp/B07MX21WWX
Stock:
Product Details:
Dimensions:
Color:
Ingredients:
Direction To Use:
Is Amazon Seller: Y
Size Quantity Variant:
Product Description:
"chat_history": []
Question: What products are available in your website?
Answer:
> Finished chain.
> Finished chain.
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[42], line 2
1 query = "what products are available in your website ?"
----> 2 response = agent_chain.run(query)
3 print(response)
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:290, in Chain.run(self, callbacks, tags, *args, **kwargs)
288 if len(args) != 1:
289 raise ValueError("`run` supports only one positional argument.")
--> 290 return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
292 if kwargs and not args:
293 return self(kwargs, callbacks=callbacks, tags=tags)[_output_key]
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:166, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
168 final_outputs: Dict[str, Any] = self.prep_outputs(
169 inputs, outputs, return_only_outputs
170 )
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:160, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
154 run_manager = callback_manager.on_chain_start(
155 dumpd(self),
156 inputs,
157 )
158 try:
159 outputs = (
--> 160 self._call(inputs, run_manager=run_manager)
161 if new_arg_supported
162 else self._call(inputs)
163 )
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/agents/agent.py:987, in AgentExecutor._call(self, inputs, run_manager)
985 # We now enter the agent loop (until it returns something).
986 while self._should_continue(iterations, time_elapsed):
--> 987 next_step_output = self._take_next_step(
988 name_to_tool_map,
989 color_mapping,
990 inputs,
991 intermediate_steps,
992 run_manager=run_manager,
993 )
994 if isinstance(next_step_output, AgentFinish):
995 return self._return(
996 next_step_output, intermediate_steps, run_manager=run_manager
997 )
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/agents/agent.py:850, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
848 tool_run_kwargs["llm_prefix"] = ""
849 # We then call the tool on the tool input to get an observation
--> 850 observation = tool.run(
851 agent_action.tool_input,
852 verbose=self.verbose,
853 color=color,
854 callbacks=run_manager.get_child() if run_manager else None,
855 **tool_run_kwargs,
856 )
857 else:
858 tool_run_kwargs = self.agent.tool_run_logging_kwargs()
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/tools/base.py:299, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, **kwargs)
297 except (Exception, KeyboardInterrupt) as e:
298 run_manager.on_tool_error(e)
--> 299 raise e
300 else:
301 run_manager.on_tool_end(
302 str(observation), color=color, name=self.name, **kwargs
303 )
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/tools/base.py:271, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, **kwargs)
268 try:
269 tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
270 observation = (
--> 271 self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
272 if new_arg_supported
273 else self._run(*tool_args, **tool_kwargs)
274 )
275 except ToolException as e:
276 if not self.handle_tool_error:
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/tools/base.py:414, in Tool._run(self, run_manager, *args, **kwargs)
411 """Use the tool."""
412 new_argument_supported = signature(self.func).parameters.get("callbacks")
413 return (
--> 414 self.func(
415 *args,
416 callbacks=run_manager.get_child() if run_manager else None,
417 **kwargs,
418 )
419 if new_argument_supported
420 else self.func(*args, **kwargs)
421 )
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:290, in Chain.run(self, callbacks, tags, *args, **kwargs)
288 if len(args) != 1:
289 raise ValueError("`run` supports only one positional argument.")
--> 290 return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
292 if kwargs and not args:
293 return self(kwargs, callbacks=callbacks, tags=tags)[_output_key]
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:166, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
168 final_outputs: Dict[str, Any] = self.prep_outputs(
169 inputs, outputs, return_only_outputs
170 )
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:160, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
154 run_manager = callback_manager.on_chain_start(
155 dumpd(self),
156 inputs,
157 )
158 try:
159 outputs = (
--> 160 self._call(inputs, run_manager=run_manager)
161 if new_arg_supported
162 else self._call(inputs)
163 )
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/retrieval_qa/base.py:120, in BaseRetrievalQA._call(self, inputs, run_manager)
117 question = inputs[self.input_key]
119 docs = self._get_docs(question)
--> 120 answer = self.combine_documents_chain.run(
121 input_documents=docs, question=question, callbacks=_run_manager.get_child()
122 )
124 if self.return_source_documents:
125 return {self.output_key: answer, "source_documents": docs}
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:293, in Chain.run(self, callbacks, tags, *args, **kwargs)
290 return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
292 if kwargs and not args:
--> 293 return self(kwargs, callbacks=callbacks, tags=tags)[_output_key]
295 if not kwargs and not args:
296 raise ValueError(
297 "`run` supported with either positional arguments or keyword arguments,"
298 " but none were provided."
299 )
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:168, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
166 raise e
167 run_manager.on_chain_end(outputs)
--> 168 final_outputs: Dict[str, Any] = self.prep_outputs(
169 inputs, outputs, return_only_outputs
170 )
171 if include_run_info:
172 final_outputs[RUN_KEY] = RunInfo(run_id=run_manager.run_id)
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/chains/base.py:233, in Chain.prep_outputs(self, inputs, outputs, return_only_outputs)
231 self._validate_outputs(outputs)
232 if self.memory is not None:
--> 233 self.memory.save_context(inputs, outputs)
234 if return_only_outputs:
235 return outputs
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/memory/chat_memory.py:34, in BaseChatMemory.save_context(self, inputs, outputs)
32 def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
33 """Save context from this conversation to buffer."""
---> 34 input_str, output_str = self._get_input_output(inputs, outputs)
35 self.chat_memory.add_user_message(input_str)
36 self.chat_memory.add_ai_message(output_str)
File /opt/miniconda3/envs/discord_dev/lib/python3.9/site-packages/langchain/memory/chat_memory.py:30, in BaseChatMemory._get_input_output(self, inputs, outputs)
28 else:
29 output_key = self.output_key
---> 30 return inputs[prompt_input_key], outputs[output_key]
KeyError: 'output' ``` | agent with memory unable to execute and throwing a output key error | https://api.github.com/repos/langchain-ai/langchain/issues/6837/comments | 5 | 2023-06-27T18:45:58Z | 2024-06-24T07:06:27Z | https://github.com/langchain-ai/langchain/issues/6837 | 1,777,551,770 | 6,837 |
[
"hwchase17",
"langchain"
]
| ### Large observation handling limit.
Hey langchain community,
I have a tool which takes a database query as input and runs it against the database. This is similar to what `QuerySQLDataBaseTool` does. The problem is that the size of the query output is not under my control; it can be large enough that the agent exceeds the token limit.
The solutions I have tried:
1. Do pagination:
- Chunk the large output, summarize each chunk according to the target question
- Combine all chunks' summarization, which is much smaller than the original output.
Problems:
- Even though I did the summarization according to the target question, the summarization will still lose information.
- The pagination can be slow.
2. Vectorization:
- Chunk the large output
- Embed each chunk and put them into a Vector DB.
- Do a similarity search based on the target question, and take as many chunks as fit within the token limit.
Problems:
- The embedding takes time, so it can be slow for a single thought.
- The query output is semantically continuous as a whole, so chunking can break the semantic meaning.
Does anyone have a solution for this problem? I would appreciate any ideas!
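A minimal sketch of a third, simpler option: hard-cap the observation size before it reaches the agent. `run_database_query` and the character budget are placeholders for the existing tool logic; a map-reduce summarization chain could replace the plain truncation where fidelity matters.
```
MAX_OBSERVATION_CHARS = 4000  # rough budget, not token-exact

def run_query_capped(query: str) -> str:
    result = run_database_query(query)  # placeholder for the existing query-tool logic
    if len(result) > MAX_OBSERVATION_CHARS:
        return result[:MAX_OBSERVATION_CHARS] + "\n...[output truncated]..."
    return result
```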
### Suggestion:
_No response_ | Issue: Large observation handling limit | https://api.github.com/repos/langchain-ai/langchain/issues/6836/comments | 6 | 2023-06-27T17:57:22Z | 2023-11-08T16:08:35Z | https://github.com/langchain-ai/langchain/issues/6836 | 1,777,470,667 | 6,836 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.208
OS version: macOS 13.4
Python version: 3.10.12
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Queries without an `order by` clause aren't guaranteed to have any particular order.
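A hedged illustration of the missing clause, assuming the default `message_store` table with an auto-incrementing `id` column: ordering by `id` returns the history in insertion order.
```
# query the history explicitly ordered by insertion id (psycopg2-style placeholder)
query = """
    SELECT message
    FROM message_store
    WHERE session_id = %s
    ORDER BY id;
"""
```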
### Expected behavior
Chat history should be in order. | PostgresChatMessageHistory message order isn't guaranteed | https://api.github.com/repos/langchain-ai/langchain/issues/6829/comments | 1 | 2023-06-27T15:22:50Z | 2023-06-30T17:13:58Z | https://github.com/langchain-ai/langchain/issues/6829 | 1,777,227,042 | 6,829 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hello everyone,
I am currently utilizing the OpenAIFunctions agent along with some custom tools that I've developed. I'm trying to incorporate a custom property named `source_documents` into one of these tools. My intention is to assign a value to this property within the tool and subsequently utilize this value outside of the tool.
To illustrate, this is how I invoke my custom tool: `tools = [JamesAllenRetrievalSearchTool(source_documents), OrderTrackingTool()]`. Here, `source_documents` is the property that I wish to update within the `JamesAllenRetrievalSearchTool` class.
I attempted to create a constructor within the custom tool and pass the desired variable for updating, but unfortunately, this approach was unsuccessful. If anyone has knowledge of a solution to my problem, I would greatly appreciate your assistance.
Thank you!
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is my custom tool class:
```
class JamesAllenRetrivalSearchTool(BaseTool):
    name = "jamesallen-search"
    description = "Use this tool as the primary source of context information. Always search for the answers using this tool first, don't make up answers yourself"
    return_direct = True
    args_schema: Type[BaseModel] = JamesAllenRetrivalSearchInput

    def _run(self, question: str, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:
        knowledge_base_retrival_chain = RetrivalQAChain(prompt)
        result = knowledge_base_retrival_chain.run(question)
        # **HERE -> I need to update returned source_documents from the chain outside the custom tool class**
        # source_documents = result["source_documents"]
        # self.retrival_source_docs = source_documents
        return result["result"]

    async def _arun(self, question: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("Not implemented")
```
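A hedged sketch of one way to make the attribute assignable: `BaseTool` is a pydantic model, so the extra attribute has to be declared as a field on the class before `_run` can set it. The class name, field name, and the stubbed chain result below are illustrative placeholders for the real retrieval call.
```
from typing import List, Optional

from langchain.schema import Document
from langchain.tools import BaseTool


class RetrievalToolWithSources(BaseTool):
    name = "jamesallen-search"
    description = "Primary retrieval tool for product questions."
    return_direct = True
    source_documents: Optional[List[Document]] = None  # declared as a field, so assignable

    def _run(self, question: str) -> str:
        result = {"result": "stub answer", "source_documents": []}  # replace with the real chain call
        self.source_documents = result["source_documents"]  # readable from outside after the run
        return result["result"]

    async def _arun(self, question: str) -> str:
        raise NotImplementedError
```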
### Expected behavior
When I run the tool I expect that the answer from the chain will return and the source documents will be update. | Custom tool class not working with extra properties | https://api.github.com/repos/langchain-ai/langchain/issues/6828/comments | 5 | 2023-06-27T14:54:25Z | 2024-07-29T22:20:46Z | https://github.com/langchain-ai/langchain/issues/6828 | 1,777,167,767 | 6,828 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.27 (I installed over pip install lanchain)
Python v: 3.8
OS: Windows 11
When I try to `from langchain.llms import GPT4All`
I am getting the error that says there is no Gpt4All module.
When I check the installed library, it does not exist. However, when I check the GitHub repo, it exists in the latest version as of today.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.llms import GPT4All
model_path = r"C:\Users\suat.atan\AppData\Local\nomic.ai\GPT4All\ggml-gpt4all-j-v1.3-groovy.bin"
llm = GPT4All(model= model_path)
llm("Where is Paris?")
```
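A hedged check rather than a fix: GPT4All support landed in langchain well after 0.0.27, so an import error there usually points at an outdated install. Verify the version from Python (and upgrade with `pip install --upgrade langchain` if it is old):
```
import langchain

print(langchain.__version__)  # needs to be a much newer release than 0.0.27
```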
### Expected behavior
At least the first line of the code should work.
This code should answer my prompt | Import Error for Gpt4All | https://api.github.com/repos/langchain-ai/langchain/issues/6825/comments | 6 | 2023-06-27T13:31:27Z | 2023-11-28T16:10:20Z | https://github.com/langchain-ai/langchain/issues/6825 | 1,776,978,548 | 6,825 |
[
"hwchase17",
"langchain"
]
| ### System Info
Most recent version of langchain, 0.0.216.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Causes a Bazel build failure due to a small typo in the `__init__ .py` filename (filenames cannot contain spaces).
### Expected behavior
Should rename `__init__ .py` to `__init__.py`. | "office365/__init__ .py" filename contains typo | https://api.github.com/repos/langchain-ai/langchain/issues/6822/comments | 3 | 2023-06-27T12:42:09Z | 2023-10-09T16:06:26Z | https://github.com/langchain-ai/langchain/issues/6822 | 1,776,845,482 | 6,822 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain v0.0.216, Python 3.11.3 on WSL2
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Follow the first example at https://python.langchain.com/docs/modules/chains/foundational/router
### Expected behavior
[This line](https://github.com/hwchase17/langchain/blob/v0.0.216/langchain/chains/llm.py#L275) gets triggered:
> The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
As suggested by the error, we can make the following code changes to pass the output parser directly to LLMChain by changing [this line](https://github.com/hwchase17/langchain/blob/v0.0.216/langchain/chains/router/llm_router.py#L83) to this:
```python
llm_chain = LLMChain(llm=llm, prompt=prompt, output_parser=prompt.output_parser)
```
And calling `LLMChain.__call__` instead of `LLMChain.predict_and_parse` by changing [these lines](https://github.com/hwchase17/langchain/blob/v0.0.216/langchain/chains/router/llm_router.py#L58-L61) to this:
```python
cast(
Dict[str, Any],
self.llm_chain(inputs, callbacks=callbacks),
)
```
Unfortunately, while this avoids the warning, it creates a new error:
> ValueError: Missing some output keys: {'destination', 'next_inputs'}
because LLMChain currently [assumes the existence of a single `self.output_key`](https://github.com/hwchase17/langchain/blob/v0.0.216/langchain/chains/llm.py#L220) and produces this as output:
> {'text': {'destination': 'physics', 'next_inputs': {'input': 'What is black body radiation?'}}}
Even modifying that function to return the keys if the parsed output is a dict triggers the same error, but for the missing key of "text" instead. `predict_and_parse` avoids this fate by skipping output validation entirely.
It appears changes may have to be a bit more involved here if LLMRouterChain is to keep using LLMChain. | LLMRouterChain uses deprecated predict_and_parse method | https://api.github.com/repos/langchain-ai/langchain/issues/6819/comments | 21 | 2023-06-27T11:45:08Z | 2024-02-29T01:21:01Z | https://github.com/langchain-ai/langchain/issues/6819 | 1,776,735,480 | 6,819 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I am proposing an enhancement for the `langchain` implementation of `qdrant`. As of the current version, `langchain` only supports single text searches. My feature proposal involves extending `langchain` to integrate the [search_batch](https://github.com/qdrant/qdrant-client/blob/master/qdrant_client/qdrant_client.py#L171) method from the `qdrant` client. This would allow us to conduct batch searches, increasing efficiency, and potentially speeding up the process for large volumes of text.
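Until that exists, a hedged workaround sketch that bypasses the wrapper: the langchain `Qdrant` vector store exposes the underlying client as `.client`, so `search_batch` can be called directly. The store instance `qdrant` and the `embeddings` object are assumed to already exist, and the `SearchRequest` fields may need adapting if the collection uses named vectors.
```
from qdrant_client.http import models as rest

queries = ["first question", "second question"]
vectors = embeddings.embed_documents(queries)  # embed all queries in one go

requests = [rest.SearchRequest(vector=v, limit=4, with_payload=True) for v in vectors]
results = qdrant.client.search_batch(
    collection_name=qdrant.collection_name,
    requests=requests,
)
```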
### Motivation
This feature request is born out of the need for more efficient text searches when dealing with large data sets. Currently, using `langchain` for search functionality becomes cumbersome and time-consuming due to the lack of batch search capabilities. Running single text searches one after another restricts the speed of operations and is not scalable when dealing with large text corpuses.
Given the increasing size and complexity of modern text datasets and applications, it is more pertinent than ever to have a robust and efficient method of search that can handle bulk operations. With this enhancement, we can perform multiple text searches simultaneously, thus saving considerable time and computing resources.
### Your contribution
While I would love to contribute, I simply do not have the time right now, so I hope that the great community currently building `langchain` sees some sense in the paragraphs above. | Batch search for Qdrant database | https://api.github.com/repos/langchain-ai/langchain/issues/6818/comments | 1 | 2023-06-27T11:37:00Z | 2023-10-05T16:08:11Z | https://github.com/langchain-ai/langchain/issues/6818 | 1,776,722,095 | 6,818 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.200 in Debian 11
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Pydantic Model:
```py
class SortCondition(BaseModel):
field: str = Field(description="Field name")
order: str = Field(description="Sort order", enum=["desc", "asc"])
class RecordQueryCondition(BaseModel):
datasheet_id: str = Field(description="The ID of the datasheet to retrieve records from.")
filter_condition: Optional[Dict[str, str]] = Field(
description="""
Find records that meet specific conditions.
This object should contain a key-value pair where the key is the field name and the value is the lookup value. For instance: {"title": "test"}.
"""
)
sort_condition: Optional[List[SortCondition]] = Field(min_items=1,
description="Sort returned records by specific field"
)
maxRecords_condition: Optional[int] = Field(
description="Limit the number of returned values."
)
```
OpenAI return parameters:
```json
{
"datasheet_id": "dsti6VpNpuKQpHVSnh",
"sort_condition": [
{
"field": "Timestamp",
"direction": "desc" # error key!
}
],
"maxRecords_condition": 1
}
```
So, LangChain raises an error:
ValidationError: 1 validation error for RecordQueryCondition
sort_condition -> 0 -> order
field required (type=value_error.missing)
This is the source code: https://github.com/xukecheng/APITable-LLMs-Enhancement-Experiments/blob/main/apitable_langchain_function.ipynb
### Expected behavior
When I use the OpenAI API directly to define the functions, it works properly: https://github.com/xukecheng/APITable-LLMs-Enhancement-Experiments/blob/main/apitable_openai_function.ipynb | Issue: The parameters passed by the OpenAI function agent seem to have a problem. | https://api.github.com/repos/langchain-ai/langchain/issues/6814/comments | 2 | 2023-06-27T10:01:30Z | 2023-07-04T07:42:15Z | https://github.com/langchain-ai/langchain/issues/6814 | 1,776,544,976 | 6,814
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi,
I was trying to get `PythonREPL `or in this case `PythonAstREPL `to work with the `OpenAIMultiFunctionsAgent` reliably, because I came across the same problem as mentioned in this issue: https://github.com/hwchase17/langchain/issues/6364.
I applied the mentioned fix, which worked very well for the REPL tool, but sadly it also broke the usage of all other tools. The agent repeatedly reports `tool_selection is not a valid tool, try another one.`
This is my code to create the agent and tools, as well as to apply the fix (I'm using Chainlit for the UI):
```
@cl.langchain_factory(use_async=False)
def factory():
# Initialize the OpenAI language model
model = llms[str(use_model)]
llm = ChatOpenAI(
temperature=0,
model=model,
streaming=True,
client="openai",
# callbacks=[cl.ChainlitCallbackHandler()]
)
# Initialize the SerpAPIWrapper for search functionality
search = SerpAPIWrapper(search_engine="google")
# Define a list of tools offered by the agent
tools = [
Tool(
name="Search",
func=search.run,
description="Useful when you need to answer questions about current events or if you have to search the web. You should ask targeted questions like for google."
),
CustomPythonAstREPLTool(),
WriteFileTool(),
ReadFileTool(),
CopyFileTool(),
MoveFileTool(),
DeleteFileTool(),
FileSearchTool(),
ListDirectoryTool(),
ShellTool(),
HumanInputRun(),
]
memory = ConversationTokenBufferMemory(
memory_key="memory",
return_messages=True,
max_token_limit=2000,
llm=llm
)
# needed for memory with openai functions agent
agent_kwargs = {
"extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")],
}
prompt = OpenAIFunctionsAgent.create_prompt(
extra_prompt_messages=[MessagesPlaceholder(variable_name="memory")],
),
print("Prompt: ", prompt[0])
cust_agent = CustomOpenAIMultiFunctionsAgent(
tools=tools,
llm=llm,
prompt=prompt[0],
# kwargs=agent_kwargs,
# return_intermediate_steps=True,
)
mrkl = AgentExecutor.from_agent_and_tools(
agent=cust_agent,
tools=tools,
memory=memory,
# kwargs=agent_kwargs,
# return_intermediate_steps=True,
)
return mrkl
# ----- Custom classes and functions ----- #
class CustomPythonAstREPLTool(PythonAstREPLTool):
name = "python"
description = (
"A Python shell. Use this to execute python commands. "
"The input must be an object as follows: "
"{'__arg1': 'a valid python command.'} "
"When using this tool, sometimes output is abbreviated - "
"Make sure it does not look abbreviated before using it in your answer. "
"Don't add comments to your python code."
)
def _parse_ai_message(message: BaseMessage) -> Union[AgentAction, AgentFinish]:
"""Parse an AI message."""
if not isinstance(message, AIMessage):
raise TypeError(f"Expected an AI message got {type(message)}")
function_call = message.additional_kwargs.get("function_call", {})
if function_call:
function_call = message.additional_kwargs["function_call"]
function_name = function_call["name"]
try:
_tool_input = json.loads(function_call["arguments"])
except JSONDecodeError:
print(
f"Could not parse tool input: {function_call} because "
f"the `arguments` is not valid JSON."
)
_tool_input = function_call["arguments"]
# HACK HACK HACK:
# The code that encodes tool input into Open AI uses a special variable
# name called `__arg1` to handle old style tools that do not expose a
# schema and expect a single string argument as an input.
# We unpack the argument here if it exists.
# Open AI does not support passing in a JSON array as an argument.
if "__arg1" in _tool_input:
tool_input = _tool_input["__arg1"]
else:
tool_input = _tool_input
content_msg = "responded: {content}\n" if message.content else "\n"
return _FunctionsAgentAction(
tool=function_name,
tool_input=tool_input,
log=f"\nInvoking: `{function_name}` with `{tool_input}`\n{content_msg}\n",
message_log=[message],
)
return AgentFinish(return_values={"output": message.content}, log=message.content)
class CustomOpenAIMultiFunctionsAgent(OpenAIMultiFunctionsAgent):
def plan(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
callbacks: Callbacks = None,
**kwargs: Any,
) -> Union[AgentAction, AgentFinish]:
"""Given input, decided what to do.
Args:
intermediate_steps: Steps the LLM has taken to date, along with observations
**kwargs: User inputs.
Returns:
Action specifying what tool to use.
"""
user_input = kwargs["input"]
agent_scratchpad = _format_intermediate_steps(intermediate_steps)
memory = kwargs["memory"]
prompt = self.prompt.format_prompt(
input=user_input, agent_scratchpad=agent_scratchpad, memory=memory
)
messages = prompt.to_messages()
predicted_message = self.llm.predict_messages(
messages, functions=self.functions, callbacks=callbacks
)
agent_decision = _parse_ai_message(predicted_message)
return agent_decision
```
And these are the console outputs with `langchain.debug = True`:
```
Prompt: input_variables=['memory', 'agent_scratchpad', 'input'] output_parser=None partial_variables={}
messages=[SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), MessagesPlaceholder(variable_name='memory'), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], output_parser=None, partial_variables={}, template='{input}', template_format='f-string', validate_template=True), additional_kwargs={}), MessagesPlaceholder(variable_name='agent_scratchpad')]
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input:
{
"input": "write a text file to my desktop",
"memory": []
}
[llm/start] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: You are a helpful AI assistant.\nHuman: write a text file to my desktop"
]
}
[llm/end] [1:chain:AgentExecutor > 2:llm:ChatOpenAI] [3.00s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "",
"generation_info": null,
"message": {
"content": "",
"additional_kwargs": {
"function_call": {
"name": "tool_selection",
"arguments": "{\n \"actions\": [\n {\n \"action_name\": \"write_file\",\n \"action\": {\n \"file_path\": \"~/Desktop/my_file.txt\",\n \"text\": \"This is the content of my file.\"\n }\n }\n ]\n}"
}
},
"example": false
}
}
]
],
"llm_output": null,
"run": null
}
[tool/start] [1:chain:AgentExecutor > 3:tool:invalid_tool] Entering Tool run with input:
"tool_selection"
[tool/end] [1:chain:AgentExecutor > 3:tool:invalid_tool] [0.0ms] Exiting Tool run with output:
"tool_selection is not a valid tool, try another one."
[llm/start] [1:chain:AgentExecutor > 4:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: You are a helpful AI assistant.\nHuman: write a text file to my desktop\nAI: {'name': 'tool_selection', 'arguments': '{\\n \"actions\": [\\n {\\n \"action_name\": \"write_file\",\\n \"action\": {\\n \"file_path\": \"~/Desktop/my_file.txt\",\\n \"text\": \"This is the content of my file.\"\\n }\\n }\\n ]\\n}'}\nFunction: tool_selection is not a valid tool, try another one."
]
}
[llm/end] [1:chain:AgentExecutor > 4:llm:ChatOpenAI] [2.42s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "",
"generation_info": null,
"message": {
"content": "",
"additional_kwargs": {
"function_call": {
"name": "tool_selection",
"arguments": "{\n \"actions\": [\n {\n \"action_name\": \"write_file\",\n \"action\": {\n \"file_path\": \"~/Desktop/my_file.txt\",\n \"text\": \"This is the content of my file.\"\n }\n }\n ]\n}"
}
},
"example": false
}
}
]
],
"llm_output": null,
"run": null
}
[tool/start] [1:chain:AgentExecutor > 5:tool:invalid_tool] Entering Tool run with input:
"tool_selection"
[tool/end] [1:chain:AgentExecutor > 5:tool:invalid_tool] [0.0ms] Exiting Tool run with output:
"tool_selection is not a valid tool, try another one."
[llm/start] [1:chain:AgentExecutor > 6:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: You are a helpful AI assistant.\nHuman: write a text file to my desktop\nAI: {'name': 'tool_selection', 'arguments': '{\\n \"actions\": [\\n {\\n \"action_name\": \"write_file\",\\n \"action\": {\\n \"file_path\": \"~/Desktop/my_file.txt\",\\n \"text\": \"This is the content of my file.\"\\n }\\n }\\n ]\\n}'}\nFunction: tool_selection is not a valid tool, try another one.\nAI: {'name': 'tool_selection', 'arguments': '{\\n \"actions\": [\\n {\\n \"action_name\": \"write_file\",\\n \"action\": {\\n \"file_path\": \"~/Desktop/my_file.txt\",\\n \"text\": \"This is the content of my file.\"\\n }\\n }\\n ]\\n}'}\nFunction: tool_selection is not a valid tool, try another one."
]
}
[llm/end] [1:chain:AgentExecutor > 6:llm:ChatOpenAI] [2.35s] Exiting LLM run with output:
{
"generations": [
[
{
"text": "",
"generation_info": null,
"message": {
"content": "",
"additional_kwargs": {
"function_call": {
"name": "tool_selection",
"arguments": "{\n \"actions\": [\n {\n \"action_name\": \"write_file\",\n \"action\": {\n \"file_path\": \"~/Desktop/my_file.txt\",\n \"text\": \"This is the content of my file.\"\n }\n }\n ]\n}"
}
},
"example": false
}
}
]
],
"llm_output": null,
"run": null
}
2023-06-27 11:07:24 - Error in ChainlitCallbackHandler.on_tool_start callback: Task stopped by user
[chain/error] [1:chain:AgentExecutor] [7.86s] Chain run errored with error:
"InterruptedError('Task stopped by user')"
```
Langchain Plus run: https://www.langchain.plus/public/b6c08e7e-bdb0-4792-a291-545e055ad966/r
### Suggestion:
_No response_ | Issue[Bug]: OpenAIMultiFunctionsAgent stuck - 'tool_selection is not a valid tool' | https://api.github.com/repos/langchain-ai/langchain/issues/6813/comments | 1 | 2023-06-27T09:09:28Z | 2023-10-05T16:07:36Z | https://github.com/langchain-ai/langchain/issues/6813 | 1,776,446,588 | 6,813 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.215 and langchain 0.0.216
python 3.9
chromadb 0.3.21
### Who can help?
@agola11 @hw
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import VectorDBQA
from langchain.document_loaders import TextLoader
from langchain.embeddings import HuggingFaceEmbeddings, HuggingFaceInstructEmbeddings

loader = TextLoader('state_of_the_union.txt')
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
persist_directory = 'db'
embedding = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-large")
vectordb = Chroma.from_documents(documents=documents, embedding=embedding, persist_directory=persist_directory)
vectordb.persist()
vectordb = None
```
[state_of_the_union.txt](https://github.com/hwchase17/langchain/files/11879392/state_of_the_union.txt)
The detailed error information is attached below:
[error_info.txt](https://github.com/hwchase17/langchain/files/11879458/error_info.txt)
I don't know why there is an error "AttributeError: 'Collection' object has no attribute 'upsert'".
When I downgrade the langchain version to 0.0.177, everything goes back to normal.
### Expected behavior
The document should be stored locally for later retrieval. | The latest version langchain encountered errors when saving Chroma locally, "error "AttributeError: 'Collection' object has no attribute 'upsert'"" | https://api.github.com/repos/langchain-ai/langchain/issues/6811/comments | 3 | 2023-06-27T08:25:25Z | 2024-02-16T17:31:05Z | https://github.com/langchain-ai/langchain/issues/6811 | 1,776,368,368 | 6,811
[
"hwchase17",
"langchain"
]
| ### Feature request
Would you please consider supporting kwargs in GoogleSearchAPIWrapper's `run` / `results` calls
(https://python.langchain.com/docs/modules/agents/tools/integrations/google_search)
for extra filtering on search? For example, I'd like to add the "cr" option to a CSE search, but it seems that I cannot pass any options to the `run` / `results` methods, although the internal function `_google_search_results` supports passing extra options to the search engine.
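Roughly, I would like to be able to write something like this (the `cr` pass-through is the proposed part and does not exist today):
```python
from langchain.utilities import GoogleSearchAPIWrapper

search = GoogleSearchAPIWrapper()

# Proposed: extra CSE parameters forwarded to the underlying cse().list(...) call
results = search.results("langchain", num_results=5, cr="countryJP")
```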
### Motivation
I'd like to add the "cr" option to a CSE search, but it seems that I cannot pass any options to the `run` / `results` methods, although the internal function `_google_search_results` supports passing extra options to the search engine.
### Your contribution
If you allow me, I'd like to make a PR for this change. | Support kwargs on GoogleSearchApiWrapper run / result | https://api.github.com/repos/langchain-ai/langchain/issues/6810/comments | 1 | 2023-06-27T07:52:22Z | 2023-08-31T04:06:32Z | https://github.com/langchain-ai/langchain/issues/6810 | 1,776,309,539 | 6,810
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have a sentence, and I'd like to extract entities from it. On each entity, I'd like to run a custom tool for validation. Is this possible via agents? I've looked through the documentation but couldn't find any related topics.
### Suggestion:
_No response_ | Issue: How to iterate using agents | https://api.github.com/repos/langchain-ai/langchain/issues/6809/comments | 1 | 2023-06-27T07:49:29Z | 2023-10-05T16:07:41Z | https://github.com/langchain-ai/langchain/issues/6809 | 1,776,305,294 | 6,809 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version: 0.0.215
python version: Python 3.8.8
### Who can help?
@hwchase17 @agola11 @ey
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I take the example from https://python.langchain.com/docs/modules/chains/additional/question_answering#the-map_reduce-chain. I skip the retrieval part and pass the whole document into `load_qa_chain` with `chain_type="map_reduce"`:
```
from langchain.document_loaders import TextLoader
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
loader = TextLoader("state_of_the_union.txt")
documents = loader.load()
chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce")
query = "What did the president say about Justice Breyer"
chain({"input_documents": documents, "question": query}, return_only_outputs=True)
```
The error below occurs:
```
InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 9640 tokens (9384 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
```
When `documents` is a long document, setting `chain_type="map_reduce"` does not seem to work. Why, and how can I solve it?
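For what it's worth, a workaround sketch would be to split the document into chunks that fit the context window before calling the chain (assuming `RecursiveCharacterTextSplitter` is acceptable here), but I assumed `map_reduce` would handle this internally:
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Split the single long Document into smaller chunks before mapping over them.
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
split_docs = text_splitter.split_documents(documents)

chain({"input_documents": split_docs, "question": query}, return_only_outputs=True)
```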
Thanks a lot!
### Expected behavior
`load_qa_chain` with `chain_type="map_reduce"` should be able to process a long document directly, shouldn't it? | load_qa_chain with chain_type="map_reduce" can not process long document | https://api.github.com/repos/langchain-ai/langchain/issues/6805/comments | 3 | 2023-06-27T06:25:37Z | 2023-10-05T16:09:37Z | https://github.com/langchain-ai/langchain/issues/6805 | 1,776,178,595 | 6,805
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.10, Langchain > v0.0.212
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Enable caching with langchain.llm_cache = RedisCache
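A minimal sketch of this setup (assuming a chat model and a local Redis; connection details are placeholders):
```python
import langchain
from redis import Redis
from langchain.cache import RedisCache
from langchain.chat_models import ChatOpenAI

langchain.llm_cache = RedisCache(redis_=Redis())  # placeholder: local Redis instance

llm = ChatOpenAI(temperature=0)
# Caching the chat model's result triggers the ValueError from the title.
llm.predict("Hello")
```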
### Expected behavior
This works with version <= v0.0.212 | ValueError: RedisCache only supports caching of normal LLM generations, got <class 'langchain.schema.ChatGeneration'> | https://api.github.com/repos/langchain-ai/langchain/issues/6803/comments | 1 | 2023-06-27T05:41:52Z | 2023-06-29T12:08:37Z | https://github.com/langchain-ai/langchain/issues/6803 | 1,776,131,453 | 6,803 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Have SOURCES info in map_rerank's answer similar to the information available for 'map_reduce' and 'stuff' chain_type options.
### Motivation
Standardization of output
Indicate answer source when map-rerank is used with ConversationalRetrievalChain
### Your contribution
https://github.com/hwchase17/langchain/pull/6794 | Source info in map_rerank's answer | https://api.github.com/repos/langchain-ai/langchain/issues/6795/comments | 1 | 2023-06-27T01:33:01Z | 2023-10-05T16:07:51Z | https://github.com/langchain-ai/langchain/issues/6795 | 1,775,936,801 | 6,795 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.216, OS X 11.6, Python 3.11.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Setup OpenAIEmbeddings method with Azure arguments
2. Split text with a splitter like RecursiveCharacterTextSplitter
3. Use text and embedding function in chroma.from_texts
```python
import openai
import os
from dotenv import load_dotenv, find_dotenv
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
_ = load_dotenv(find_dotenv())
API_KEY = os.environ.get('STAGE_API_KEY')
API_VERSION = os.environ.get('API_VERSION')
RESOURCE_ENDPOINT = os.environ.get('RESOURCE_ENDPOINT')
openai.api_type = "azure"
openai.api_key = API_KEY
openai.api_base = RESOURCE_ENDPOINT
openai.api_version = API_VERSION
openai.log = "debug"
sample_text = 'This metabolite causes atherosclerosis in the liver[55]. Strengths and limitations This is the first thorough bibliometric analysis of nutrition and gut microbiota research conducted on a global level.'
embed_deployment_id = 'text-embedding-ada-002'
embed_model = 'text-embedding-ada-002'
persist_directory = "./storage_openai_chunks" # will be created if not existing
embeddings = OpenAIEmbeddings(
deployment=embed_deployment_id,
model=embed_model,
openai_api_key=API_KEY,
openai_api_base=RESOURCE_ENDPOINT,
openai_api_type="azure",
openai_api_version=API_VERSION,
)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=40, chunk_overlap=10)
texts = text_splitter.split_text(sample_text)
vectordb = Chroma.from_texts(collection_name='test40',
texts=texts,
embedding=embeddings,
persist_directory=persist_directory)
vectordb.persist()
print(vectordb.get())
```

```
message='Request to OpenAI API' method=post path=https://***/openai/deployments/text-embedding-ada-002/embeddings?api-version=2023-05-15
api_version=2023-05-15 data='{"input": [[2028, 28168, 635, 11384, 264, 91882, 91711], [258, 279, 26587, 58, 2131, 948, 32937, 82, 323], [438, 9669, 1115, 374, 279, 1176], [1820, 1176, 17879, 44615, 24264], [35584, 315, 26677, 323, 18340], [438, 18340, 53499, 6217, 3495, 13375], [444, 55015, 389, 264, 3728, 2237, 13]], "encoding_format": "base64"}' message='Post details'
message='OpenAI API response' path=https://***/openai/deployments/text-embedding-ada-002/embeddings?api-version=2023-05-15 processing_ms=None request_id=None response_code=400
body='{\n "error": "/input/6 expected type: String, found: JSONArray\\n/input/5 expected type: String, found: JSONArray\\n/input/4 expected type: String, found: JSONArray\\n/input/3 expected type: String, found: JSONArray\\n/input/2 expected type: String, found: JSONArray\\n/input/1 expected type: String, found: JSONArray\\n/input/0 expected type: String, found: JSONArray\\n/input expected: null, found: JSONArray\\n/input expected type: String, found: JSONArray"\n}' headers="{'Date': 'Tue, 27 Jun 2023 00:08:56 GMT', 'Content-Type': 'application/json; charset=UTF-8', 'Content-Length': '454', 'Connection': 'keep-alive', 'Strict-Transport-Security': 'max-age=16070400; includeSubDomains', 'Set-Cookie': 'TS01bd4155=0179bf738063e38fbf3fffb70b7f9705fd626c2df1126f29599084aa69d137b77c61d6377a118a5ebe5a1f1f9f314c22a777a0e861; Path=/; Domain=.***', 'Vary': 'Accept-Encoding'}" message='API response body'
Traceback (most recent call last):
File "/Users/A/dev/python/openai/langchain_embed_issue.py", line 39, in <module>
vectordb = Chroma.from_texts(collection_name='test40',
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 403, in from_texts
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 148, in add_texts
embeddings = self._embedding_function.embed_documents(list(texts))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 465, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 302, in _get_len_safe_embeddings
response = embed_with_retry(
^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 97, in embed_with_retry
return _embed_with_retry(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 95, in _embed_with_retry
return embeddings.client.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/openai/api_resources/embedding.py", line 33, in create
response = super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/openai/api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/openai/api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/A/anaconda3/envs/openai1/lib/python3.11/site-packages/openai/api_requestor.py", line 418, in handle_error_response
error_code=error_data.get("code"),
^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'get'
Process finished with exit code 1
```
### Expected behavior
OpenAIEmbeddings should return embeddings instead of an error.
Azure currently only accepts str input (in contrast to OpenAI, which accepts tokens or strings), so the request is rejected because OpenAIEmbeddings sends tokens only. The Azure embedding API docs confirm this: the request body's input parameter is of type string: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/reference#embeddings
Second, after modifying openai.py to send strings, Azure complains that it only accepts one input at a time; in other words, it doesn't accept batches of strings (or tokens). So the for-loop increment was modified to send one decoded batch of tokens (i.e., the original str chunk) at a time.
Modifying embeddings/openai.py with:
```python
# batched_embeddings = []
# _chunk_size = chunk_size or self.chunk_size
# for i in range(0, len(tokens), _chunk_size):
# response = embed_with_retry(
# self,
# input=tokens[i : i + _chunk_size],
# **self._invocation_params,
# )
# batched_embeddings += [r["embedding"] for r in response["data"]]
batched_embeddings = []
_chunk_size = chunk_size or self.chunk_size if 'azure' not in self.openai_api_type else 1
#
#
for i in range(0, len(tokens), _chunk_size):
embed_input = encoding.decode(tokens[i]) if 'azure' in self.openai_api_type else tokens[i : i + _chunk_size]
response = embed_with_retry(
self,
input=embed_input,
**self._invocation_params,
)
batched_embeddings += [r["embedding"] for r in response["data"]]
```
and re-running the code:
```text
# same code
...
message='Request to OpenAI API' method=post path=https://***/openai/deployments/text-embedding-ada-002/embeddings?api-version=2023-05-15
api_version=2023-05-15 data='{"input": "This metabolite causes atherosclerosis", "encoding_format": "base64"}' message='Post details'
message='OpenAI API response' path=https://***/openai/deployments/text-embedding-ada-002/embeddings?api-version=2023-05-15 processing_ms=27.0109 request_id=47bee143-cb00-4782-8560-f267ee839af4 response_code=200
body='{\n "object": "list",\n "data": [\n {\n "object": "embedding",\n "index": 0,\n "embedding": "5zPWu+V2e7w75Ia7HeCavKhhE71NQhA865WYvE+Y9DuB8ce8Xak7uhgQgble4z48H8L4uyePnzu2XVq8ucg+u7ZdWj28ofq7Jzd6PMFMkbvQiIq8nbuwPFJMLTxGe5i83c2lPIXQsjzPToc8taB/vZlZ7ryVjwM8jsiLPIvLfrywnBG9RjLEO2XkuTpOMz
... (removed for brevity)
/gP7uzTTC8RZf5PMOULTv2D4C7caQfvR60EbyqjZ48yqxUuzHeLzhSFJW8qDu5uwcj7zyeDnO8UMKvPNLEezxNixm6X7U3vBeDqzumrI08jzQqPDZObLzZS2c843itO9a+y7w+mJG8gChjPAIHqLqEeLg6ysUTvfqaizzT2yo77Di/u3A3azyziva8ct9VvI80Kry1n5U7ipJvvHy2FjuAQSK9"\n }\n ],\n "model": "ada",\n "usage": {\n "prompt_tokens": 7,\n "total_tokens": 7\n }\n}\n' headers="{'Date': 'Tue, 27 Jun 2023 00:20:13 GMT', 'Content-Type': 'application/json', 'Content-Length': '8395', 'Connection': 'keep-alive', 'x-ms-region': 'East US', 'apim-request-id': 'b932333d-1eb9-415a-a84b-da1c5f95433b', 'x-content-type-options': 'nosniff, nosniff', 'openai-processing-ms': '26.8461', 'access-control-allow-origin': '*', 'x-request-id': '0677d084-2449-486c-9bff-b6ef07df004f', 'x-ms-client-request-id': 'b932333d-1eb9-415a-a84b-da1c5f95433b', 'strict-transport-security': 'max-age=31536000; includeSubDomains; preload, max-age=16070400; includeSubDomains', 'X-Frame-Options': 'SAMEORIGIN', 'X-XSS-Protection': '1; mode=block'}" message='API response body'
{'ids': ['60336172-1480-11ee-b223-acde48001122', '6033621c-1480-11ee-b223-acde48001122', '60336280-1480-11ee-b223-acde48001122', '603362b2-1480-11ee-b223-acde48001122', '603362da-1480-11ee-b223-acde48001122', '603362f8-1480-11ee-b223-acde48001122', '60336370-1480-11ee-b223-acde48001122'], 'embeddings': None, 'documents': ['This metabolite causes atherosclerosis', 'in the liver[55]. Strengths and', 'and limitations This is the first', 'the first thorough bibliometric', 'analysis of nutrition and gut', 'and gut microbiota research conducted', 'conducted on a global level.'], 'metadatas': [None, None, None, None, None, None, None]}
```
I also made the following change to openai.py a few lines later, although this is untested:
```python
batched_embeddings = []
_chunk_size = chunk_size or self.chunk_size if 'azure' not in self.openai_api_type else 1
# azure only accepts str input, currently one list element at a time
for i in range(0, len(tokens), _chunk_size):
embed_input = encoding.decode(tokens[i]) if 'azure' in self.openai_api_type else tokens[i : i + _chunk_size]
response = await async_embed_with_retry(
self,
input=embed_input,
**self._invocation_params,
)
batched_embeddings += [r["embedding"] for r in response["data"]]
```
| Azure rejects tokens sent by OpenAIEmbeddings, expects strings | https://api.github.com/repos/langchain-ai/langchain/issues/6793/comments | 2 | 2023-06-27T01:01:39Z | 2024-05-28T14:17:44Z | https://github.com/langchain-ai/langchain/issues/6793 | 1,775,913,567 | 6,793 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Right now only the `text-bison` model is supported for Google PaLM. When I tried `code-bison`, it threw the error below:
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[39], line 1
----> 1 llm = VertexAI(model_name='code-bison')
File /opt/conda/envs/python310/lib/python3.10/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for VertexAI
__root__
Unknown model publishers/google/models/code-bison; {'gs://google-cloud-aiplatform/schema/predict/instance/text_generation_1.0.0.yaml': <class 'vertexai.language_models._language_models._PreviewTextGenerationModel'>} (type=value_error)
```
Can we get support for the other models as well?
### Motivation
Support for many other PaLM models
### Your contribution
I can update and test the code for the other models. | Request to Support other VertexAI's LLM model Support | https://api.github.com/repos/langchain-ai/langchain/issues/6779/comments | 10 | 2023-06-26T19:48:47Z | 2024-05-17T10:39:46Z | https://github.com/langchain-ai/langchain/issues/6779 | 1,775,480,124 | 6,779
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.216 (I have had this issue since I started with Langchain at 0.0.198)
python 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [x] Callbacks/Tracing
- [ ] Async
### Reproduction
I run this function with
```json
{
"prompt": "some prompt",
"temperature": 0.2
}
```
```python3
from typing import Any
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import PromptTemplate
from app.config.api_keys import ApiKeys
from app.repositories.pgvector_repository import PGVectorRepository
def completions_service(data) -> dict[str, Any]:
pgvector = PGVectorRepository().instance
tech_template = """Some prompt info (redacted)
{summaries}
Q: {question}
A: """
prompt_template = PromptTemplate(
template=tech_template, input_variables=["summaries", "question"]
)
qa = RetrievalQAWithSourcesChain.from_chain_type(llm=ChatOpenAI(temperature=data['temperature'],
model_name="gpt-3.5-turbo",
openai_api_key=ApiKeys.openai),
chain_type="stuff",
retriever=pgvector.as_retriever(),
chain_type_kwargs={"prompt": prompt_template},
return_source_documents=True,
verbose=True,
)
output = qa({"question": data['prompt']})
return output
```
I get this in the command line
```bash
> Entering new chain...
> Finished chain.
```
For some reason, there is a double space in the Entering line (as if something were meant to be there), and then nothing until it says Finished. I have set verbose=True, but no luck.
### Expected behavior
I would expect to see the embeddings, the full prompt being sent to OpenAI, etc., but I get nothing. | Verbose flag not outputting anything other than Entering chain and Finished chain | https://api.github.com/repos/langchain-ai/langchain/issues/6778/comments | 2 | 2023-06-26T19:31:15Z | 2023-07-09T13:23:46Z | https://github.com/langchain-ai/langchain/issues/6778 | 1,775,453,879 | 6,778
[
"hwchase17",
"langchain"
]
| ### System Info
Windows 10
Name: langchain
Version: 0.0.208
Summary: Building applications with LLMs through composability
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from pydantic import BaseModel, Field
from langchain.chat_models import ChatOpenAI, AzureChatOpenAI
from langchain.agents import Tool
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.document_loaders import PyPDFLoader
from langchain.chains import RetrievalQA
# Here a proper Azure OpenAI service needs to be defined
OPENAI_API_KEY=" "
OPENAI_DEPLOYMENT_ENDPOINT="https://gptkbopenai.openai.azure.com/"
OPENAI_DEPLOYMENT_NAME = "gptkbopenai"
OPENAI_MODEL_NAME = "GPT4"
#OPENAI_EMBEDDING_DEPLOYMENT_NAME = os.getenv("OPENAI_EMBEDDING_DEPLOYMENT_NAME")
OPENAI_EMBEDDING_MODEL_NAME = "text-embedding-ada-002"
OPENAI_DEPLOYMENT_VERSION = "2023-03-15-preview"
OPENAI_API_TYPE = "azure"
OPENAI_API_BASE = "https://gptkbopenai.openai.azure.com/"
OPENAI_EMBEDDING_DEPLOYMENT_NAME = "text-embedding-ada-002"
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import AzureChatOpenAI
import os
os.environ["OPENAI_API_TYPE"] = OPENAI_API_TYPE
os.environ["OPENAI_API_BASE"] = OPENAI_API_BASE
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
import faiss
# Code adapted from Langchain document comparison toolkit
class DocumentInput(BaseModel):
question: str = Field()
tools = []
files = [
{
"name": "belfast",
"path": "C:\\Users\\625050\\OneDrive - Clifford Chance LLP\\Documents\\Projects\\ChatGPT\\LeaseTest\\Belfast.pdf",
},
{
"name": "bournemouth",
"path": "C:\\Users\\625050\\OneDrive - Clifford Chance LLP\\Documents\\Projects\\ChatGPT\\LeaseTest\\Bournemouth.pdf",
}
]
llm = AzureChatOpenAI(deployment_name= "GPT4")
for file in files:
loader = PyPDFLoader(file["path"])
pages = loader.load_and_split()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(pages)
print(docs[0])
embeddings = OpenAIEmbeddings(deployment=OPENAI_EMBEDDING_DEPLOYMENT_NAME , chunk_size=1)
#embeddings = OpenAIEmbeddings(model='text-embedding-ada-002',
#deployment=OPENAI_DEPLOYMENT_NAME,
#openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
#openai_api_type='azure',
#openai_api_key= OPENAI_API_KEY,
#chunk_size = 1
# )
# = OpenAIEmbeddings()
retriever = FAISS.from_documents(docs, embeddings).as_retriever()
# Wrap retrievers in a Tool
tools.append(
Tool(
args_schema=DocumentInput,
name=file["name"],
description=f"useful when you want to answer questions about {file['name']}",
func=RetrievalQA.from_chain_type(llm=llm, retriever=retriever)
)
)
agent = initialize_agent(
agent=AgentType.OPENAI_FUNCTIONS,
tools=tools,
llm=llm,
verbose=True,
)
agent({"input": "Who are the landlords?"})
```

Error:

```
Entering new chain...
/openai/deployments/GPT4/chat/completions?api-version=2023-03-15-preview None False None None
---------------------------------------------------------------------------
InvalidRequestError Traceback (most recent call last)
Cell In[5], line 15
1 #llm = AzureChatOpenAI()#model_kwargs = {'deployment': "GPT4"},
2 #model_name=OPENAI_MODEL_NAME,
3 #openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
4 #openai_api_version=OPENAI_DEPLOYMENT_VERSION,
5 #openai_api_key=OPENAI_API_KEY
6 #)
8 agent = initialize_agent(
9 agent=AgentType.OPENAI_FUNCTIONS,
10 tools=tools,
11 llm=llm,
12 verbose=True,
13 )
---> 15 agent({"input": "Who are the landlords?"})
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chains\base.py:166, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
168 final_outputs: Dict[str, Any] = self.prep_outputs(
169 inputs, outputs, return_only_outputs
170 )
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chains\base.py:160, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
154 run_manager = callback_manager.on_chain_start(
155 dumpd(self),
156 inputs,
157 )
158 try:
159 outputs = (
--> 160 self._call(inputs, run_manager=run_manager)
161 if new_arg_supported
162 else self._call(inputs)
163 )
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\agents\agent.py:957, in AgentExecutor._call(self, inputs, run_manager)
955 # We now enter the agent loop (until it returns something).
956 while self._should_continue(iterations, time_elapsed):
--> 957 next_step_output = self._take_next_step(
958 name_to_tool_map,
959 color_mapping,
960 inputs,
961 intermediate_steps,
962 run_manager=run_manager,
963 )
964 if isinstance(next_step_output, AgentFinish):
965 return self._return(
966 next_step_output, intermediate_steps, run_manager=run_manager
967 )
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\agents\agent.py:762, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
756 """Take a single step in the thought-action-observation loop.
757
758 Override this to take control of how the agent makes and acts on choices.
759 """
760 try:
761 # Call the LLM to see what to do.
--> 762 output = self.agent.plan(
763 intermediate_steps,
764 callbacks=run_manager.get_child() if run_manager else None,
765 **inputs,
766 )
767 except OutputParserException as e:
768 if isinstance(self.handle_parsing_errors, bool):
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\agents\openai_functions_agent\base.py:209, in OpenAIFunctionsAgent.plan(self, intermediate_steps, callbacks, **kwargs)
207 prompt = self.prompt.format_prompt(**full_inputs)
208 messages = prompt.to_messages()
--> 209 predicted_message = self.llm.predict_messages(
210 messages, functions=self.functions, callbacks=callbacks
211 )
212 agent_decision = _parse_ai_message(predicted_message)
213 return agent_decision
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chat_models\base.py:258, in BaseChatModel.predict_messages(self, messages, stop, **kwargs)
256 else:
257 _stop = list(stop)
--> 258 return self(messages, stop=_stop, **kwargs)
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chat_models\base.py:208, in BaseChatModel.__call__(self, messages, stop, callbacks, **kwargs)
201 def __call__(
202 self,
203 messages: List[BaseMessage],
(...)
206 **kwargs: Any,
207 ) -> BaseMessage:
--> 208 generation = self.generate(
209 [messages], stop=stop, callbacks=callbacks, **kwargs
210 ).generations[0][0]
211 if isinstance(generation, ChatGeneration):
212 return generation.message
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chat_models\base.py:102, in BaseChatModel.generate(self, messages, stop, callbacks, tags, **kwargs)
100 except (KeyboardInterrupt, Exception) as e:
101 run_manager.on_llm_error(e)
--> 102 raise e
103 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
104 generations = [res.generations for res in results]
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chat_models\base.py:94, in BaseChatModel.generate(self, messages, stop, callbacks, tags, **kwargs)
90 new_arg_supported = inspect.signature(self._generate).parameters.get(
91 "run_manager"
92 )
93 try:
---> 94 results = [
95 self._generate(m, stop=stop, run_manager=run_manager, **kwargs)
96 if new_arg_supported
97 else self._generate(m, stop=stop)
98 for m in messages
99 ]
100 except (KeyboardInterrupt, Exception) as e:
101 run_manager.on_llm_error(e)
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chat_models\base.py:95, in <listcomp>(.0)
90 new_arg_supported = inspect.signature(self._generate).parameters.get(
91 "run_manager"
92 )
93 try:
94 results = [
---> 95 self._generate(m, stop=stop, run_manager=run_manager, **kwargs)
96 if new_arg_supported
97 else self._generate(m, stop=stop)
98 for m in messages
99 ]
100 except (KeyboardInterrupt, Exception) as e:
101 run_manager.on_llm_error(e)
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chat_models\openai.py:359, in ChatOpenAI._generate(self, messages, stop, run_manager, **kwargs)
351 message = _convert_dict_to_message(
352 {
353 "content": inner_completion,
(...)
356 }
357 )
358 return ChatResult(generations=[ChatGeneration(message=message)])
--> 359 response = self.completion_with_retry(messages=message_dicts, **params)
360 return self._create_chat_result(response)
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chat_models\openai.py:307, in ChatOpenAI.completion_with_retry(self, **kwargs)
303 @retry_decorator
304 def _completion_with_retry(**kwargs: Any) -> Any:
305 return self.client.create(**kwargs)
--> 307 return _completion_with_retry(**kwargs)
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\tenacity\__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
287 @functools.wraps(f)
288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
--> 289 return self(f, *args, **kw)
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\tenacity\__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs)
377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
378 while True:
--> 379 do = self.iter(retry_state=retry_state)
380 if isinstance(do, DoAttempt):
381 try:
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\tenacity\__init__.py:314, in BaseRetrying.iter(self, retry_state)
312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
313 if not (is_explicit_retry or self.retry(retry_state)):
--> 314 return fut.result()
316 if self.after is not None:
317 self.after(retry_state)
File c:\Users\625050\Anaconda3\envs\DD\lib\concurrent\futures\_base.py:451, in Future.result(self, timeout)
449 raise CancelledError()
450 elif self._state == FINISHED:
--> 451 return self.__get_result()
453 self._condition.wait(timeout)
455 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File c:\Users\625050\Anaconda3\envs\DD\lib\concurrent\futures\_base.py:403, in Future.__get_result(self)
401 if self._exception:
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
406 self = None
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\tenacity\__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
380 if isinstance(do, DoAttempt):
381 try:
--> 382 result = fn(*args, **kwargs)
383 except BaseException: # noqa: B902
384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\langchain\chat_models\openai.py:305, in ChatOpenAI.completion_with_retry.<locals>._completion_with_retry(**kwargs)
303 @retry_decorator
304 def _completion_with_retry(**kwargs: Any) -> Any:
--> 305 return self.client.create(**kwargs)
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\openai\api_resources\chat_completion.py:25, in ChatCompletion.create(cls, *args, **kwargs)
23 while True:
24 try:
---> 25 return super().create(*args, **kwargs)
26 except TryAgain as e:
27 if timeout is not None and time.time() > start + timeout:
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py:154, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)
138 (
139 deployment_id,
140 engine,
(...)
150 api_key, api_base, api_type, api_version, organization, **params
151 )
153 print(url, headers, stream, request_id, request_timeout)
--> 154 response, a, api_key = requestor.request(
155 "post",
156 url,
157 params=params,
158 headers=headers,
159 stream=stream,
160 request_id=request_id,
161 request_timeout=request_timeout,
162 )
164 print(response, a, api_key)
166 if stream:
167 # must be an iterator
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\openai\api_requestor.py:230, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
209 def request(
210 self,
211 method,
(...)
218 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
219 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
220 result = self.request_raw(
221 method.lower(),
222 url,
(...)
228 request_timeout=request_timeout,
229 )
--> 230 resp, got_stream = self._interpret_response(result, stream)
231 return resp, got_stream, self.api_key
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\openai\api_requestor.py:624, in APIRequestor._interpret_response(self, result, stream)
616 return (
617 self._interpret_response_line(
618 line, result.status_code, result.headers, stream=True
619 )
620 for line in parse_stream(result.iter_lines())
621 ), True
622 else:
623 return (
--> 624 self._interpret_response_line(
625 result.content.decode("utf-8"),
626 result.status_code,
627 result.headers,
628 stream=False,
629 ),
630 False,
631 )
File c:\Users\625050\Anaconda3\envs\DD\lib\site-packages\openai\api_requestor.py:687, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
685 stream_error = stream and "error" in resp.data
686 if stream_error or not 200 <= rcode < 300:
--> 687 raise self.handle_error_response(
688 rbody, rcode, resp.data, rheaders, stream_error=stream_error
689 )
690 return resp
InvalidRequestError: Unrecognized request argument supplied: functions
```

### Expected behavior
An answer should be generated and no error should be thrown. | Functions might not be supported through Azure OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/6777/comments | 11 | 2023-06-26T19:28:21Z | 2024-02-21T22:18:31Z | https://github.com/langchain-ai/langchain/issues/6777 | 1,775,450,165 | 6,777 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.214
Python 3.11.1
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a `SequentialChain` that contains two `LLMChain`s, and add a memory to the first one (a minimal sketch is shown below the error).
2. When running, you'll get a validation error:
```
Missing required input keys: {'chat_history'}, only had {'human_input'} (type=value_error)
```
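A minimal sketch of such a setup (the prompts and model are illustrative):
```python
from langchain.chains import LLMChain, SequentialChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0)

# Memory attached to the first chain only.
memory = ConversationBufferMemory(memory_key="chat_history", input_key="human_input")
first_prompt = PromptTemplate(
    input_variables=["chat_history", "human_input"],
    template="{chat_history}\nHuman: {human_input}\nSummary:",
)
first_chain = LLMChain(llm=llm, prompt=first_prompt, memory=memory, output_key="summary")

second_prompt = PromptTemplate(
    input_variables=["summary"],
    template="Write a title for this summary:\n{summary}",
)
second_chain = LLMChain(llm=llm, prompt=second_prompt, output_key="title")

# Fails with: Missing required input keys: {'chat_history'}, only had {'human_input'}
overall = SequentialChain(
    chains=[first_chain, second_chain],
    input_variables=["human_input"],
    output_variables=["title"],
)
overall({"human_input": "Tell me about sparkling water."})
```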
### Expected behavior
You should be able to add memory to an individual inner chain, not just to the `SequentialChain` itself. | Can't use memory for an internal LLMChain inside a SequentialChain | https://api.github.com/repos/langchain-ai/langchain/issues/6768/comments | 0 | 2023-06-26T16:09:11Z | 2023-07-13T06:47:46Z | https://github.com/langchain-ai/langchain/issues/6768 | 1,775,129,370 | 6,768
[
"hwchase17",
"langchain"
]
| ### System Info
langchain: 0.0.215
python: 3.10.11
OS: Ubuntu 18.04
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm implementing two custom tools that return column names and the unique values in them. Their definition is as follows:
```
def get_unique_values(column_name):
all_values = metadata[column_name]
return all_values
def get_column_names():
return COLUMN_NAMES
tools = [
Tool(
name="Get all column names",
func=get_column_names,
description="Useful for getting the names of all available columns. This doesn't have any arguments as input and simply invoking it would return the list of columns",
),
Tool(
name="Get distinct values of a column",
func=get_unique_values,
description="Useful for getting distinct values of a particular column. Knowing the distinct values is important to decide if a particular column should be considered in a given context or not. The input to this function should be a string representing the column name whose unique values are needed to be found out. For example, `gender` would be the input if you wanted to know what unique values are in `gender` column.",
),
]
```
I'm using a custom agent as well, defined as follows (the definition is quite straightforward and taken from the official docs):
```
prompt = CustomPromptTemplate(
template=template,
tools=tools,
input_variables=["input", "intermediate_steps"],
)
llm_chain = LLMChain(llm=llm, prompt=prompt)
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
llm_chain=llm_chain,
output_parser=output_parser,
stop=["\nObservation:"],
allowed_tools=tool_names,
)
agent_executor = AgentExecutor.from_agent_and_tools(
agent=agent, tools=tools, verbose=True
)
```
When I provide a prompt using `agent_executor.run()`, I get the error pasted below.
Surprisingly, if I define the agent executor as follows, I don't get an error. However, it doesn't follow my prompt template (because no LLMChain is used here) and gets stuck in meaningless back-and-forth actions.
```
agent_executor = initialize_agent(
tools,
llm,
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
)
```
Error trace
```
> Entering new chain...
Traceback (most recent call last):
File "/home/user/proj/agent_exp.py", line 245, in <module>
agent_executor.run(
File "/home/user/anaconda3/envs/proj/lib/python3.10/site-packages/langchain/chains/base.py", line 290, in run
return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
File "/home/user/anaconda3/envs/proj/lib/python3.10/site-packages/langchain/chains/base.py", line 166, in __call__
raise e
File "/home/user/anaconda3/envs/proj/lib/python3.10/site-packages/langchain/chains/base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/user/anaconda3/envs/proj/lib/python3.10/site-packages/langchain/agents/agent.py", line 987, in _call
next_step_output = self._take_next_step(
File "/home/user/anaconda3/envs/proj/lib/python3.10/site-packages/langchain/agents/agent.py", line 803, in _take_next_step
raise e
File "/home/user/anaconda3/envs/proj/lib/python3.10/site-packages/langchain/agents/agent.py", line 792, in _take_next_step
output = self.agent.plan(
File "/home/user/anaconda3/envs/proj/lib/python3.10/site-packages/langchain/agents/agent.py", line 345, in plan
return self.output_parser.parse(output)
File "/home/user/proj/agent_exp.py", line 102, in parse
raise OutputParserException(f"Could not parse LLM output: `{llm_output}`")
langchain.schema.OutputParserException: Could not parse LLM output: `Thought: I need to get all the column names
Action: Get all column names`
```
### Expected behavior
The agent should have called these functions appropriately and returned a list of columns that have the string datatype | Unknown parsing error on custom tool + custom agent implementation | https://api.github.com/repos/langchain-ai/langchain/issues/6767/comments | 5 | 2023-06-26T15:20:23Z | 2023-06-26T17:15:43Z | https://github.com/langchain-ai/langchain/issues/6767 | 1,775,037,115 | 6,767 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The langchain Qdrant wrapper does not support a `vector_size` parameter, even though it is a very commonly used setting; instead it infers the size by embedding a single text. I hope it can be supported.
langchain/vectorstores/qdrant.py
```
# Just do a single quick embedding to get vector size
partial_embeddings = embedding.embed_documents(texts[:1])
vector_size = len(partial_embeddings[0])
```
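For example, something like this would avoid the extra probe embedding when the dimensionality is already known (the `vector_size` argument is the proposed part; the other values are placeholders):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Qdrant

texts = ["hello world", "goodbye world"]  # placeholder documents

qdrant = Qdrant.from_texts(
    texts,
    OpenAIEmbeddings(),
    url="http://localhost:6333",
    collection_name="my_collection",
    vector_size=1536,  # proposed: skip the probe embedding used only to infer the size
)
```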
### Suggestion:
I hope it can be supported. | Issue: Qdrant does not support the vector_size parameter, which is a very common and frequently used parameter. I hope it can be supported. | https://api.github.com/repos/langchain-ai/langchain/issues/6766/comments | 5 | 2023-06-26T15:12:51Z | 2023-10-18T16:06:58Z | https://github.com/langchain-ai/langchain/issues/6766 | 1,775,023,349 | 6,766 |
[
"hwchase17",
"langchain"
]
| ### System Info
Ubuntu 22.04.2
Python 3.10.11
langchain 0.0.215
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from pathlib import Path
from langchain.chains.router.embedding_router import EmbeddingRouterChain
from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores import Weaviate
model_dir = Path.home() / 'models'
llama_path = model_dir / 'llama-7b.ggmlv3.q4_0.bin'
encoder = LlamaCppEmbeddings(model_path=str(llama_path))
encoder.client.verbose = False
names_and_descriptions = [
("physics", ["for questions about physics"]),
("math", ["for questions about math"]),
]
router_chain = EmbeddingRouterChain.from_names_and_descriptions(names_and_descriptions, Weaviate, encoder,
weaviate_url='http://localhost:8080',
routing_keys=["input"])
```
### Expected behavior
I expect to be able to configure the underlying vector store with kwargs passed into `from_names_and_descriptions`, e.g. `weaviate_url`. | `EmbeddingRouterChain.from_names_and_descriptions` doesn't accept vectorstore kwargs | https://api.github.com/repos/langchain-ai/langchain/issues/6764/comments | 4 | 2023-06-26T14:58:12Z | 2023-10-02T16:05:14Z | https://github.com/langchain-ai/langchain/issues/6764 | 1,774,991,189 | 6,764
[
"hwchase17",
"langchain"
]
| ### System Info
langchain == 0.0.205
python == 3.10.11
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from fastapi import FastAPI, Depends, Request, Response
from typing import Any, Dict, List, Generator
import asyncio
from langchain.llms import OpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import LLMResult, HumanMessage, SystemMessage
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import StreamingResponse
from html import escape
app = FastAPI()
# Add CORS middleware
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_methods=["*"],
allow_headers=["*"],
allow_credentials=True,
)
q = asyncio.Queue()
stop_item = "###finish###"
class StreamingStdOutCallbackHandlerYield(StreamingStdOutCallbackHandler):
async def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> None:
"""Run when LLM starts running."""
# Clear the queue at the start
while not q.empty():
await q.get()
async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
"""Run on new LLM token. Only available when streaming is enabled."""
await q.put(token)
async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
"""Run when LLM ends running."""
await q.put(stop_item)
async def generate_llm_stream(prompt: str) -> Generator[str, None, None]:
llm = OpenAI(temperature=0.5, streaming=True, callbacks=[StreamingStdOutCallbackHandlerYield()])
result = await llm.agenerate([prompt])
while True:
item = await q.get()
if item == stop_item:
break
yield item
@app.get("/generate-song", status_code=200)
async def generate_song() -> Response:
prompt = "Write me a song about sparkling water."
async def event_stream() -> Generator[str, None, None]:
async for item in generate_llm_stream(prompt):
escaped_chunk = escape(item).replace("\n", "<br>").replace(" ", " ")
yield f"data:{escaped_chunk}\n\n"
return StreamingResponse(event_stream(), media_type="text/event-stream")
if __name__ == "__main__":
import uvicorn
uvicorn.run("easy:app", host="0.0.0.0", port=8000, reload=True)
```
### Expected behavior
```
Retrying langchain.llms.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.llms.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.llms.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.llms.openai.acompletion_with_retry.<locals>._completion_with_retry in 8.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.llms.openai.acompletion_with_retry.<locals>._completion_with_retry in 10.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\aiohttp\connector.py", line 980, in _wrap_create_connection
return await self._loop.create_connection(*args, **kwargs) # type: ignore[return-value] # noqa
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\asyncio\base_events.py", line 1076, in create_connection
raise exceptions[0]
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\asyncio\base_events.py", line 1060, in create_connection
sock = await self._connect_sock(
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\asyncio\base_events.py", line 969, in _connect_sock
await self.sock_connect(sock, address)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\asyncio\selector_events.py", line 501, in sock_connect
return await fut
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\asyncio\selector_events.py", line 541, in _sock_connect_cb
raise OSError(err, f'Connect call failed {address}')
TimeoutError: [Errno 10060] Connect call failed ('108.160.166.253', 443)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\openai\api_requestor.py", line 592, in arequest_raw
result = await session.request(**request_kwargs)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\aiohttp\client.py", line 536, in _request
conn = await self._connector.connect(
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\aiohttp\connector.py", line 540, in connect
proto = await self._create_connection(req, traces, timeout)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\aiohttp\connector.py", line 901, in _create_connection
_, proto = await self._create_direct_connection(req, traces, timeout)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\aiohttp\connector.py", line 1206, in _create_direct_connection
raise last_exc
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\aiohttp\connector.py", line 1175, in _create_direct_connection
transp, proto = await self._wrap_create_connection(
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\aiohttp\connector.py", line 988, in _wrap_create_connection
raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host api.openai.com:443 ssl:default [Connect call failed ('108.160.166.253', 443)]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 435, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\fastapi\applications.py", line 282, in __call__
await super().__call__(scope, receive, send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
raise exc
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\middleware\cors.py", line 91, in __call__
await self.simple_response(scope, receive, send, request_headers=headers)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\middleware\cors.py", line 146, in simple_response
await self.app(scope, receive, send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
raise exc
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 20, in __call__
raise e
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\routing.py", line 69, in app
await response(scope, receive, send)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\responses.py", line 270, in __call__
async with anyio.create_task_group() as task_group:
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\anyio\_backends\_asyncio.py", line 662, in __aexit__
raise exceptions[0]
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\responses.py", line 273, in wrap
await func()
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\starlette\responses.py", line 262, in stream_response
async for chunk in self.body_iterator:
File "C:\AI\openai\easy.py", line 61, in event_stream
async for item in generate_llm_stream(prompt):
File "C:\AI\openai\easy.py", line 47, in generate_llm_stream
result = await llm.agenerate([prompt])
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\langchain\llms\base.py", line 287, in agenerate
raise e
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\langchain\llms\base.py", line 279, in agenerate
await self._agenerate(
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\langchain\llms\openai.py", line 355, in _agenerate
async for stream_resp in await acompletion_with_retry(
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\langchain\llms\openai.py", line 120, in acompletion_with_retry
return await _completion_with_retry(**kwargs)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\tenacity\_asyncio.py", line 88, in async_wrapped
return await fn(*args, **kwargs)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\tenacity\_asyncio.py", line 47, in __call__
do = self.iter(retry_state=retry_state)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\tenacity\__init__.py", line 325, in iter
raise retry_exc.reraise()
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\tenacity\__init__.py", line 158, in reraise
raise self.last_attempt.result()
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\concurrent\futures\_base.py", line 451, in result
return self.__get_result()
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\concurrent\futures\_base.py", line 403, in __get_result
raise self._exception
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\tenacity\_asyncio.py", line 50, in __call__
result = await fn(*args, **kwargs)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\langchain\llms\openai.py", line 118, in _completion_with_retry
return await llm.client.acreate(**kwargs)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\openai\api_resources\completion.py", line 45, in acreate
return await super().acreate(*args, **kwargs)
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 217, in acreate
response, _, api_key = await requestor.arequest(
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\openai\api_requestor.py", line 304, in arequest
result = await self.arequest_raw(
File "C:\Users\jhrsya\Anaconda3\envs\langchain\lib\site-packages\openai\api_requestor.py", line 609, in arequest_raw
raise error.APIConnectionError("Error communicating with OpenAI") from e
openai.error.APIConnectionError: Error communicating with OpenAI
``` | Stream a response from LangChain's OpenAI with python fastapi | https://api.github.com/repos/langchain-ai/langchain/issues/6762/comments | 1 | 2023-06-26T14:40:55Z | 2023-10-02T16:05:19Z | https://github.com/langchain-ai/langchain/issues/6762 | 1,774,957,722 | 6,762
[
"hwchase17",
"langchain"
]
| ### System Info
Adding memory to an LLMChain with OpenAI functions enabled fails because `AIMessage`s are generated instead of `FunctionMessage`s:
```python
self.add_message(AIMessage(content=message))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/pas/development/advanced-stack/sandbox/sublime-plugin-maker/venv/lib/python3.11/site-packages/langchain/load/serializable.py", line 74, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for AIMessage
content
str type expected (type=type_error.str)
```
where `AIMessage.content` is in fact a dict.
Tested version : `0.0.215`
### Who can help?
@dev2049
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a LLMChain with both functions and memory
2. Enjoy :)
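Roughly, a minimal repro looks like this (a sketch only — the function schema is made up, and the exact way `functions` is forwarded may differ):

```python
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import ChatPromptTemplate

# Made-up function schema, just to trigger a function call.
functions = [{
    "name": "get_weather",
    "description": "Get the weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

chain = LLMChain(
    llm=ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0),
    prompt=ChatPromptTemplate.from_template("{history}\nUser: {input}"),
    memory=ConversationBufferMemory(memory_key="history"),
    llm_kwargs={"functions": functions},
)

# When the model answers with a function call, saving the reply to memory fails
# with the ValidationError shown above.
chain.run(input="What's the weather in Paris?")
```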
### Expected behavior
When an LLMChain is created with `functions` and memory, a `FunctionMessage` should be generated, and dict content should be accepted (or JSON-dumped). | ConversationBufferMemory fails to capture OpenAI functions messages in LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/6761/comments | 12 | 2023-06-26T14:36:44Z | 2024-06-20T15:48:16Z | https://github.com/langchain-ai/langchain/issues/6761 | 1,774,947,874 | 6,761
[
"hwchase17",
"langchain"
]
| ### System Info
There is a lack of support for the streaming option with AzureOpenAI, even though, as the following article shows (https://thivy.hashnode.dev/streaming-response-with-azure-openai), the official API on Azure's side supports it.
Specifically, when trying to use the streaming argument with AzureOpenAI, we receive the following error (with the gpt-35-turbo model, which works when calling Azure directly):
`InvalidRequestError: logprobs, best_of and echo parameters are not available on gpt-35-turbo model. Please remove the parameter and try again. For more details, see https://go.microsoft.com/fwlink/?linkid=2227346.`
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Run the following code in a notebook:
```
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import (
ConversationalRetrievalChain,
LLMChain
)
from langchain.chains.question_answering import load_qa_chain
from langchain.prompts.prompt import PromptTemplate
from langchain.llms import AzureOpenAI
from langchain.chat_models import AzureChatOpenAI
template = """Given the following chat history and a follow up question, rephrase the follow up input question to be a standalone question.
Or end the conversation if it seems like it's done.
Chat History:\"""
{chat_history}
\"""
Follow Up Input: \"""
{question}
\"""
Standalone question:"""
condense_question_prompt = PromptTemplate.from_template(template)
template = """You are a friendly, conversational retail shopping assistant. Use the following context including product names, descriptions, and keywords to show the shopper whats available, help find what they want, and answer any questions.
It's ok if you don't know the answer.
Context:\"""
{context}
\"""
Question:\"""
{question}
\"""
Helpful Answer:"""
qa_prompt= PromptTemplate.from_template(template)
deployment_name = "gpt-35-turbo"
llm = AzureOpenAI(deployment_name=deployment_name, temperature=0)
streaming_llm = AzureOpenAI(
streaming=True,
callback_manager=CallbackManager([
StreamingStdOutCallbackHandler()]),
verbose=True,
engine=deployment_name,
temperature=0.2,
max_tokens=150
)
# use the LLM Chain to create a question creation chain
question_generator = LLMChain(
llm=llm,
prompt=condense_question_prompt
)
# use the streaming LLM to create a question answering chain
doc_chain = load_qa_chain(
llm=streaming_llm,
#llm=llm,
chain_type="stuff",
prompt=qa_prompt
)
chatbot = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(),
combine_docs_chain=doc_chain,
question_generator=question_generator
)
# create a chat history buffer
chat_history = []
# gather user input for the first question to kick off the bot
question = input("Hi! What are you looking for today?")
# keep the bot running in a loop to simulate a conversation
while True:
result = chatbot(
{"question": question, "chat_history": chat_history}
)
print("\n")
chat_history.append((result["question"], result["answer"]))
question = input()
```
Engine is "gpt-35-turbo" and the correct environment variables are provided.
2. Input anything in the input field.
3. Error Occurs.
Results in the following error:
`InvalidRequestError: logprobs, best_of and echo parameters are not available on gpt-35-turbo model. Please remove the parameter and try again. For more details, see https://go.microsoft.com/fwlink/?linkid=2227346.`
### Expected behavior
To execute without error.
### P.S.
This is my first bug report on GitHub, so if anything is missing please let me know. | Streaming Support For AzureOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/6760/comments | 7 | 2023-06-26T13:44:29Z | 2024-01-30T00:42:44Z | https://github.com/langchain-ai/langchain/issues/6760 | 1,774,817,963 | 6,760 |
[
"hwchase17",
"langchain"
]
| ### System Info
Windows 11,
python==3.10.5
langchain==0.0.215
openai==0.27.8
faiss-cpu==1.7.4
### Who can help?
@zeke
@sbusso
@deepblue
When I run this example code:
https://python.langchain.com/docs/modules/model_io/models/chat/integrations/azure_chat_openai
I get the following error...
"InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again."
The code works if you downgrade langchain to 0.0.132
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Follow the instructions in the notebook.
https://python.langchain.com/docs/modules/model_io/models/chat/integrations/azure_chat_openai
### Expected behavior
Get a response. | AzureChatOpenAI: InvalidRequestError | https://api.github.com/repos/langchain-ai/langchain/issues/6759/comments | 4 | 2023-06-26T13:32:27Z | 2023-10-05T16:07:56Z | https://github.com/langchain-ai/langchain/issues/6759 | 1,774,795,023 | 6,759 |
[
"hwchase17",
"langchain"
]
| https://github.com/hwchase17/langchain/blob/1742db0c3076772db652c747df1524cd07695f51/langchain/vectorstores/faiss.py#L458
The construction method linked above can set `normalize_L2` for un-normalized embeddings.
https://github.com/hwchase17/langchain/blob/1742db0c3076772db652c747df1524cd07695f51/langchain/vectorstores/faiss.py#L588
but `load_local` has no `normalize_L2` argument, so saving and re-loading an index can't work as expected.
Please add a `normalize_L2` argument to `load_local`, or persist the `normalize_L2` flag in the cache file and restore it when `load_local` is called.
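For illustration, a possible work-around is to rebuild the store by hand, roughly like this (a sketch that assumes the on-disk layout `save_local` uses — an `{index_name}.faiss` file plus a pickled `(docstore, index_to_docstore_id)` tuple):

```python
import pickle
from pathlib import Path

import faiss  # faiss-cpu
from langchain.vectorstores import FAISS


def load_local_with_norm(folder_path: str, embeddings, index_name: str = "index",
                         normalize_L2: bool = False) -> FAISS:
    """Re-implementation of load_local that restores the normalize_L2 flag."""
    path = Path(folder_path)
    index = faiss.read_index(str(path / f"{index_name}.faiss"))
    with open(path / f"{index_name}.pkl", "rb") as f:
        docstore, index_to_docstore_id = pickle.load(f)
    return FAISS(embeddings.embed_query, index, docstore, index_to_docstore_id,
                 normalize_L2=normalize_L2)
```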
| vectorstores/faiss.py load_local can't set normalize_L2 | https://api.github.com/repos/langchain-ai/langchain/issues/6758/comments | 4 | 2023-06-26T12:26:30Z | 2023-10-19T16:06:58Z | https://github.com/langchain-ai/langchain/issues/6758 | 1,774,668,475 | 6,758 |
[
"hwchase17",
"langchain"
]
| ### System Info
- Langchain: 0.0.215
- Platform: ubuntu
- Python 3.10.12
### Who can help?
@vowelparrot
https://github.com/hwchase17/langchain/blob/d84a3bcf7ab3edf8fe1d49083e066d51c9b5f621/langchain/agents/initialize.py#L54
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Fails if agent initialized as follows:
```python
agent = initialize_agent(
agent='zero-shot-react-description',
tools=tools,
llm=llm,
verbose=True,
max_iterations=30,
memory=ConversationBufferMemory(),
handle_parsing_errors=True)
```
With
```
...
lib/python3.10/site-packages/langchain/agents/initialize.py", line 54, in initialize_agent
tags_.append(agent.value)
AttributeError: 'str' object has no attribute 'value'
```
### Expected behavior
Expected to work as before where agent is specified as a string (or if this is highlighting that agent should actually be an object, it should indicate that instead of the error being shown). | Recent tags change causes AttributeError: 'str' object has no attribute 'value' on initialize_agent call | https://api.github.com/repos/langchain-ai/langchain/issues/6756/comments | 4 | 2023-06-26T11:00:29Z | 2023-06-27T02:03:29Z | https://github.com/langchain-ai/langchain/issues/6756 | 1,774,503,627 | 6,756 |
[
"hwchase17",
"langchain"
]
| ### System Info
Ubuntu 22.04.2
Python 3.10.11
langchain 0.0.215
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from pathlib import Path
from langchain import LlamaCpp
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.document_loaders import UnstructuredMarkdownLoader
from langchain.embeddings import LlamaCppEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Weaviate
model_dir = Path.home() / 'models'
llama_path = model_dir / 'llama-7b.ggmlv3.q4_0.bin'
assert llama_path.exists()
encoder = LlamaCppEmbeddings(model_path=str(llama_path))
encoder.client.verbose = False
readme_path = Path(__file__).parent.parent / 'README.md'
loader = UnstructuredMarkdownLoader(str(readme_path))
data = loader.load()
text_splitter = CharacterTextSplitter(
separator="\n\n",
chunk_size=10,
chunk_overlap=2,
length_function=len,
)
texts = text_splitter.split_documents(data)
db = Weaviate.from_documents(texts, encoder, weaviate_url='http://localhost:8080', by_text=False)
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(model_path=str(llama_path), temperature=0, callback_manager=callback_manager, stop=[], verbose=False)
llm.client.verbose = False
chain = RetrievalQAWithSourcesChain.from_chain_type(llm, chain_type="stuff", retriever=db.as_retriever(),
reduce_k_below_max_tokens=True, max_tokens_limit=512)
query = "How do I install this package?"
chain({"question": query})
```
### Expected behavior
When setting `max_tokens_limit`, I expect it to be the limit for the final prompt passed to the LLM.
Seeing this error message is very confusing after checking that the question and the loaded source documents do not reach the token limit.
When `BaseQAWithSourcesChain.from_llm` is called, it uses a long `combine_prompt_template` by default, which in the case of LlamaCpp is already over the token limit.
I would expect `max_tokens_limit` to apply to the full prompt, or to receive an error message explaining that the limit was breached because of the template, and ideally an example of how to alter the template | `RetrievalQAWithSourcesChain` with `max_tokens_limit` throws error `Requested tokens exceed context window` | https://api.github.com/repos/langchain-ai/langchain/issues/6754/comments | 1 | 2023-06-26T10:47:36Z | 2023-10-02T16:05:29Z | https://github.com/langchain-ai/langchain/issues/6754 | 1,774,480,128 | 6,754 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.9.6
_**Requirement**_
aiohttp==3.8.4
aiosignal==1.3.1
async-timeout==4.0.2
attrs==23.1.0
certifi==2023.5.7
charset-normalizer==3.1.0
dataclasses-json==0.5.8
docopt==0.6.2
frozenlist==1.3.3
idna==3.4
langchain==0.0.215
langchainplus-sdk==0.0.17
marshmallow==3.19.0
marshmallow-enum==1.5.1
multidict==6.0.4
mypy-extensions==1.0.0
numexpr==2.8.4
numpy==1.25.0
openai==0.27.8
openapi-schema-pydantic==1.2.4
packaging==23.1
pipreqs==0.4.13
pydantic==1.10.9
PyYAML==6.0
requests==2.31.0
SQLAlchemy==2.0.17
tenacity==8.2.2
tqdm==4.65.0
typing-inspect==0.9.0
typing_extensions==4.6.3
urllib3==2.0.3
yarg==0.1.9
yarl==1.9.2
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Endpoint:
**URL**: https://api.gopluslabs.io//api/v1/token_security/{chain_id}
**arguments**:
<img width="593" alt="image" src="https://github.com/hwchase17/langchain/assets/24714804/b2d1226d-2d0b-45d8-9400-7f420d463f38">
In openapi.py: 160
<img width="596" alt="image" src="https://github.com/hwchase17/langchain/assets/24714804/06d70353-47de-4e32-90c3-7f5f6c316047">
After `_format_url`, the URL doesn't change; the printed result is still https://api.gopluslabs.io//api/v1/token_security/{chain_id}
### Expected behavior
Expect URL and path parameters to be properly combined after formatting like:
**https://api.gopluslabs.io//api/v1/token_security/1**
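For illustration, the combination I expect would behave like this simplified sketch (not the actual implementation in openapi.py):

```python
def format_url(url: str, path_params: dict) -> str:
    # Expected behaviour: substitute the named path parameters into the template.
    return url.format(**path_params)


print(format_url("https://api.gopluslabs.io//api/v1/token_security/{chain_id}",
                 {"chain_id": 1}))
# -> https://api.gopluslabs.io//api/v1/token_security/1
```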
| path_params and url format not work | https://api.github.com/repos/langchain-ai/langchain/issues/6753/comments | 1 | 2023-06-26T10:39:55Z | 2023-10-02T16:05:34Z | https://github.com/langchain-ai/langchain/issues/6753 | 1,774,468,457 | 6,753 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
_No response_
### Suggestion:
_No response_ | from langchain.utilities import RequestsWrapper ImportError: cannot import name 'RequestsWrapper' from 'langchain.utilities'Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/6752/comments | 2 | 2023-06-26T10:13:50Z | 2023-10-02T16:05:39Z | https://github.com/langchain-ai/langchain/issues/6752 | 1,774,423,676 | 6,752 |
[
"hwchase17",
"langchain"
]
| ### System Info
Ubuntu 22.04.2
Python 3.10.11
langchain 0.0.215
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from pathlib import Path
from langchain.document_loaders import UnstructuredMarkdownLoader
from langchain.embeddings import LlamaCppEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Weaviate
model_dir = Path.home() / 'models'
llama_path = model_dir / 'llama-7b.ggmlv3.q4_0.bin'
encoder = LlamaCppEmbeddings(model_path=str(llama_path))
readme_path = Path(__file__).parent.parent / 'README.md'
loader = UnstructuredMarkdownLoader(readme_path)
data = loader.load()
text_splitter = CharacterTextSplitter(
separator="\n\n",
chunk_size=10,
chunk_overlap=2,
length_function=len,
)
texts = text_splitter.split_documents(data)
db = Weaviate.from_documents(texts, encoder, weaviate_url='http://localhost:8080', by_text=False)
```
### Expected behavior
The `UnstructuredMarkdownLoader` loads the metadata source as a `PosixPath` object.
`Weaviate.from_documents` then throws an error because it can't post this metadata, as `PosixPath` is not JSON serializable.
If I change `loader = UnstructuredMarkdownLoader(readme_path)` to `loader = UnstructuredMarkdownLoader(str(readme_path))`, the metadata is loaded as a string and the posting to Weaviate works.
I would expect `UnstructuredMarkdownLoader` to behave the same whether I pass it a string or a path-like object.
I would expect Weaviate to handle serialising a path-like object to a string.
| Weaviate.from_documents throws PosixPath is not JSON serializable when documents loaded via Pathlib | https://api.github.com/repos/langchain-ai/langchain/issues/6751/comments | 2 | 2023-06-26T09:48:43Z | 2023-10-02T16:05:44Z | https://github.com/langchain-ai/langchain/issues/6751 | 1,774,373,240 | 6,751 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The Faiss index is too large, the loading time is very long, and the experience is not good. Is there any way to optimize it?
### Suggestion:
_No response_ | Is there a way to improve the faiss index loading speed? | https://api.github.com/repos/langchain-ai/langchain/issues/6749/comments | 2 | 2023-06-26T09:17:14Z | 2023-11-02T09:53:30Z | https://github.com/langchain-ai/langchain/issues/6749 | 1,774,318,073 | 6,749 |
[
"hwchase17",
"langchain"
]
| Hello everyone.
Oddly enough, I've recently run into a problem with memory.
In the first version, I had no issues, but now it has stopped working. It's as though my agent has Alzheimer's disease.
Does anyone have any suggestions as to why it might have stopped working?
There doesn't seem to be any error message or any apparent reason. Thank you!
I already tried reinstalling chromadb.
```
def agent(tools):
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
agent_kwargs = {
"user name is bob": [MessagesPlaceholder(variable_name="chat_history")],
}
template = """This is a conversation between a human and a bot:
{chat_history}
Write a summary of the conversation for {input}:
"""
prompt = PromptTemplate(input_variables=["input", "chat_history"], template=template)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
#agent_chain = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
agent_chain=initialize_agent(tools, llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, prompt = prompt, verbose=True, agent_kwargs=agent_kwargs, memory=memory, max_iterations=5, early_stopping_method="generate")
return agent_chain
```
Perhaps this error is related:
```
WARNING:root:Failed to load default session, using empty session: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
Failed to load default session, using empty session: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
```
```
WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1458a82770>: Failed to establish a new connection: [Errno 111] Connection refused'))
Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1458a82770>: Failed to establish a new connection: [Errno 111] Connection refused'))
I'm sorry, but I don't have access to personal information.
```
| Issue: ConversationBufferMemory stopped working | https://api.github.com/repos/langchain-ai/langchain/issues/6748/comments | 10 | 2023-06-26T09:14:44Z | 2023-10-07T16:06:05Z | https://github.com/langchain-ai/langchain/issues/6748 | 1,774,313,253 | 6,748 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I would like to perform a query on a database using natural language. However, running direct queries is not possible, and I have to do it via an API. For that, given a sentence, I'd like to extract some custom entities from it.
For example, if the sentence is: "How many more than 20 years old male users viewed a page or logged in in the last 30 days?"
The entities are:
```
<gender, equals, male>,
<age, greater than, 20>,
<event name, equals, view page>,
<event name, equals, login>,
<event timestamp, more than, 30 days>
```
- The first element of each entity (triplet) comes from the list of columns.
- The second element (the operator) is inferred from context (e.g. whether the comparison is against a single value or an array).
- The third element is also inferred from the context and must belong to the chosen column (the first element).
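For concreteness, this is the kind of structured output I am after (a hypothetical schema, purely to illustrate the triplets):

```python
from typing import List, Union
from pydantic import BaseModel


class Condition(BaseModel):
    column: str              # must be one of the table's columns, e.g. "gender"
    operator: str            # e.g. "equals", "greater than", "more than"
    value: Union[int, str]   # must be a valid value for the chosen column


class ExtractedQuery(BaseModel):
    conditions: List[Condition]


example = ExtractedQuery(conditions=[
    Condition(column="gender", operator="equals", value="male"),
    Condition(column="age", operator="greater than", value=20),
    Condition(column="event timestamp", operator="more than", value="30 days"),
])
```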
I'm not able to restrict either of these elements for the entity. I'd like an agent first to check all the columns that are available, choose one and view their unique values. Once it gets that, either choose that column (first element) and value (third element) or look again and repeat these steps.
Any help on this would be great!
### Suggestion:
_No response_ | Issue: Entity extraction using custom rules | https://api.github.com/repos/langchain-ai/langchain/issues/6747/comments | 9 | 2023-06-26T08:13:23Z | 2023-10-05T16:08:07Z | https://github.com/langchain-ai/langchain/issues/6747 | 1,774,202,161 | 6,747 |
[
"hwchase17",
"langchain"
]
|
LLM Chinese application and technology discussion WeChat group. If the QR code has expired, you can add WeChat ID yydsa0007 with the note "LLM应用" (LLM application).

| LLM中文应用交流微信群 | https://api.github.com/repos/langchain-ai/langchain/issues/6745/comments | 1 | 2023-06-26T07:53:56Z | 2024-01-23T15:40:33Z | https://github.com/langchain-ai/langchain/issues/6745 | 1,774,162,267 | 6,745 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version: the latest, installed via `!pip install langchain`.
```python
import nest_asyncio
nest_asyncio.apply()

from langchain.document_loaders.sitemap import SitemapLoader

SitemapLoader.requests_per_second = 2
# Optional: avoid `[SSL: CERTIFICATE_VERIFY_FAILED]` issue
# SitemapLoader.requests_kwargs = {'verify': True}

loader = SitemapLoader(
    "https://www.infoblox.com/sitemap_index.xml",
)
docs = loader.load()  # this is where it fails; it worked with langchain==0.0.189
```
```
TypeError Traceback (most recent call last)
[<ipython-input-5-609988fd11f7>](https://localhost:8080/#) in <cell line: 13>()
11 "https://www.infoblox.com/sitemap_index.xml",
12 )
---> 13 docs = loader.load()
2 frames
[/usr/local/lib/python3.10/dist-packages/langchain/document_loaders/web_base.py](https://localhost:8080/#) in _scrape(self, url, parser)
186 self._check_parser(parser)
187
--> 188 html_doc = self.session.get(url, verify=self.verify, **self.requests_kwargs)
189 html_doc.encoding = html_doc.apparent_encoding
190 return BeautifulSoup(html_doc.text, parser)
TypeError: requests.sessions.Session.get() got multiple values for keyword argument 'verify'
```
This was working in the older version installed via `!pip install langchain==0.0.189`.
Thanks,
Nick
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Follow the code above.
### Expected behavior
It should run without crashing. | SitemapLoader is not working verify error from module | https://api.github.com/repos/langchain-ai/langchain/issues/6744/comments | 1 | 2023-06-26T07:02:41Z | 2023-10-02T16:06:04Z | https://github.com/langchain-ai/langchain/issues/6744 | 1,774,074,429 | 6,744
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version: 0.0.214
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Same as [Supabase docs](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/supabase) with the exact table and function created, where `id` column has type of bigint.
```Python
from supabase.client import Client, create_client
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import SupabaseVectorStore
loader = TextLoader("../../../state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
supabase_client: Client = create_client(
supabase_url=os.getenv("SUPABASE_URL"),
supabase_key=os.getenv("SUPABASE_SERVICE_KEY"),
)
supabase_vector_store = SupabaseVectorStore.from_documents(
documents=docs,
client=supabase_client,
embedding=embeddings,
)
```
Got
> APIError: {'code': '22P02', 'details': None, 'hint': None, 'message': 'invalid input syntax for type bigint: "64f03aff-0c0e-4f24-91e2-e01fcaxxxxxx"'}
### Expected behavior
Successfully insert and embed the split docs into Supabase.
To be helpful, I believe it was introduced by [this commit](https://github.com/hwchase17/langchain/commit/be02572d586bcb33fffe89c37b81d5ba26762bec) regarding the List[str] type `ids`. But not sure if it was intended with docs not updated yet, or otherwise. | id type in SupabaseVectorStore doesn't match SQL column | https://api.github.com/repos/langchain-ai/langchain/issues/6743/comments | 9 | 2023-06-26T06:17:47Z | 2023-10-10T03:06:10Z | https://github.com/langchain-ai/langchain/issues/6743 | 1,773,971,733 | 6,743 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain-0.0.207, Windows, Python-3.9.16
Memory (VectorStoreRetrieverMemory) Settings:
```python
dimension = 768
index = faiss.IndexFlatL2(dimension)
embeddings = HuggingFaceEmbeddings()
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})
retriever = vectorstore.as_retriever(search_kwargs=dict(k=3))
memory = VectorStoreRetrieverMemory(
    memory_key="chat_history",
    return_docs=True,
    retriever=retriever,
    return_messages=True,
)
```
ConversationalRetrievalChain Settings:
```python
_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\
Make sure to avoid using any unclear pronouns.
Chat History:
{chat_history}
(You do not need to use these pieces of information if not relevant)
Follow Up Input: {question}
Standalone question:"""
CONDENSE_QUESTION_PROMPT = PromptTemplate(
    input_variables=["chat_history", "question"], template=_template
)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_with_sources_chain(
    llm, chain_type="refine"
)  # stuff map_reduce refine
qa = ConversationalRetrievalChain(
    retriever=chroma.as_retriever(search_kwargs=dict(k=5)),
    memory=memory,
    combine_docs_chain=doc_chain,
    question_generator=question_generator,
    return_source_documents=True,
    # verbose=True,
)
responses = qa({"question": user_input})
```
ISSUE: On the first call it returns results, BUT on the second call it throws the following error:
`ValueError: Unsupported chat history format: <class 'langchain.schema.Document'>. Full chat history:`
What am I doing wrong here?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Set up the memory (VectorStoreRetrieverMemory) exactly as shown in the System Info section above.
2. Set up the ConversationalRetrievalChain exactly as shown above and call `qa({"question": user_input})` more than once.

ISSUE: On the first call it returns results, BUT on the second call it throws the following error:
`ValueError: Unsupported chat history format: <class 'langchain.schema.Document'>. Full chat history:`
What am I doing wrong here?
### Expected behavior
Chat history should be injected in chain | Getting "ValueError: Unsupported chat history format:" while using ConversationalRetrievalChain with memory type VectorStoreRetrieverMemory | https://api.github.com/repos/langchain-ai/langchain/issues/6741/comments | 8 | 2023-06-26T04:47:59Z | 2023-10-24T16:07:18Z | https://github.com/langchain-ai/langchain/issues/6741 | 1,773,840,557 | 6,741 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain Version: 0.0.74
openai Version: 0.27.8
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.llms import OpenAI

llm = OpenAI(openai_api_key=INSERT_API_KEY, temperature=0.9)
llm.predict("What would be a good company name for a company that makes colorful socks?")
```
### Expected behavior
I was following the langchain quickstart guide, expected to see something similar to the output. | AttributeError: 'OpenAI' object has no attribute 'predict' | https://api.github.com/repos/langchain-ai/langchain/issues/6740/comments | 9 | 2023-06-26T04:01:06Z | 2023-08-31T10:13:23Z | https://github.com/langchain-ai/langchain/issues/6740 | 1,773,789,387 | 6,740 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am looking for help with resuming a LangChain Refine chain from a saved checkpoint. None of my runs completed because of GPT API overload errors. Any help would be appreciated.
### Suggestion:
_No response_ | Issue: Continue from a saved checkpoint | https://api.github.com/repos/langchain-ai/langchain/issues/6733/comments | 4 | 2023-06-26T02:17:33Z | 2024-01-30T00:44:50Z | https://github.com/langchain-ai/langchain/issues/6733 | 1,773,678,248 | 6,733 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain:0.0.215
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.indexes import VectorstoreIndexCreator
from langchain.embeddings import OpenAIEmbeddings
index = VectorstoreIndexCreator(embedding=OpenAIEmbeddings('gpt2')).from_loaders([loader])
```
error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[6], line 3
1 from langchain.indexes import VectorstoreIndexCreator
2 from langchain.embeddings import OpenAIEmbeddings
----> 3 index = VectorstoreIndexCreator(embedding=OpenAIEmbeddings('gpt2')).from_loaders([loader])
File [~/micromamba/envs/openai/lib/python3.11/site-packages/pydantic/main.py:332], in pydantic.main.BaseModel.__init__()
TypeError: __init__() takes exactly 1 positional argument (2 given)
```
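For what it's worth, I believe `OpenAIEmbeddings` only accepts keyword arguments (it is a pydantic model), so presumably the intended call looks like this (reusing the `loader` from the snippet above):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.indexes import VectorstoreIndexCreator

# No positional argument: 'gpt2' is not a valid way to choose the embedding model.
index = VectorstoreIndexCreator(embedding=OpenAIEmbeddings()).from_loaders([loader])
```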
### Expected behavior
no | TypeError: __init__() takes exactly 1 positional argument (2 given) | https://api.github.com/repos/langchain-ai/langchain/issues/6730/comments | 3 | 2023-06-25T23:58:49Z | 2023-06-26T23:52:15Z | https://github.com/langchain-ai/langchain/issues/6730 | 1,773,575,506 | 6,730 |
[
"hwchase17",
"langchain"
]
| ### System Info
no
### Who can help?
@hwchase17 @agola11
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [x] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
when running the following code:
```
from langchain.embeddings import OpenAIEmbeddings
embedding_model = OpenAIEmbeddings()
embeddings = embedding_model.embed_documents(
[
"Hi there!",
"Oh, hello!",
"What's your name?",
"My friends call me World",
"Hello World!"
]
)
```
such errors occurred:
```
ValueError Traceback (most recent call last)
Cell In[9], line 1
----> 1 embeddings = embedding_model.embed_documents(
2 [
3 "Hi there!",
4 "Oh, hello!",
5 "What's your name?",
6 "My friends call me World",
7 "Hello World!"
8 ]
9 )
10 len(embeddings), len(embeddings[0])
File ~/micromamba/envs/openai/lib/python3.11/site-packages/langchain/embeddings/openai.py:305, in OpenAIEmbeddings.embed_documents(self, texts, chunk_size)
293 """Call out to OpenAI's embedding endpoint for embedding search docs.
294
295 Args:
(...)
301 List of embeddings, one for each text.
302 """
303 # NOTE: to keep things simple, we assume the list may contain texts longer
304 # than the maximum context and use length-safe embedding function.
--> 305 return self._get_len_safe_embeddings(texts, engine=self.deployment)
File ~/micromamba/envs/openai/lib/python3.11/site-packages/langchain/embeddings/openai.py:225, in OpenAIEmbeddings._get_len_safe_embeddings(self, texts, engine, chunk_size)
223 tokens = []
224 indices = []
--> 225 encoding = tiktoken.get_encoding(self.model)
226 for i, text in enumerate(texts):
227 if self.model.endswith("001"):
228 # See: https://github.com/openai/openai-python/issues/418#issuecomment-1525939500
229 # replace newlines, which can negatively affect performance.
File ~/micromamba/envs/openai/lib/python3.11/site-packages/tiktoken/registry.py:60, in get_encoding(encoding_name)
57 assert ENCODING_CONSTRUCTORS is not None
59 if encoding_name not in ENCODING_CONSTRUCTORS:
---> 60 raise ValueError(f"Unknown encoding {encoding_name}")
62 constructor = ENCODING_CONSTRUCTORS[encoding_name]
63 enc = Encoding(**constructor())
ValueError: Unknown encoding text-embedding-ada-002
```
How do I fix it?
### Expected behavior
no | ValueError: Unknown encoding text-embedding-ada-002 | https://api.github.com/repos/langchain-ai/langchain/issues/6726/comments | 6 | 2023-06-25T23:08:34Z | 2024-01-28T23:15:37Z | https://github.com/langchain-ai/langchain/issues/6726 | 1,773,551,290 | 6,726 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain: 0.0.215
platform: ubuntu 22.04 LTS
python: 3.10
### Who can help?
@eyurtsev :)
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://colab.research.google.com/drive/1vraycYUuF-BZ0UA8EoUT0hPt15HoeGPV?usp=sharing
### Expected behavior
The first result of DuckDuckGoSearch to be returned in the `get_snippets` and `results` methods of the DuckDuckGoSearchAPIWrapper. | DuckDuckGoSearchAPIWrapper Consumes results w/o returning them | https://api.github.com/repos/langchain-ai/langchain/issues/6724/comments | 2 | 2023-06-25T22:54:36Z | 2023-06-26T13:58:17Z | https://github.com/langchain-ai/langchain/issues/6724 | 1,773,540,838 | 6,724 |
[
"hwchase17",
"langchain"
]
| ### System Info
colab
### Who can help?
@hwchase17 Hi, I am trying to use context-aware text splitting and QA / Chat.
The code doesn't work, starting at the vector index step:
```python
# Build vectorstore and keep the metadata
from langchain.vectorstores import Chroma

vectorstore = Chroma.from_documents(texts=all_splits, metadatas=all_metadatas, embedding=OpenAIEmbeddings())
```
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am only following the notebook.
### Expected behavior
it should work | https://python.langchain.com/docs/use_cases/question_answering/document-context-aware-QA code is not working, | https://api.github.com/repos/langchain-ai/langchain/issues/6723/comments | 8 | 2023-06-25T21:07:51Z | 2023-10-12T16:08:27Z | https://github.com/langchain-ai/langchain/issues/6723 | 1,773,490,075 | 6,723 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Currently, the API chain takes the result from the API and gives it directly to an LLM chain. If the result is larger than the context length of the LLM, the chain will raise an error. To address this, I propose that the chain should split the API result using a text splitter and then give the chunks to a combine-documents chain that answers the question.
### Motivation
I have found that certain questions given to the API chain produce API results that exceed the context length of the standard OpenAI model. For example:
```python
from langchain.chains import APIChain
from langchain.prompts.prompt import PromptTemplate
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
from langchain.chains.api import open_meteo_docs
chain_new = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS, verbose=True)
chain_new.run('What is the weather of [latitude:52.52, longitude:13.419998]?')
```
The LLM creates this URL: https://api.open-meteo.com/v1/forecast?latitude=52.52&longitude=13.419998&hourly=temperature_2m,weathercode,snow_depth,freezinglevel_height,visibility. The content returned from this URL is quite large; it causes the prompt to have 6382 tokens, which is larger than the max context length. I think the chain should be able to handle larger responses.
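A rough sketch of the idea (the helper name is made up and this is not how APIChain currently works):

```python
from langchain.chains.question_answering import load_qa_chain
from langchain.docstore.document import Document
from langchain.text_splitter import RecursiveCharacterTextSplitter


def answer_from_api_response(llm, api_response: str, question: str) -> str:
    # Split the (potentially huge) API response into chunks that fit the context window.
    splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=100)
    docs = [Document(page_content=chunk) for chunk in splitter.split_text(api_response)]
    # Let a combine-documents chain answer the question over the chunks.
    qa_chain = load_qa_chain(llm, chain_type="map_reduce")
    return qa_chain.run(input_documents=docs, question=question)
```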
### Your contribution
I can try to submit a PR to implement this solution. | Handling larger responses in the API chain | https://api.github.com/repos/langchain-ai/langchain/issues/6722/comments | 4 | 2023-06-25T20:44:41Z | 2024-01-30T09:20:22Z | https://github.com/langchain-ai/langchain/issues/6722 | 1,773,482,364 | 6,722 |
[
"hwchase17",
"langchain"
]
| ### System Info
windows 11, python 3.9.16 , langchain-0.0.215
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The loaded documents successfully get indexed in FAISS, Deep Lake and Pinecone, so I guess the issue is not there. When using Chroma.from_documents the following error is thrown:
```
File ~\...langchain\vectorstores\chroma.py:149 in add_texts
self._collection.upsert(
AttributeError: 'Collection' object has no attribute 'upsert'
```
Code:
```python
with open('docs.pkl', 'rb') as file:
documents= pickle.load(file)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=200)
docs = text_splitter.split_documents(documents)
chroma_dir = base_dir + "ChromaDB" + "_" + str(chunk_size) + "_" + str(overlap)
db2 = Chroma.from_documents(docs, embeddings, persist_directory=chroma_dir)
```
Uploaded the pickle zipped, as .pkl is not supported for upload.
[docs.zip](https://github.com/hwchase17/langchain/files/11860249/docs.zip)
### Expected behavior
Index the docs | ChromaDB Chroma.from_documents error: AttributeError: 'Collection' object has no attribute 'upsert' | https://api.github.com/repos/langchain-ai/langchain/issues/6721/comments | 2 | 2023-06-25T16:40:21Z | 2023-06-25T17:48:54Z | https://github.com/langchain-ai/langchain/issues/6721 | 1,773,371,161 | 6,721 |
[
"hwchase17",
"langchain"
]
| Hi teams,
Can I store each embedding with a tag or some key? I want users with different authorizations to be able to query only specific document vectors.
I was trying to use Redis as the vector store. Would it be a recommended vector store if I want to implement this kind of authentication?
I'll embed my documents into the vector store, and each embedding has its own permission. I'll also have users with different permissions. A user who is granted permission for a document can query that document's vectors.
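A minimal sketch of the work-around I have in mind, assuming metadata round-trips through the store (this is only client-side filtering, not real access control, and the exact constructor arguments may differ):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Redis

# Store the permission key alongside each document's embedding.
texts = ["Docs_A content", "Docs_B content", "Docs_C content"]
metadatas = [{"key": "A"}, {"key": "B"}, {"key": "C"}]
rds = Redis.from_texts(texts, OpenAIEmbeddings(), metadatas=metadatas,
                       redis_url="redis://localhost:6379", index_name="docs")

user_permissions = {"A", "C"}  # User_A in the example below
results = rds.similarity_search("some question", k=10)
# Drop results the current user is not allowed to see.
visible = [d for d in results if d.metadata.get("key") in user_permissions]
```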
For example, embed three documents into the vector store, and have two users with different permissions.
| Docs | key |
| :----- | :--: |
| Docs_A | A |
| Docs_B | B |
| Docs_C | C |

| User | permission |
| :----- | :--: |
| User_A | A, C |
| User_B | B |
In this use case, User_A can query the vectors of Docs_A and Docs_C, and User_B only Docs_B. | search specific vector by tag or key | https://api.github.com/repos/langchain-ai/langchain/issues/6720/comments | 5 | 2023-06-25T15:52:23Z | 2023-10-24T21:10:03Z | https://github.com/langchain-ai/langchain/issues/6720 | 1,773,353,594 | 6,720
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
i've made some comments on https://langchainers.hashnode.dev/getting-started-with-langchainjs
and https://langchainers.hashnode.dev/learn-how-to-integrate-language-models-llms-with-sql-databases-using-langchainjs
some things to update to make it working with the last langchain JS/TS packages
### Idea or request for content:
_No response_ | DOC: some upgrades to https://langchainers.hashnode.dev/getting-started-with-langchainjs | https://api.github.com/repos/langchain-ai/langchain/issues/6718/comments | 1 | 2023-06-25T15:05:51Z | 2023-06-25T15:19:55Z | https://github.com/langchain-ai/langchain/issues/6718 | 1,773,336,111 | 6,718 |
[
"hwchase17",
"langchain"
]
| ### System Info
All regular retrievers have an `add_documents` method that accepts a list of documents, but this retriever only exposes `add_texts`, which takes a list of strings rather than LangChain documents.
For comparison, the Weaviate hybrid search retriever is able to handle document lists:
```python
def add_documents(self, docs: List[Document], **kwargs: Any) -> List[str]:
    """Upload documents to Weaviate."""
    from weaviate.util import get_valid_uuid

    with self._client.batch as batch:
        ids = []
        for i, doc in enumerate(docs):
            metadata = doc.metadata or {}
            data_properties = {self._text_key: doc.page_content, **metadata}
            # If the UUID of one of the objects already exists
            # then the existing object will be replaced by the new object.
            if "uuids" in kwargs:
                _id = kwargs["uuids"][i]
            else:
                _id = get_valid_uuid(uuid4())
            batch.add_data_object(data_properties, self._index_name, _id)
            ids.append(_id)
    return ids
```
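Until that exists upstream, a thin helper in user code could emulate the missing method by unpacking the documents and delegating to the existing `add_texts`. A sketch (it assumes `add_texts` accepts a `metadatas` keyword; if your version does not, pass only the texts):

```python
from typing import Any, List
from langchain.schema import Document

def add_documents(retriever: Any, docs: List[Document], **kwargs: Any) -> None:
    # Illustrative helper: unpack Documents into texts + metadatas and delegate.
    texts = [doc.page_content for doc in docs]
    metadatas = [doc.metadata or {} for doc in docs]
    retriever.add_texts(texts, metadatas=metadatas, **kwargs)
```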
### Who can help?
@hw
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Import PineconeHybridSearch
### Expected behavior
retriever.add_documents(DocumentList) | Pinecone hybrid search is incomplete, missing add_documents method | https://api.github.com/repos/langchain-ai/langchain/issues/6716/comments | 2 | 2023-06-25T14:29:47Z | 2023-10-01T16:04:53Z | https://github.com/langchain-ai/langchain/issues/6716 | 1,773,318,895 | 6,716 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.125, Python 3.9 within Databricks
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain import PromptTemplate, HuggingFaceHub, LLMChain
### Expected behavior
Installation without issues | cannot import name 'get_callback_manager' from 'langchain.callbacks' | https://api.github.com/repos/langchain-ai/langchain/issues/6715/comments | 2 | 2023-06-25T12:28:06Z | 2023-10-01T16:04:58Z | https://github.com/langchain-ai/langchain/issues/6715 | 1,773,252,216 | 6,715 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The value of token_max in map_reduce should vary according to the model instead of being fixed at 3000.

### Motivation
When I use the gpt-3.5-turbo-16k model, I often encounter the ValueError 'A single document was so long it could not be combined' because token_max is too small. I always need to set a larger token_max value manually to resolve the issue.

### Your contribution
I would like to enhance this by deriving the value of token_max dynamically from the model used by the current chain.

| Update the token_max value in mp_reduce.ValueError:"A single document was so long it could not be combined "、"A single document was longer than the context length," | https://api.github.com/repos/langchain-ai/langchain/issues/6714/comments | 3 | 2023-06-25T10:19:42Z | 2023-07-05T08:13:50Z | https://github.com/langchain-ai/langchain/issues/6714 | 1,773,197,314 | 6,714 |
[
"hwchase17",
"langchain"
]
| Hi there, awesome project!
https://github.com/buhe/langchain-swift is a swift langchain copy, for ios or mac apps.
Chatbots, QA bots and Agents are done.
Current step:
- [ ] LLMs
- [x] OpenAI
- [ ] Vectorstore
- [x] Supabase
- [ ] Embedding
- [x] OpenAI
- [ ] Chain
- [x] Base
- [x] LLM
- [ ] Tools
- [x] Dummy
- [x] InvalidTool
- [ ] Serper
- [ ] Zapier
- [ ] Agent
- [x] ZeroShotAgent
- [ ] Memory
- [x] BaseMemory
- [x] BaseChatMemory
- [x] ConversationBufferWindowMemory
- [ ] Text Splitter
- [x] CharacterTextSplitter
- [ ] Document Loader
- [x] TextLoader
### Suggestion:
_No response_ | A swift langchain copy, for ios or mac apps. | https://api.github.com/repos/langchain-ai/langchain/issues/6712/comments | 1 | 2023-06-25T07:29:55Z | 2023-10-01T16:05:03Z | https://github.com/langchain-ai/langchain/issues/6712 | 1,773,116,562 | 6,712 |
[
"hwchase17",
"langchain"
]
| I defined some agents and some tools.
How can I get the execution order of the LLM and tool calls inside an agent? Should I use a callback or something else?
If I print from a callback, are the results in order? A sketch of a possible handler is below, followed by my agent setup.
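A rough sketch of such a handler (the class name is mine, not part of LangChain); events are appended as the callbacks fire, so the list itself records the order:

```python
from typing import Any, Dict, List
from langchain.callbacks.base import BaseCallbackHandler

class OrderRecordingHandler(BaseCallbackHandler):
    """Records the order in which LLM and tool calls start and end."""

    def __init__(self) -> None:
        self.events: List[str] = []

    def on_llm_start(self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) -> None:
        self.events.append("llm_start")

    def on_tool_start(self, serialized: Dict[str, Any], input_str: str, **kwargs: Any) -> None:
        self.events.append(f"tool_start: {serialized.get('name')}")

    def on_tool_end(self, output: str, **kwargs: Any) -> None:
        self.events.append("tool_end")
```

Passing an instance as `callbacks=[my_handle]` to `run` and printing `my_handle.events` afterwards would show the sequence in which the LLM and the tools were invoked.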
```
agent_orange = ChatAgent(llm_chain=orange_chain, allowed_tools=tool_names, output_parser=MyChatOutputParser())
agent_chatgpt = ChatAgent(llm_chain=chatgpt_chain, allowed_tools=tool_names,
output_parser=MyChatOutputParser())
agent_executor = MyAgentExecutor.from_agent_and_tools(agent=[agent_orange, agent_chatgpt],
tools=tools,
verbose=True
)
result = agent_executor.run("my query", callbacks=[my_handle])
``` | How can i get execution order inside agent? | https://api.github.com/repos/langchain-ai/langchain/issues/6711/comments | 2 | 2023-06-25T07:26:30Z | 2023-10-01T16:05:08Z | https://github.com/langchain-ai/langchain/issues/6711 | 1,773,115,263 | 6,711 |
[
"hwchase17",
"langchain"
]
| ### System Info
Apple M1
### Who can help?
I get this error : (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64'))
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just run the model
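A quick diagnostic before anything else: check which architecture the interpreter itself reports, since this error usually means an x86_64 wheel was installed under an arm64 Python (or the other way around). Just a check, not a fix:

```python
import platform

print(platform.machine())   # 'arm64' on Apple Silicon, 'x86_64' under Rosetta
print(platform.platform())
```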
### Expected behavior
must run on apple m1 | Error while running on Apple M1 Pro | https://api.github.com/repos/langchain-ai/langchain/issues/6709/comments | 0 | 2023-06-25T06:37:18Z | 2023-06-25T08:28:05Z | https://github.com/langchain-ai/langchain/issues/6709 | 1,773,096,079 | 6,709 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain: v0.0.214
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://github.com/hwchase17/langchain/blob/v0.0.214/langchain/vectorstores/base.py#L410
https://github.com/hwchase17/langchain/blob/v0.0.214/langchain/vectorstores/chroma.py#L225
https://github.com/hwchase17/langchain/blob/v0.0.214/langchain/vectorstores/chroma.py#L198
The `filter` option does not work when search_type is similarity_score_threshold
### Expected behavior
The following implementation works:
```python
def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
return self.similarity_search_with_score(query, k, **kwargs)
``` | chroma func _similarity_search_with_relevance_scores missing "kwargs" parameter | https://api.github.com/repos/langchain-ai/langchain/issues/6707/comments | 2 | 2023-06-25T05:57:30Z | 2023-10-01T16:05:14Z | https://github.com/langchain-ai/langchain/issues/6707 | 1,773,077,539 | 6,707 |
[
"hwchase17",
"langchain"
]
| ### System Info
- LangChain version: 0.0.214
- tiktoken version: 0.4.0
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Executing the following code raises `TypeError: expected string or buffer`.
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage
llm = ChatOpenAI(model_name="gpt-3.5-turbo-0613", temperature=0)
functions = [
{
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}
}
]
response = llm([HumanMessage(content="What is the weather like in Boston?")], functions=functions)
llm.get_num_tokens_from_messages([response])
```
`get_num_tokens_from_messages` internally converts messages to dicts with `_convert_message_to_dict` and then iterates over all key-value pairs to count the number of tokens.
The code expects each value to be a string, but when a function call is included, an exception is raised because the value contains a dictionary.
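A possible local workaround until this is handled upstream: stringify any non-string values (such as the function_call dict) before counting, so every value handed to the encoder is text. A sketch (only an approximation of the billed count):

```python
import json

def message_to_countable_dict(message_dict: dict) -> dict:
    # Flatten non-string values (e.g. the function_call dict) into JSON strings
    # so they can be tokenized like ordinary text.
    return {
        key: value if isinstance(value, str) else json.dumps(value)
        for key, value in message_dict.items()
        if value is not None
    }
```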
### Expected behavior
As far as I know, there is no officially documented way to calculate the exact token count consumption when using function call.
Someone on the OpenAI forum has [posted](https://community.openai.com/t/how-to-calculate-the-tokens-when-using-function-call/266573/10) a method for calculating the tokens, so perhaps that method could be adopted.
| ChatOpenAI.get_num_tokens_from_messages raises TypeError with function call response | https://api.github.com/repos/langchain-ai/langchain/issues/6706/comments | 2 | 2023-06-25T05:36:02Z | 2023-10-01T16:05:18Z | https://github.com/langchain-ai/langchain/issues/6706 | 1,773,070,930 | 6,706 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
When I use RecusiveUrlLoader, I get:
NotImplementedError: WebBaseLoader does not implement lazy_load()

### Suggestion:
_No response_ | NotImplementedError: WebBaseLoader does not implement lazy_load() | https://api.github.com/repos/langchain-ai/langchain/issues/6704/comments | 3 | 2023-06-25T05:06:40Z | 2023-10-05T16:08:30Z | https://github.com/langchain-ai/langchain/issues/6704 | 1,773,061,920 | 6,704 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I am trying to add memory to create_pandas_dataframe_agent to perform post processing on a model that I trained using Langchain. I am using the following code at the moment.
```
from langchain.llms import OpenAI
import pandas as pd
df = pd.read_csv('titanic.csv')
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), [df], verbose=True)
```
I tried adding memory = ConversationBufferMemory(memory_key="chat_history"), but that didn't help. I tried many other methods as well, but it seems that memory is not implemented for create_pandas_dataframe_agent.
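One direction that might work (hedged, I have not confirmed it is the intended API): forward a memory object through the extra kwargs, which should end up on the underlying AgentExecutor, and add {chat_history} to the prompt suffix. A sketch with an illustrative suffix; the default suffix from the installed version should be copied and extended rather than retyped:

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history")

agent = create_pandas_dataframe_agent(
    OpenAI(temperature=0),
    df,
    verbose=True,
    # Assumption: the suffix/input_variables below are illustrative placeholders.
    input_variables=["df", "input", "chat_history", "agent_scratchpad"],
    suffix=(
        "This is the result of `print(df.head())`:\n{df}\n\n"
        "Previous conversation:\n{chat_history}\n\n"
        "Begin!\nQuestion: {input}\n{agent_scratchpad}"
    ),
    memory=memory,  # assumption: extra kwargs reach AgentExecutor.from_agent_and_tools
)
```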
### Motivation
There is a major need in pandas workflows to save models as pickle files and to add new features to the dataset under study, which alters the original dataset for the next step. It seems like langchain currently doesn't support that.
### Your contribution
I can help with the implementation if necessary. | Memory seems not to be supported in create_pandas_dataframe_agent | https://api.github.com/repos/langchain-ai/langchain/issues/6699/comments | 7 | 2023-06-25T03:05:36Z | 2023-12-13T16:08:48Z | https://github.com/langchain-ai/langchain/issues/6699 | 1,773,025,734 | 6,699 |
[
"hwchase17",
"langchain"
]
|
### System Info
$ pip freeze | grep langchain
langchain==0.0.197
langchainplus-sdk==0.0.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
For some reason, in
db = Chroma.from_documents(texts, self.embeddings, persist_directory=db_path, client_settings=settings)
the persist_directory=db_path argument has no effect: upon db.persist() the data is stored in the default directory 'db' instead of db_path.
It only works if you explicitly set Settings(persist_directory=db_path, ...).
The probable reason is that in langchain's chroma.py, if you pass client_settings and 'persist_directory' is not part of those settings, it will not add 'persist_directory' the way the else branch does:
(line 77 ++)
```
if client is not None:
self._client = client
else:
if client_settings:
self._client_settings = client_settings <<< .... here ..........
else:
self._client_settings = chromadb.config.Settings()
if persist_directory is not None:
self._client_settings = chromadb.config.Settings(
chroma_db_impl="duckdb+parquet",
persist_directory=persist_directory,
)
self._client = chromadb.Client(self._client_settings)
self._embedding_function = embedding_function
self._persist_directory = persist_directory
```
However, chromadb.__init__.py expects 'persist_directory' in the settings (line 44); otherwise it will use the default:
```
elif setting == "duckdb+parquet":
require("persist_directory")
logger.warning(
f"Using embedded DuckDB with persistence: data will be stored in: {settings.persist_directory}"
)
```
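As noted above, the explicit-settings route works, so the practical workaround is to bake persist_directory into the Settings object itself rather than relying only on the from_documents argument. A sketch (variable names as in my snippet above):

```python
from chromadb.config import Settings

settings = Settings(
    chroma_db_impl="duckdb+parquet",
    persist_directory=db_path,  # must be set here, not only on from_documents
)
db = Chroma.from_documents(texts, self.embeddings, persist_directory=db_path, client_settings=settings)
db.persist()
```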
### Expected behavior
db = Chroma.from_documents(texts, self.embeddings, persist_directory=db_path, client_settings=settings)
should use db_path instead of 'db' | Chroma.from_documents(...persist_directory=db_path) has no effect | https://api.github.com/repos/langchain-ai/langchain/issues/6696/comments | 3 | 2023-06-24T23:24:49Z | 2023-10-29T16:05:41Z | https://github.com/langchain-ai/langchain/issues/6696 | 1,772,962,572 | 6,696 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.214
Python 3.10
Ubuntu 22.04
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
agent = create_csv_agent(
    ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613"),
    ["titanic.csv", "titanic_age_fillna.csv"],
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
)
agent.run("how many rows in the age column are different between the two dfs?")
```
Got error: ValueError: Invalid file path or buffer object type:
### Expected behavior
According to Langchain documentation https://python.langchain.com/docs/modules/agents/toolkits/csv.html, the CSV agent "can interact with multiple csv files passed in as a list", and it should not generate an error. | Issue: create_csv_agent() error when loading a list of csv files | https://api.github.com/repos/langchain-ai/langchain/issues/6695/comments | 7 | 2023-06-24T23:06:32Z | 2023-07-08T15:24:50Z | https://github.com/langchain-ai/langchain/issues/6695 | 1,772,957,835 | 6,695 |
[
"hwchase17",
"langchain"
]
| ### System Info
windows 11 python 3.9.16 langchain 0.0.212
### Who can help?
Code from https://python.langchain.com/docs/modules/data_connection/document_loaders/integrations/sitemap
```python
from langchain.document_loaders.sitemap import SitemapLoader
sitemap_loader = SitemapLoader(web_path="https://langchain.readthedocs.io/sitemap.xml")
docs = sitemap_loader.load()
```
throws:
```python
self._request(hdrs.METH_GET, url, allow_redirects=allow_redirects, **kwargs)
TypeError: _request() got an unexpected keyword argument 'verify'
```
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders.sitemap import SitemapLoader
sitemap_loader = SitemapLoader(web_path="https://langchain.readthedocs.io/sitemap.xml")
docs = sitemap_loader.load()
### Expected behavior
to work or get a doc update | sitemap loader throws error TypeError: _request() got an unexpected keyword argument 'verify', many docs refer to wrong links for sitemap as well. | https://api.github.com/repos/langchain-ai/langchain/issues/6691/comments | 8 | 2023-06-24T19:11:29Z | 2023-10-24T16:07:28Z | https://github.com/langchain-ai/langchain/issues/6691 | 1,772,877,397 | 6,691 |
[
"hwchase17",
"langchain"
]
| ### System Info
platform: macOS-13.2.1-arm64-arm-64bit
Python 3.11.3
langchain 0.0.212
langchainplus-sdk 0.0.17
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm trying to get my Agent to use my tools correctly - from its internal dialogue I can see it knows it's about to use a tool incorrectly, but then it just goes ahead and does it anyway, resulting in an exception from my arg schema.
Here's the input and output with my agent:
---
💬: add a todo to by tortillas to my grocery shopping todo
> Entering new chain...
Action:
```
{
"action": "save_sub_todo",
"action_input": {
"name": "Buy tortillas",
"tags": ["grocery shopping"],
"parent_id": "ID of the grocery shopping todo"
}
}
```
Replace "ID of the grocery shopping todo" with the actual ID of the todo for grocery shopping. You can use `get_all_todos()` to find the ID if you don't know it.
```
Traceback (most recent call last):
File "/Users/jacobbrooks/PythonProjects/jbbrsh/term.py", line 32, in <module>
Fire(run)
File "/Users/jacobbrooks/PythonProjects/jbbrsh/.venv/lib/python3.11/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jacobbrooks/PythonProjects/jbbrsh/.venv/lib/python3.11/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
^^^^^^^^^^^^^^^^^^^^
File "/Users/jacobbrooks/PythonProjects/jbbrsh/.venv/lib/python3.11/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jacobbrooks/PythonProjects/jbbrsh/term.py", line 24, in run
response = agent.run(user_input)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/jacobbrooks/PythonProjects/jbbrsh/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 290, in run
return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
Here are the tools I'm passing to my agent:
---
```python
class TodoInput(BaseModel):
name: str = Field(description="Todo name")
tags: List[str] = Field(description="Tags to categorize todos")
@tool(return_direct=False, args_schema=TodoInput)
def save_todo(name: str, tags: List[str]) -> str:
"""
Saves a todo for the user
    The string returned contains the todo's name and its ID. The ID can be used to add child todos.
"""
notion_todo = NotionTodo.new(
notion_client=_notion,
database_id=DATABASE_ID,
properties={
"Tags": tags,
"Name": name
}
)
_notion.todos.append(notion_todo)
return f"todo saved: {notion_todo}"
class SubTodoInput(TodoInput):
parent_id: str = Field(
description="ID for parent todo, only needed for sub-todos",
)
@root_validator
def validate_query(cls, values: Dict[str, Any]) -> Dict:
parent_id = values["parent_id"]
if re.match(r"[\d\w]{8}-[\d\w]{4}-[\d\w]{4}-[\d\w]{4}-[\d\w]{12}", parent_id) is None:
raise ValueError(f'Invalid parent ID "{values["parent_id"]}"')
return values
@tool(return_direct=False, args_schema=SubTodoInput)
def save_sub_todo(name: str, tags: List[str], parent_id:str) -> str:
"""
Saves a child todo with a parent todo for the user
    The string returned contains the todo's name and its ID. The ID is formatted like so: "f1ab8b74-6b67-46b1-81ec-519805c7a1cb"
    Do not make up IDs! Use get_all_todos to find the best ID if a real one is unavailable.
"""
notion_todo = NotionTodo.new(
notion_client=_notion,
database_id=DATABASE_ID,
properties={
"Tags": tags,
"Name": name,
"Parent todo": parent_id
}
)
_notion.todos.append(notion_todo)
return f"todo saved: {notion_todo}"
@tool
def get_all_todos():
"""
Returns a list of all existing todos.
    Useful for finding an ID for an existing todo when you have to add a child todo.
"""
return '\n'.join([
f"'{t.name}' {'Complete' if t.complete else 'Incomplete'} id={t.id} parent_id={t.parent_todo}"
for t in _notion.todos
])
```
Here's how I'm creating my agent
---
```python
_system_message = """
Your purpose is to store and track todos for your user
When using Tools with ID arguments don't make up IDs! Find the best ID from looking through all todos.
"""
llm = ChatOpenAI(temperature=0)
memory_key = "chat_history"
chat_history = MessagesPlaceholder(variable_name=memory_key)
memory = ConversationBufferMemory(memory_key=memory_key, return_messages=True)
agent = initialize_agent(
llm=llm,
tools = [HumanInputRun(), *tools],
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
memory = memory,
agent_kwargs={
"system_message": _system_message,
"memory_prompts": [chat_history],
"input_variables": [
"input", "agent_scratchpad", memory_key
]
},
**kwargs
)
return agent
```
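One mitigation worth trying: move the ID check into the tool body and return an instructive string instead of raising, so the agent gets a chance to call get_all_todos and retry on its next step. A sketch that would replace the root_validator, otherwise reusing the code above:

```python
import re

_UUID_RE = re.compile(r"[\d\w]{8}-[\d\w]{4}-[\d\w]{4}-[\d\w]{4}-[\d\w]{12}")

@tool(return_direct=False)
def save_sub_todo(name: str, tags: List[str], parent_id: str) -> str:
    """Saves a child todo under an existing parent todo."""
    if _UUID_RE.fullmatch(parent_id) is None:
        # Give the agent guidance instead of crashing the run.
        return (
            f'"{parent_id}" is not a valid todo ID. '
            "Call get_all_todos to look up the real ID, then call save_sub_todo again."
        )
    # ... create the todo exactly as before ...
    return "todo saved"
```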
### Expected behavior
The bot's internal dialogue _knows_ that it's about to use an invalid ID, and it knows how to go about getting the real ID:
> Replace "ID of the grocery shopping todo" with the actual ID of the todo for grocery shopping. You can use `get_all_todos()` to find the ID if you don't know it.
But it just goes ahead and uses the tool anyway, resulting in an exception.
The Agent should acknowledge this insight, use the tool it knows it should use to get the proper ID, and then reformat the initial tool attempt with the legitimate ID. | Agent knows how to correctly proceed, but uses tool incorrectly anyways | https://api.github.com/repos/langchain-ai/langchain/issues/6690/comments | 3 | 2023-06-24T18:58:15Z | 2024-04-08T05:17:16Z | https://github.com/langchain-ai/langchain/issues/6690 | 1,772,873,469 | 6,690 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have a use case where the agent is supposed to perform certain activities (going over the metadata and telling whether the currently selected column is fit for querying). This would need a `zero-shot-react-agent` to use several LLMs as tools instead of the usual ones (like the search tool shown everywhere). The [documentation](https://python.langchain.com/docs/modules/agents/how_to/custom_mrkl_agent#multiple-inputs) shows that this is possible but is itself quite ambiguous.
How do I create an LLMChain as a tool if it always needs a prompt at initialisation? And the prompt can only be created after this LLMChain has been listed as a tool in the `create_prompt` function?
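For reference, the pattern seen elsewhere is to build the LLMChain with its own prompt first and only then wrap its run method in a Tool, so the agent's create_prompt never needs the chain's prompt. A sketch (the tool name, description and prompt are illustrative):

```python
from langchain import LLMChain, PromptTemplate
from langchain.agents import Tool
from langchain.llms import OpenAI

column_prompt = PromptTemplate(
    input_variables=["column_metadata"],
    template="Given this column metadata, say whether the column is fit for querying:\n{column_metadata}",
)
column_chain = LLMChain(llm=OpenAI(temperature=0), prompt=column_prompt)

column_tool = Tool(
    name="column_fitness_checker",
    func=column_chain.run,
    description="Checks whether a column described by its metadata is fit for querying.",
)
```

The resulting `column_tool` can then be included in the tools list when the agent prompt is created.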
### Suggestion:
_No response_ | Issue: Can an LLM be used as a tool? | https://api.github.com/repos/langchain-ai/langchain/issues/6687/comments | 4 | 2023-06-24T18:41:11Z | 2023-11-07T03:29:34Z | https://github.com/langchain-ai/langchain/issues/6687 | 1,772,868,158 | 6,687 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I would like to know what software was used to create the flowcharts in the document. They look very beautiful.
### Suggestion:
_No response_ | Issue: What software was used to create the flowcharts in the document? | https://api.github.com/repos/langchain-ai/langchain/issues/6681/comments | 1 | 2023-06-24T09:21:32Z | 2023-09-30T16:05:07Z | https://github.com/langchain-ai/langchain/issues/6681 | 1,772,562,004 | 6,681 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
This problem occurred when I tried to use tool support for a conversation-type agent using an agent of type SRUCTURED_CHAT_SHOT_REACT_DESCRIPTION. Here is some of the source code and the error:

env:
Python 3.10
Langchain latest
Windows 10 Professional, latest
### Suggestion:
_No response_ | Help for an error that appears in the CONVERSATIONAL Agent | https://api.github.com/repos/langchain-ai/langchain/issues/6680/comments | 2 | 2023-06-24T08:20:06Z | 2023-09-30T16:05:13Z | https://github.com/langchain-ai/langchain/issues/6680 | 1,772,530,890 | 6,680 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain: 2.12
commit: #6455
version 2.11 does not have this
### Who can help?
@rlancemartin, @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
workaround: install bs4 manually (pip install bs4)
`from langchain.agents import initialize_agent, AgentType`
leads to:
```
File "//main.py", line 17, in <module>
from langchain.agents import initialize_agent, AgentType
File "/usr/local/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/usr/local/lib/python3.11/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 16, in <module>
from langchain.agents.tools import InvalidTool
File "/usr/local/lib/python3.11/site-packages/langchain/agents/tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "/usr/local/lib/python3.11/site-packages/langchain/tools/__init__.py", line 3, in <module>
from langchain.tools.arxiv.tool import ArxivQueryRun
File "/usr/local/lib/python3.11/site-packages/langchain/tools/arxiv/tool.py", line 12, in <module>
from langchain.utilities.arxiv import ArxivAPIWrapper
File "/usr/local/lib/python3.11/site-packages/langchain/utilities/__init__.py", line 3, in <module>
from langchain.utilities.apify import ApifyWrapper
File "/usr/local/lib/python3.11/site-packages/langchain/utilities/apify.py", line 5, in <module>
from langchain.document_loaders import ApifyDatasetLoader
File "/usr/local/lib/python3.11/site-packages/langchain/document_loaders/__init__.py", line 97, in <module>
from langchain.document_loaders.recursive_url_loader import RecusiveUrlLoader
File "/usr/local/lib/python3.11/site-packages/langchain/document_loaders/recursive_url_loader.py", line 5, in <module>
from bs4 import BeautifulSoup
ModuleNotFoundError: No module named 'bs4'
```
requirements.txt:
```
openai==0.27.8
fastapi==0.97.0
websockets==11.0.3
pydantic==1.10.9
langchain==0.0.212
uvicorn[standard]
jinja2
lancedb==0.1.8
itsdangerous
tiktoken==0.4.0
```
### Expected behavior
I think
from bs4 import BeautifulSoup
in recursive_url_loader.py
should have been a local import | crash because of missing bs4 dependency in version 2.12 | https://api.github.com/repos/langchain-ai/langchain/issues/6679/comments | 3 | 2023-06-24T06:31:34Z | 2023-06-24T20:54:12Z | https://github.com/langchain-ai/langchain/issues/6679 | 1,772,489,819 | 6,679 |
[
"hwchase17",
"langchain"
]
| ### System Info
see: https://github.com/hwchase17/langchain/discussions/1533
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
use FAISS on windows and then try to reuse those embeddings on linux
### Expected behavior
```
I have a problem: when I run a project with langchain on windows, everything works perfectly, but with the same conditions on linux (libraries, python version, etc.) it doesn't work and throws this error. Does anyone know what it could be?
2023-03-08 15:13:24 Failed to run listener function (error: search() missing 3 required positional arguments: 'k', 'distances', and 'labels')
2023-03-08 15:13:24 Traceback (most recent call last):
2023-03-08 15:13:24 File "/usr/local/lib/python3.9/site-packages/slack_bolt/listener/thread_runner.py", line 120, in run_ack_function_asynchronously
2023-03-08 15:13:24 listener.run_ack_function(request=request, response=response)
2023-03-08 15:13:24 File "/usr/local/lib/python3.9/site-packages/slack_bolt/listener/custom_listener.py", line 50, in run_ack_function
2023-03-08 15:13:24 return self.ack_function(
2023-03-08 15:13:24 File "//./main.py", line 28, in question
2023-03-08 15:13:24 response = processQuestion(query)
2023-03-08 15:13:24 File "/api.py", line 42, in processQuestion
2023-03-08 15:13:24 sources = doSimilaritySearch(index, query)
2023-03-08 15:13:24 File "/utils.py", line 87, in doSimilaritySearch
2023-03-08 15:13:24 docs = indexFaiss.similarity_search(query, k=5)
2023-03-08 15:13:24 File "/usr/local/lib/python3.9/site-packages/langchain/vectorstores/faiss.py", line 166, in similarity_search
2023-03-08 15:13:24 docs_and_scores = self.similarity_search_with_score(query, k)
2023-03-08 15:13:24 File "/usr/local/lib/python3.9/site-packages/langchain/vectorstores/faiss.py", line 136, in similarity_search_with_score
2023-03-08 15:13:24 docs = self.similarity_search_with_score_by_vector(embedding, k)
2023-03-08 15:13:24 File "/usr/local/lib/python3.9/site-packages/langchain/vectorstores/faiss.py", line 110, in similarity_search_with_score_by_vector
2023-03-08 15:13:24 scores, indices = self.index.search(np.array([embedding], dtype=np.float32), k)
2023-03-08 15:13:24 TypeError: search() missing 3 required positional arguments: 'k', 'distances', and 'labels'
``` | Problem to run on linux but not on windows | https://api.github.com/repos/langchain-ai/langchain/issues/6678/comments | 3 | 2023-06-24T06:10:07Z | 2024-03-22T07:26:20Z | https://github.com/langchain-ai/langchain/issues/6678 | 1,772,484,021 | 6,678 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
In my application, I am using the ConversationalRetrievalChain with the "stuff" chain type, FAISS as the vector store and ConversationBufferMemory. I have noticed that when I ask a question related to a previous response (i.e. a request that should only be answered from the chat history, such as a summarization request), the system still searches for answers in both the vector store and the chat history. This often leads to incorrect or irrelevant answers (the summarization will include data from the chat history as well as vector-store data that never appeared in the previous conversation).
I've tried to address this issue by passing a custom prompt using combine_docs_chain_kwargs to specify whether a response should be generated based on the chat history only, or the chat history and the vector store. However, this approach hasn't been effective.
It seems that the system is currently unable to correctly discern the user's intention to exclusively use the chat history for generating a response. It's crucial that the system can accurately determine this to provide relevant and accurate responses.
### Suggestion:
I propose that the system should be enhanced with a mechanism to first detect the user's intention to either:
Select a response from the chat history only, or
Select a response from the chat history in combination with the vector store.
This could possibly be achieved by conditionally including or excluding certain parts of a prompt, such as {context} (from stuff_prompt.py, this is the default prompt used by stuff chain type), based on user input or intentions. However, this logic would need to be implemented during the data preparation for the template.
I haven't figured out a way to do so. Looking forward to your help!
| Issue: ConversationalRetrievalChain Fails to Distinguish User's Intention for Chat History Only or Chat History + Vector Store Answer | https://api.github.com/repos/langchain-ai/langchain/issues/6677/comments | 5 | 2023-06-24T04:38:11Z | 2023-10-30T16:06:18Z | https://github.com/langchain-ai/langchain/issues/6677 | 1,772,449,472 | 6,677 |
[
"hwchase17",
"langchain"
]
| ### System Info
```python
Linux-6.3.7-1-default-x86_64-with-glibc2.37
Python Version: 3.10.11
Langchain Version: 0.0.21
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Input
```Python
from langchain.chat_models import ChatVertexAI
from langchain.schema import HumanMessage, SystemMessage, AIMessage
chat = ChatVertexAI(temperature=0.7,verbose=True)
chat(
[
SystemMessage(content="Assume you are an expert tour guide.Help the user and assist him in his travel"),
HumanMessage(content="I like lush green valleys with cool weather. Where should I go?"),
AIMessage(content="Switzerland is a nice place to visit"),
HumanMessage(content="Name some of the popular places there to visit")
]
)
```
Response
```python
File [~/gamedisk/PyTorch2.0_env/lib/python3.10/site-packages/langchain/chat_models/vertexai.py:136], in ChatVertexAI._generate(self, messages, stop, run_manager, **kwargs)
134 chat = self.client.start_chat(**params)
135 for pair in history.history:
--> 136 chat._history.append((pair.question.content, pair.answer.content))
137 response = chat.send_message(question.content, **params)
138 text = self._enforce_stop_words(response.text, stop)
AttributeError: 'ChatSession' object has no attribute '_history'
```
### Expected behavior
It is expected to return an object similar to this
```python
AIMessage(content='Lauterbrunnen is a nice place to visit', additional_kwargs={}, example=False) | [ChatVertexAI] 'ChatSession' object has no attribute '_history' | https://api.github.com/repos/langchain-ai/langchain/issues/6675/comments | 4 | 2023-06-24T04:02:32Z | 2023-10-02T16:06:29Z | https://github.com/langchain-ai/langchain/issues/6675 | 1,772,436,936 | 6,675 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
With the QA generation over a document store, is it possible to use Hugging Face models (local) instead of ChatOpenAI?
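One direction (not verified end to end) is to swap in a local Hugging Face pipeline wherever an LLM is expected. A sketch; the model id is illustrative, and small local models may struggle with the JSON output the QA generation chain expects:

```python
from langchain.llms import HuggingFacePipeline
from langchain.chains import QAGenerationChain

llm = HuggingFacePipeline.from_model_id(
    model_id="google/flan-t5-large",      # any local text2text-generation model
    task="text2text-generation",
    model_kwargs={"max_length": 512},
)
qa_gen_chain = QAGenerationChain.from_llm(llm)
```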
### Suggestion:
_No response_ | Issue: using different local models for QA generation | https://api.github.com/repos/langchain-ai/langchain/issues/6674/comments | 1 | 2023-06-24T04:02:09Z | 2023-09-30T16:05:28Z | https://github.com/langchain-ai/langchain/issues/6674 | 1,772,436,473 | 6,674 |
[
"hwchase17",
"langchain"
]
| ### System Info
v0.0.211
### Who can help?
@hw
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use an OpenAPI spec which contains a key `format` with a known type like `date`.
```yaml
date_de_naissance_dirigeant_min:
name: date_de_naissance_dirigeant_min
in: query
description: Date de naissance minimale du dirigeant (ou de l'un des dirigeants de l'entreprise pour une recherche d'entreprises), au format JJ-MM-AAAA.
required: false
schema:
type: string
format: date
example: 1970-01-01
```
This gets translated into
```python
'date_de_naissance_dirigeant_min': {
'type': 'string',
'schema_format': 'date',
'description': "Date de naissance minimale du dirigeant (ou de l'un des dirigeants de l'entreprise pour une recherche d'entreprises), au format JJ-MM-AAAA.",
'example': datetime.date(1970, 1, 1),
},
```
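Until this is fixed upstream, a blunt workaround on the serialization side is to stringify anything JSON cannot handle natively:

```python
import datetime
import json

payload = {"date_de_naissance_dirigeant_min": datetime.date(1970, 1, 1)}
json.dumps(payload, default=str)  # '{"date_de_naissance_dirigeant_min": "1970-01-01"}'
```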
### Expected behavior
No objects other than strings and lists should be instantiated by `openapi_spec_to_openai_fn` | openapi_spec_to_openai_fn generates Date objects which are not JSON serializable | https://api.github.com/repos/langchain-ai/langchain/issues/6671/comments | 3 | 2023-06-23T22:52:10Z | 2023-09-30T16:05:33Z | https://github.com/langchain-ai/langchain/issues/6671 | 1,772,260,655 | 6,671
[
"hwchase17",
"langchain"
]
| ### System Info
|software|Version|
|:---:|:---:|
|python|3.10.11|
|LangChain|0.0.209|
|Chroma|0.3.26|
|Windows|11|
|Ubuntu|22.06|
I have tried on both windows and ubuntu
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am having trouble adding multiple documents into a vectordb. I am using chromadb here.
The following loads, splits and embeds 2 text files and stores them in a persistent vector database. Then it queries the database.
```python
from langchain.document_loaders import TextLoader
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
import os
from getpass import getpass
OPENAI_API_KEY = getpass()
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
embeddings = OpenAIEmbeddings()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
persist_directory = "db"
db = Chroma(persist_directory=persist_directory, embedding_function=embeddings)
# -------------------------- Adding spacex_wiki.txt -------------------------- #
loader = TextLoader("this_has_to_work/spacex_wiki.txt", encoding="utf8")
documents = loader.load()
docs = text_splitter.split_documents(documents)
db.add_documents(docs)
# ------------------------- Adding imploson_wiki.txt ------------------------- #
loader = TextLoader("this_has_to_work/implosion_wiki.txt", encoding="utf8")
documents = loader.load()
docs = text_splitter.split_documents(documents)
db.add_documents(docs)
db.persist()
# --------------------------- querying the vectordb -------------------------- #
db = None
db = Chroma(persist_directory=persist_directory, embedding_function=embeddings)
retriever = db.as_retriever(search_type="mmr")
query = "What is implosion?"
print(query)
print(retriever.get_relevant_documents(query)[0])
print("\n\n")
query = "Who is elon?"
print(query)
print(retriever.get_relevant_documents(query)[0])
print("\n\n")
```
The above code runs without a problem and is able to retrieve from both text files. The problem starts with the following code.
The following code only loads the vectordb from the persistent location.
```python
from langchain.document_loaders import TextLoader
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
import os
from getpass import getpass
OPENAI_API_KEY = getpass()
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
embeddings = OpenAIEmbeddings()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
persist_directory = "db"
db = Chroma(persist_directory=persist_directory, embedding_function=embeddings)
# --------------------------- querying the vectordb -------------------------- #
retriever = db.as_retriever(search_type="mmr")
query = "What is implosion?"
print(query)
print(retriever.get_relevant_documents(query)[0])
print("\n\n")
query = "Who is elon?"
print(query)
print(retriever.get_relevant_documents(query)[0])
print("\n\n")
```
The above code will only return results from the first document stored in the vectordb (spacex_wiki.txt), no matter what the prompt is.
The following are the text files used.
[implosion_wiki.txt](https://github.com/hwchase17/langchain/files/11850301/implosion_wiki.txt)
[spacex_wiki.txt](https://github.com/hwchase17/langchain/files/11850303/spacex_wiki.txt)
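A check that might help narrow this down: count how many embeddings are actually present after the reload, via the (private) underlying collection. If the count covers both files, the problem is in retrieval rather than persistence:

```python
db = Chroma(persist_directory=persist_directory, embedding_function=embeddings)
# _collection is an internal attribute of the LangChain wrapper, used here only for inspection.
print(db._collection.count())
```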
### Expected behavior
It is expected that information from both documents can be retrieved when the vectordb is loaded from the persistent location.
However, only the first embedded document can be retreived. | Chromadb only returns the first document from persistent db | https://api.github.com/repos/langchain-ai/langchain/issues/6657/comments | 3 | 2023-06-23T16:36:06Z | 2023-12-15T12:38:44Z | https://github.com/langchain-ai/langchain/issues/6657 | 1,771,741,425 | 6,657 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.208
Archcraft x86_64
Python 3.11.3
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Vector Stores / Retrievers
- [X] Document Loaders
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Export Chat from Whatsapp
2. The exported chat contains messages that aren't being extracted by the regex. Example :
`7/19/22, 11:26 pm - User: Message`
https://github.com/hwchase17/langchain/blob/980c8651743b653f994ad6b97a27b0fa31ee92b4/langchain/document_loaders/whatsapp_chat.py#L43
There are two issues here:
1. The regex is looking for a regular space character, but in my exported message there was a Unicode narrow no-break space (NNBSP, U+202F)
2. AM/PM are expected in upper case, whereas my export was in lower case.
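For illustration only (this is not the loader's actual pattern), a regex that does match the exported line above, tolerating the narrow no-break space and lower-case am/pm:

```python
import re

line = "7/19/22, 11:26\u202fpm - User: Message"   # U+202F before "pm"

pattern = re.compile(
    r"(\d{1,2}/\d{1,2}/\d{2,4}, \d{1,2}:\d{2}[ \u202f]?(?:[APap][Mm])?) - ([^:]+): (.+)"
)
match = pattern.match(line)
print(match.groups() if match else None)
```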
### Expected behavior
Message is parsed successfully. | WhatsAppChatLoader doesn't extract messages exported from WhatsApp | https://api.github.com/repos/langchain-ai/langchain/issues/6654/comments | 0 | 2023-06-23T15:42:11Z | 2023-06-26T09:16:16Z | https://github.com/langchain-ai/langchain/issues/6654 | 1,771,671,290 | 6,654 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Since the doc refactor, users are now limited to four search results per query.
<img width="837" alt="image" src="https://github.com/hwchase17/langchain/assets/1082786/e340d78c-b8bb-4242-93bc-0d96d4514b44">
I think users should be able to purchase tokens if they would like to be able to access more search results, like perhaps 1 token per 10 extra results. This would enable functionality similar to the previous search functions, which would return up to 50 results.
Tokens could also be used for respecting `@media (prefers-color-scheme: dark)` since right now my laptop is not rated high enough for the default brightness and I would not like to blow out my display. Lastly, I would be willing to pay 5 tokens to increase the font weight, since it is unfortunately not very accessible for people with low vision, though that price should probably be determined by what the market will bear.
### Motivation
Motivation: find broad array code definitions and usage examples when trying to integrate a piece of the library into my application.
Related to #6300.
Will allow users to surface relevant information without having to implement custom crawler/indexer for the docs.
### Your contribution
I will happily serve as QA tester to test the amount of search results returned. I don't think my sunglasses offer enough protection to test whether the docs site respects the dark-mode CSS media query. | Allow users to purchase tokens for more search results | https://api.github.com/repos/langchain-ai/langchain/issues/6651/comments | 1 | 2023-06-23T14:10:15Z | 2023-09-29T16:05:33Z | https://github.com/langchain-ai/langchain/issues/6651 | 1,771,531,070 | 6,651 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hello,
during the development of an application that needs to authenticate to Azure services and uses the [AzureChatOpenAI](https://github.com/hwchase17/langchain/blob/master/langchain/chat_models/azure_openai.py) wrapper, we encountered an error because the model could not use the 'azure_ad' type.
It seems that this class always sets openai_api_type to the default value 'azure', even when an environment variable called 'OPENAI_API_TYPE' specifies 'azure_ad'.
Why is it so?
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
answering_llm=AzureChatOpenAI(
deployment_name=ANSWERING_MODEL_CONFIG.model_name,
model_name=ANSWERING_MODEL_CONFIG.model_type, #"gpt-3.5-turbo"
openai_api_type="azure_ad", # IF THIS IS NOT EXPLICITLY PASSED IT FAILS
openai_api_key=auth_token,
temperature=ANSWERING_MODEL_CONFIG.temperature,
max_tokens=ANSWERING_MODEL_CONFIG.max_tokens
)
### Expected behavior
We expect the wrapper to take the value of the environmental variable correctly. | [AzureChatOpenAI] openai_api_type can't be changed from the default 'azure' value | https://api.github.com/repos/langchain-ai/langchain/issues/6650/comments | 1 | 2023-06-23T14:09:47Z | 2023-08-04T03:21:42Z | https://github.com/langchain-ai/langchain/issues/6650 | 1,771,530,370 | 6,650 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.11.3
MacOs
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I encountered a problem during my initial installation of the Langchain package. I adhered to the installation instructions provided at https://python.langchain.com/docs/get_started/installation.
The command I used for installation was pip install langchain, which resulted in the installation of Langchain version 0.0.209.
However, when I attempted to execute the following code:
```
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI()
res = chat.predict_messages([HumanMessage(
content="Translate this sentence from English to French. I love programming.")])
print(res.content)
```
I received an error message stating that the `predict_messages` function was not available. It appears that the package version available on pip does not align with the latest version on the GitHub repository.
Interestingly, when I installed the package from the cloned repository, it worked as expected.
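For anyone hitting the same thing, a quick way to confirm which version is active, plus the older-style call that the pip release does expose, can serve as a stopgap:

```python
import langchain
print(langchain.__version__)  # 0.0.209 when installed from pip

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI()
# Older-style invocation that predates predict_messages:
res = chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
print(res.content)
```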
### Expected behavior
After installing the Langchain package using pip install langchain, I should be able to import the OpenAI module from `langchain.chat_models` and use the predict function without any issues. The `predict_messages` function should be available and functional in the pip version of the package, just as it is in the version available in the GitHub repository. | Installation Issue with Langchain Package - 'predict_messages' Function Not Available in Pip Version 0.0.209 | https://api.github.com/repos/langchain-ai/langchain/issues/6643/comments | 2 | 2023-06-23T11:27:58Z | 2023-10-01T16:05:48Z | https://github.com/langchain-ai/langchain/issues/6643 | 1,771,298,502 | 6,643 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I would like to know how many **tokens** the tokenizer would generate for the **prompt** before making the **OpenAI** call, but I'm having trouble reproducing the count of the real call.
I'm trying two methods (which internally use the `tiktoken` library, if I'm not wrong) found in the documentation:
- [`get_num_tokens`](https://api.python.langchain.com/en/latest/modules/llms.html#langchain.llms.AI21.get_num_tokens)
- [`get_num_tokens_from_messages`](https://api.python.langchain.com/en/latest/modules/llms.html#langchain.llms.AI21.get_num_tokens_from_messages)
Then I check the number of prompt tokens with the callback `get_openai_callback` to see whether the calculation was correct:
```python
from langchain.llms import OpenAI
from langchain.schema import HumanMessage, SystemMessage, AIMessage
from langchain.callbacks import get_openai_callback
models_name = ["text-davinci-003", "gpt-3.5-turbo-0301", "gpt-3.5-turbo-0613"]
for model_name in models_name:
print(f"----{model_name}----")
llm = OpenAI(model_name = model_name)
print(llm)
text = "Hello world"
tokens = llm.get_num_tokens(text)
print(f"1) get_num_tokens: {tokens}")
human_message = HumanMessage(content=text)
system_message = SystemMessage(content=text)
ai_message = AIMessage(content=text)
tokens = llm.get_num_tokens_from_messages([human_message]), llm.get_num_tokens_from_messages([system_message]), llm.get_num_tokens_from_messages([ai_message])
print(f"2) get_num_tokens_from_messages: {tokens}")
with get_openai_callback() as cb:
llm_response = llm(text)
print(f"3) callback: {cb}")
```
The output is:
```
----text-davinci-003----
OpenAI
Params: {'model_name': 'text-davinci-003', 'temperature': 0.7, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'request_timeout': None, 'logit_bias': {}}
1) get_num_tokens: 2
2) get_num_tokens_from_messages: (4, 4, 4)
3) callback: Tokens Used: 23
Prompt Tokens: 2
Completion Tokens: 21
Successful Requests: 1
Total Cost (USD): $0.00045999999999999996
----gpt-3.5-turbo-0301----
OpenAIChat
Params: {'model_name': 'gpt-3.5-turbo-0301'}
1) get_num_tokens: 2
2) get_num_tokens_from_messages: (4, 4, 4)
3) callback: Tokens Used: 50
Prompt Tokens: 10
Completion Tokens: 40
Successful Requests: 1
Total Cost (USD): $0.0001
----gpt-3.5-turbo-0613----
OpenAIChat
Params: {'model_name': 'gpt-3.5-turbo-0613'}
1) get_num_tokens: 2
2) get_num_tokens_from_messages: (4, 4, 4)
3) callback: Tokens Used: 18
Prompt Tokens: 9
Completion Tokens: 9
Successful Requests: 1
Total Cost (USD): $0.0
```
I understand that each model counts tokens differently; for example, **text-davinci-003** gives the same number for the `get_num_tokens` output and the callback. The other two models, **gpt-3.5-turbo-0301** and **gpt-3.5-turbo-0613**, seem to report respectively 6 and 5 more tokens in the callback than `get_num_tokens_from_messages` does.
So how can I reproduce exactly the token calculation of the real call? Which is the official function used for it?
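One candidate is the message-overhead recipe from the OpenAI cookbook, which would explain the 10 and 9 prompt tokens above. A sketch; the per-message constants are model-specific and may change:

```python
import tiktoken

def num_tokens_from_messages(messages, model="gpt-3.5-turbo-0613"):
    encoding = tiktoken.encoding_for_model(model)
    # Per the cookbook: -0301 uses 4 tokens of overhead per message, later snapshots use 3.
    tokens_per_message = 4 if model == "gpt-3.5-turbo-0301" else 3
    num_tokens = 0
    for message in messages:
        num_tokens += tokens_per_message
        for value in message.values():
            num_tokens += len(encoding.encode(value))
    return num_tokens + 3  # every reply is primed with <|start|>assistant<|message|>

print(num_tokens_from_messages([{"role": "user", "content": "Hello world"}]))  # 9; 10 with -0301
```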
### Suggestion:
_No response_ | Tokenize before OpenAI call issues | https://api.github.com/repos/langchain-ai/langchain/issues/6642/comments | 2 | 2023-06-23T11:21:02Z | 2023-07-12T13:21:20Z | https://github.com/langchain-ai/langchain/issues/6642 | 1,771,287,642 | 6,642 |
[
"hwchase17",
"langchain"
]
Since LLaMA has no Chinese data, I want to fine-tune the LLaMA model with Chinese data. Does langchain provide this capability? | Is there a feature for fine-tuning the LLaMA pretrained model? | https://api.github.com/repos/langchain-ai/langchain/issues/6641/comments | 1 | 2023-06-23T10:34:28Z | 2023-09-29T16:05:43Z | https://github.com/langchain-ai/langchain/issues/6641 | 1,771,224,942 | 6,641
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.207
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.sql_database import SQLDatabase
db = SQLDatabase.from_uri("sqlite:///sample.db",)
db.get_usable_table_names() # Changes table name order each time the application is restarted
```
Current implementation
```python
class SQLDatabase:
def get_usable_table_names(self) -> Iterable[str]:
if self._include_tables:
return self._include_tables
return self._all_tables - self._ignore_tables # THIS IS A SET
class ListSQLDatabaseTool(BaseSQLDatabaseTool, BaseTool):
def _run(self, tool_input: str = "", ...) -> str:
return ", ".join(self.db.get_usable_table_names()) # ORDER CHANGES EACH RUN
```
### Expected behavior
```python
class ListSQLDatabaseTool(BaseSQLDatabaseTool, BaseTool):
def _run(self, tool_input: str = "", ...) -> str:
return ", ".join(sorted(self.db.get_usable_table_names()))
``` | sql_db_list_tables returning different order each time making caching impossible | https://api.github.com/repos/langchain-ai/langchain/issues/6640/comments | 2 | 2023-06-23T10:32:43Z | 2023-09-30T16:05:48Z | https://github.com/langchain-ai/langchain/issues/6640 | 1,771,222,506 | 6,640 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
In order to read all the text of a Wikipedia page, we would need to allow overriding the hard limit of 4000 characters set in `WikipediaAPIWrapper`.
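As a stopgap, the wrapper can be used directly so the limit can be raised. A sketch, assuming the wrapper's `load` method (the one `WikipediaLoader` calls under the hood) and an illustrative query:

```python
from langchain.utilities import WikipediaAPIWrapper

wiki = WikipediaAPIWrapper(top_k_results=3, doc_content_chars_max=100_000)
docs = wiki.load("Alan Turing")  # returns Documents, like WikipediaLoader would
```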
### Suggestion:
Just add a new argument to `WikipediaLoader` named `doc_content_chars_max` (the very same name that `WikipediaAPIWrapper` uses under the hood) and use it when instantiating the client. | Issue: Set doc_content_chars_max with WikipediaLoader | https://api.github.com/repos/langchain-ai/langchain/issues/6639/comments | 2 | 2023-06-23T10:20:04Z | 2023-10-30T09:11:52Z | https://github.com/langchain-ai/langchain/issues/6639 | 1,771,205,138 | 6,639
[
"hwchase17",
"langchain"
]
| ### System Info
langchain - 0.0.198
platform - ubuntu
python - 3.10.11
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The objective is to pass additional variables to the `_call` method of a CustomLLM.
Colab Link - https://colab.research.google.com/drive/19VSmSEBq5D0MDXQ3CF0rrmOdGjdaELUj?usp=sharing
Sample code:
```
from langchain import PromptTemplate, LLMChain
from langchain.llms.base import LLM
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer
model_name = "facebook/opt-350m"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
class CustomLLM(LLM):
def _call(self, prompt, stop=None, **kwargs) -> str:
print("Kwargs: ", kwargs)
inputs = tokenizer([prompt], return_tensors="pt")
response = model.generate(**inputs, max_new_tokens=128)
response = tokenizer.decode(response[0])
return response
@property
def _identifying_params(self):
return {"name_of_model": model_name}
@property
def _llm_type(self) -> str:
return "custom"
llm = CustomLLM()
prompt_template = "Answer the question - {question}"
prompt = PromptTemplate(template=prompt_template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
```
### Expected behavior
### Scenario1
passing additional parameter `foo=123`
```
result = llm_chain.run({"question":"What is the weather in LA and SF"}, foo=123)
```
Following error is thrown
```
ValueError: `run` supported with either positional arguments or keyword arguments but not both. Got args: ({'question': 'What is the weather in LA and SF'},) and kwargs: {'foo': 123}.
```
### Scenario2
if we pass it as a dictionary - `{'foo': 123}`
```
result = llm_chain.run({"question":"What is the weather in LA and SF"}, {"foo":123})
```
Following error is thrown
```
ValueError: `run` supports only one positional argument.
```
### Scenario3
if we pass everything together
```
result = llm_chain.run({"question":"What is the weather in LA and SF", "foo":123})
```
The code works, but kwargs in CustomLLM._call is still empty. I guess the chain is silently ignoring the variables that are not part of the prompt template.
Is there any way to pass an additional parameter into the kwargs of the CustomLLM `_call` method without changing the prompt template?
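One workaround that avoids touching the prompt template: declare the extra value as a field on the LLM itself and read it from self inside `_call`. A sketch reusing the model, tokenizer and prompt from above:

```python
class CustomLLM(LLM):
    foo: int = 0  # extra parameter carried on the model instance

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(self, prompt, stop=None, **kwargs) -> str:
        print("foo:", self.foo)  # available without changing the prompt template
        inputs = tokenizer([prompt], return_tensors="pt")
        response = model.generate(**inputs, max_new_tokens=128)
        return tokenizer.decode(response[0])

llm = CustomLLM(foo=123)
llm_chain = LLMChain(prompt=prompt, llm=llm)
result = llm_chain.run({"question": "What is the weather in LA and SF"})
```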
| how to pass additional variables using kwargs to CustomLLM | https://api.github.com/repos/langchain-ai/langchain/issues/6638/comments | 5 | 2023-06-23T10:09:31Z | 2024-02-08T16:29:11Z | https://github.com/langchain-ai/langchain/issues/6638 | 1,771,189,299 | 6,638 |
[
"hwchase17",
"langchain"
]
| ### System Info
I recently switched to this agent because I started getting errors with `chat-conversational-react-description` (about it not being able to use multi-input tools). I've noticed that the agent often finishes the chain by telling the user it will run a search / use a tool, but it never actually does, because the chain has already finished.
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is how the agent is set up
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.agents import AgentType, initialize_agent
from agent_tools.comparables_tool import ComparablesTool
# from agent_tools.duck_search_tool import duck_search
from langchain.prompts import SystemMessagePromptTemplate, PromptTemplate
from agent_tools.python_repl_tool import PythonREPL
from token_counter import get_token_count
from langchain.prompts import MessagesPlaceholder
from langchain.memory import ConversationBufferMemory
tools = [PythonREPL(), ComparablesTool()]
chat_history = MessagesPlaceholder(variable_name="chat_history")
memory = ConversationBufferMemory(
    memory_key="chat_history", return_messages=True)

gpt = ChatOpenAI(
    temperature=0.2,
    model_name='gpt-3.5-turbo-16k',
    verbose=True
)

conversational_agent = initialize_agent(
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    tools=tools,
    llm=gpt,
    verbose=True,
    max_iterations=10,
    memory=memory,
    agent_kwargs={
        "memory_prompts": [chat_history],
        "input_variables": ["input", "agent_scratchpad", "chat_history"]
    }
)


async def get_response(user_message: str) -> str:
    return await conversational_agent.arun(user_message)
```
And this is what's on the terminal:
```python
FerAtTheFringe#1080 said: "Hey I need to find apartments in madrid with at least 3 rooms" (general)
> Entering new chain...
Sure! I can help you find apartments in Madrid with at least 3 rooms. Let me search for some options for you.
> Finished chain.
```
### Expected behavior
```python
FerAtTheFringe#1080 said: "Hey I need to find apartments in madrid with at least 3 rooms" (general)
> Entering new chain...
"action": "get_comparables",
"action_input": {
"latitude": "38.9921979",
"longitude": "-1.878099",
"rooms": "5",
"nresults": "10"
}
``` | STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION finishes chain BEFORE using a tool | https://api.github.com/repos/langchain-ai/langchain/issues/6637/comments | 14 | 2023-06-23T09:50:41Z | 2024-02-28T16:10:15Z | https://github.com/langchain-ai/langchain/issues/6637 | 1,771,161,521 | 6,637 |
[
"hwchase17",
"langchain"
]
| ### System Info
Ubuntu 20.04
Python 3.10
langchain 0.0.166
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This code is retrieved from the official website
https://python.langchain.com/docs/modules/memory/how_to/summary#initializing-with-messages
```python
from langchain.memory import ConversationSummaryMemory, ChatMessageHistory
from langchain.llms import OpenAI
from dotenv import load_dotenv
load_dotenv()
history = ChatMessageHistory()
history.add_user_message("hi")
history.add_ai_message("hi there!")
memory = ConversationSummaryMemory.from_messages(llm=OpenAI(temperature=0), chat_memory=history, return_messages=True)
```
The above code will throw out an exception
```
AttributeError: type object 'ConversationSummaryMemory' has no attribute 'from_messages'
```
I guess the class method has been deprecated?
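In case a workaround is useful until that class method is available here, a minimal sketch that builds the summary manually (assuming `predict_new_summary` exists in this version, which the summary memory normally uses internally):
```python
from langchain.memory import ConversationSummaryMemory, ChatMessageHistory
from langchain.llms import OpenAI

history = ChatMessageHistory()
history.add_user_message("hi")
history.add_ai_message("hi there!")

# Construct the memory directly, then summarize the pre-existing messages ourselves.
memory = ConversationSummaryMemory(llm=OpenAI(temperature=0), chat_memory=history, return_messages=True)
memory.buffer = memory.predict_new_summary(history.messages, memory.buffer)
```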
### Expected behavior
It passes. | 'ConversationSummaryMemory' has no attribute 'from_messages' | https://api.github.com/repos/langchain-ai/langchain/issues/6636/comments | 2 | 2023-06-23T08:37:14Z | 2023-09-29T16:05:58Z | https://github.com/langchain-ai/langchain/issues/6636 | 1,771,057,110 | 6,636 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.206
python 3.11.3
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code
```
tfretriever = TFIDFRetriever.from_texts(
["My name is Luis Valencia",
"I am 70 years old",
"I like gardening, baking and hockey"])
template = """
Use the following context (delimited by <ctx></ctx>) and the chat history (delimited by <hs></hs>) to answer the question:
------
<ctx>
{context}
</ctx>
------
<hs>
{chat_history}
</hs>
------
{question}
Answer:
"""
prompt = PromptTemplate(
input_variables=["chat_history", "context", "question"],
template=template,
)
st.session_state['chain'] = chain = ConversationalRetrievalChain.from_llm(
    llm,
    vectordb.as_retriever(),
    memory=memory,
    chain_type_kwargs={
        "verbose": True,
        "prompt": prompt,
        "memory": ConversationBufferMemory(
            memory_key="chat_history",
            input_key="question"),
    })
```
Error:
ValidationError: 1 validation error for ConversationalRetrievalChain chain_type_kwargs extra fields not permitted (type=value_error.extra)
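For reference, `chain_type_kwargs` is an argument of `RetrievalQA.from_chain_type`, not of `ConversationalRetrievalChain.from_llm`; the roughly equivalent hook here is `combine_docs_chain_kwargs`. A sketch of what may work instead (the prompt text is illustrative, and it only uses `context` and `question`, since the combine-docs step does not receive `chat_history` by default):
```python
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

qa_prompt = PromptTemplate(
    input_variables=["context", "question"],
    template="Use the following context to answer the question:\n{context}\n\nQuestion: {question}\nAnswer:",
)

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = ConversationalRetrievalChain.from_llm(
    llm,
    vectordb.as_retriever(),
    memory=memory,
    combine_docs_chain_kwargs={"prompt": qa_prompt, "verbose": True},
)
```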
### Expected behavior
I should be able to provide a custom prompt (and thus custom context) to my conversational retrieval chain. Without the custom prompt it works and gets good answers from the vector DB, but I can't use custom prompts. | ValidationError: 1 validation error for ConversationalRetrievalChain chain_type_kwargs extra fields not permitted (type=value_error.extra) | https://api.github.com/repos/langchain-ai/langchain/issues/6635/comments | 11 | 2023-06-23T08:13:12Z | 2023-11-03T04:33:18Z | https://github.com/langchain-ai/langchain/issues/6635 | 1,771,023,299 | 6,635 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Why does the LangChain JS version have a GitHub repo document loader, while this one can only load GitHub issues?
### Motivation
-
### Your contribution
- | Github issues instead of Github repo? | https://api.github.com/repos/langchain-ai/langchain/issues/6631/comments | 1 | 2023-06-23T06:21:35Z | 2023-09-29T16:06:04Z | https://github.com/langchain-ai/langchain/issues/6631 | 1,770,856,353 | 6,631 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain-0.0.209
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```python
import asyncio
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain


async def async_generate(chain):
    resp = await chain.arun(product="toothpaste")
    print(resp)


async def generate_concurrently():
    llm = OpenAI(temperature=0.9)
    prompt = PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    tasks = [async_generate(chain) for _ in range(3)]
    await asyncio.gather(*tasks)


asyncio.run(generate_concurrently())
```
There is no error and no answer until it times out; the only output is
Retrying langchain.llms.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds...
I know this message means the call is being retried, but nothing is ever returned and it never stops.
The same problem occurs in Jupyter; my code there is:
```python
import asyncio
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain


async def async_generate(chain):
    resp = await chain.arun(product="toothpaste")
    print(resp)


async def generate_concurrently():
    llm = OpenAI(temperature=0.9)
    prompt = PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    tasks = [async_generate(chain) for _ in range(3)]
    await asyncio.gather(*tasks)


await generate_concurrently()
```
### Expected behavior
The chain calls should return the generated answers. Instead, there is no error and no answer until timeout; the only output is
Retrying langchain.llms.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds...
I know this message means the call is being retried, but nothing is returned and it cannot be stopped. The same problem occurs in Jupyter. | Can't use arun acall, no return and can't stop | https://api.github.com/repos/langchain-ai/langchain/issues/6630/comments | 1 | 2023-06-23T06:18:54Z | 2023-09-29T16:06:09Z | https://github.com/langchain-ai/langchain/issues/6630 | 1,770,853,281 | 6,630 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
qa = ConversationalRetrievalChain.from_llm(AzureChatOpenAI(deployment_name="gpt-35-turbo"), db.as_retriever(), memory=memory)
print(qa.combine_docs_chain.llm_chain.prompt)
ChatPromptTemplate(input_variables=['question', 'context'], output_parser=None, partial_variables={}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context'], output_parser=None, partial_variables={}, template="Use the following pieces of context to answer the users question. \nIf you don't know the answer, just say that you don't know, don't try to make up an answer.\n----------------\n{context}", template_format='f-string', validate_template=True), additional_kwargs={}), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='{question}', template_format='f-string', validate_template=True), additional_kwargs={})])
How can I get the complete prompt, including the question and the context?
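One way that may work is to reconstruct the final messages yourself before calling the chain (a sketch — `question` stands for whatever you are about to ask, and it reuses `db` and `qa` from above; note the chain may internally rephrase the question using the chat history first):
```python
question = "your next question"
docs = db.as_retriever().get_relevant_documents(question)
context = "\n\n".join(doc.page_content for doc in docs)

# Fill the chat prompt template with the retrieved context and the question.
messages = qa.combine_docs_chain.llm_chain.prompt.format_messages(context=context, question=question)
for m in messages:
    print(m.type, ":", m.content)
```
Alternatively, setting the global `langchain.verbose = True` (or passing verbose flags when building the chain) should print the formatted prompts as the chain runs.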
### Suggestion:
_No response_ | Issue: How to print the complete prompt that chain used | https://api.github.com/repos/langchain-ai/langchain/issues/6628/comments | 12 | 2023-06-23T04:03:35Z | 2024-05-17T16:06:03Z | https://github.com/langchain-ai/langchain/issues/6628 | 1,770,741,393 | 6,628 |