issue_owner_repo (list, length 2) | issue_body (string, 0-261k chars, may be null) | issue_title (string, 1-925 chars) | issue_comments_url (string, 56-81 chars) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37-62 chars) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### Feature request
__A chain for planning using the [Planning Domain Definition Language](https://en.wikipedia.org/wiki/Planning_Domain_Definition_Language) and a dedicated solver__
As described in this [paper by Liu et al.](https://arxiv.org/abs/2304.11477), the LLM can generate the PDDL inputs required by a solver (they used [FastDownward](https://github.com/aibasel/downward), which has PDDL support, see [this info page](https://www.fast-downward.org/PddlSupport)). \
There is an implementation available from the authors, see [llm-pddl](https://github.com/Cranial-XIX/llm-pddl). However, I did not see any license information. \
It would be great to have this readily available as part of LangChain. \
I guess implementing this as a chain would make the most sense, since iterations for reformulating the problem might be necessary.
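A rough sketch of how such a chain might be wired up (everything here is illustrative: the prompt wording, the `plan` helper, and the FastDownward invocation are assumptions, not existing LangChain APIs):
```python
# Sketch: the LLM drafts a PDDL problem file, then an external solver searches for a plan.
import subprocess

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

pddl_prompt = PromptTemplate(
    input_variables=["domain", "task"],
    template=(
        "Write a PDDL problem file for the domain below.\n"
        "Domain:\n{domain}\n\nTask description:\n{task}\n\nPDDL problem file:"
    ),
)

def plan(llm, domain_pddl: str, task: str, domain_path: str, problem_path: str) -> str:
    """Generate a problem file with the LLM, then hand it to FastDownward."""
    problem_pddl = LLMChain(llm=llm, prompt=pddl_prompt).run(domain=domain_pddl, task=task)
    with open(problem_path, "w") as f:
        f.write(problem_pddl)
    # Assumes fast-downward.py is on PATH; see the FastDownward docs for search options.
    result = subprocess.run(
        ["fast-downward.py", domain_path, problem_path, "--search", "astar(lmcut())"],
        capture_output=True,
        text=True,
    )
    return result.stdout  # the plan, or solver errors to feed back for a reformulation pass
```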
### Motivation
LLMs are limited when it comes to optimal planning, and dedicated solvers require inputs that are tedious to create by hand - putting the pieces together just makes sense.
### Your contribution
I could create a PR, but this might take some time | Chain for planning using PDDL | https://api.github.com/repos/langchain-ai/langchain/issues/9119/comments | 2 | 2023-08-11T12:29:14Z | 2023-11-17T16:05:14Z | https://github.com/langchain-ai/langchain/issues/9119 | 1,846,740,678 | 9,119 |
[
"hwchase17",
"langchain"
]
| ### System Info
- langchain version 0.0.262
- Python version 3.10
### Who can help?
Users I've found through blame ([exact commit](https://github.com/langchain-ai/langchain/commit/1d649b127eb10c426f9b9a67cbd1fe6ec8e6befa)):
- @MassimilianoBiancucci
- @baskaryan
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just use the function or look into the source code ([this seems to be the exact commit](https://github.com/langchain-ai/langchain/commit/1d649b127eb10c426f9b9a67cbd1fe6ec8e6befa)).
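A minimal illustration of what the report describes (the response schema here is made up):
```python
from langchain.output_parsers import ResponseSchema, StructuredOutputParser

parser = StructuredOutputParser.from_response_schemas(
    [ResponseSchema(name="answer", description="answer to the user's question")]
)
instructions = parser.get_format_instructions(only_json=True)
# Per this report, the returned string contains the opening ``` fence and the JSON
# snippet, but not the closing ``` fence.
print(instructions)
```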
### Expected behavior
Either return the sole JSON or the whole markdown block (including the closing three backticks). | StructuredOutputParser.get_format_instructions with only_json=True doesn't return the closing backticks | https://api.github.com/repos/langchain-ai/langchain/issues/9118/comments | 1 | 2023-08-11T11:26:35Z | 2023-11-17T16:05:19Z | https://github.com/langchain-ai/langchain/issues/9118 | 1,846,659,504 | 9,118 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.261
python 3.10
------------------
ValidationError Traceback (most recent call last)
Cell In[17], line 1
----> 1 embeddings = BedrockEmbeddings(
2 credentials_profile_name="monisdas-ibm", region_name="us-east-1"
3 )
4 vectorstore = Chroma.from_documents(docs, embeddings)
File ~/Documents/moni/knowlege/unstructured/examples/chroma-news-of-the-day/.py310_unstrcutured/lib/python3.10/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for BedrockEmbeddings
__root__
Could not load credentials to authenticate with AWS client. Please check that credentials in the specified profile name are valid. (type=value_error)
-------------------
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Try to instantiate BedrockEmbeddings with a named AWS credentials profile.
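A minimal sketch of the failing call (profile and region copied from the traceback above):
```python
from langchain.embeddings import BedrockEmbeddings

# Raises the ValidationError above even when the profile exists in ~/.aws/credentials.
embeddings = BedrockEmbeddings(
    credentials_profile_name="monisdas-ibm",
    region_name="us-east-1",
)
```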
### Expected behavior
it should work. | BedrockEmbeddings can't load aws credential profile | https://api.github.com/repos/langchain-ai/langchain/issues/9117/comments | 10 | 2023-08-11T10:10:30Z | 2024-03-26T13:40:12Z | https://github.com/langchain-ai/langchain/issues/9117 | 1,846,559,683 | 9,117 |
[
"hwchase17",
"langchain"
]
| Hi,
Does the Azure OpenAI embeddings endpoint have a limit on the number of inputs per request?
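For reference, the traceback below ends with Azure's current cap of 16 inputs per embedding request. A hedged workaround is to cap the batch size LangChain sends per call via `chunk_size` (the deployment name here is a placeholder):
```python
from langchain.embeddings import OpenAIEmbeddings

# Send at most 16 texts per request so Azure's per-request input limit is respected.
embeddings = OpenAIEmbeddings(
    deployment="my-embedding-deployment",  # placeholder Azure deployment name
    chunk_size=16,
)
```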
Traceback (most recent call last):
File "D:\Corent\AI\LangChain\azure\azure_connection.py", line 45, in <module>
VectorStore = Milvus.from_texts(
^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\vectorstores\milvus.py", line 822, in from_texts
vector_db.add_texts(texts=texts, metadatas=metadatas)
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\vectorstores\milvus.py", line 422, in add_texts
embeddings = self.embedding_func.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\embeddings\openai.py", line 478, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\embeddings\openai.py", line 364, in _get_len_safe_embeddings
response = embed_with_retry(
^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\embeddings\openai.py", line 107, in embed_with_retry
return _embed_with_retry(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tenacity\__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "C:\Users\donbosco\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\donbosco\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\embeddings\openai.py", line 104, in _embed_with_retry
response = embeddings.client.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_resources\embedding.py", line 33, in create
response = super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions. | openai.error.InvalidRequestError: Too many inputs. | https://api.github.com/repos/langchain-ai/langchain/issues/9112/comments | 3 | 2023-08-11T08:33:13Z | 2023-12-28T16:06:57Z | https://github.com/langchain-ai/langchain/issues/9112 | 1,846,424,735 | 9,112 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I just tried the simple example with the latest version of langchain, but I hit a "list index out of range" error. When I change the chain_type in the RetrievalQA.from_chain_type function, the error messages are as follows:
map_rerank :
-----------------------------------------------------------------------------------------------------------------------------
File ~\anaconda3\envs\pytorch\Lib\site-packages\langchain\chains\combine_documents\map_rerank.py:194, in MapRerankDocumentsChain._process_results(self, docs, results)
190 typed_results = cast(List[dict], results)
191 sorted_res = sorted(
192 zip(typed_results, docs), key=lambda x: -int(x[0][self.rank_key])
193 )
--> 194 output, document = sorted_res[0]
195 extra_info = {}
196 if self.metadata_keys is not None:
IndexError: list index out of range
-----------------------------------------------------------------------------------------------------------------------------
refine:
-----------------------------------------------------------------------------------------------------------------------------
File ~\anaconda3\envs\pytorch\Lib\site-packages\langchain\chains\combine_documents\refine.py:203, in RefineDocumentsChain._construct_initial_inputs(self, docs, **kwargs)
200 def _construct_initial_inputs(
201 self, docs: List[Document], **kwargs: Any
202 ) -> Dict[str, Any]:
--> 203 base_info = {"page_content": docs[0].page_content}
204 base_info.update(docs[0].metadata)
205 document_info = {k: base_info[k] for k in self.document_prompt.input_variables}
IndexError: list index out of range
-----------------------------------------------------------------------------------------------------------------------------
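A hedged reading of both tracebacks, with a minimal offline reproduction (FakeListLLM is used only to avoid network calls): the map_rerank and refine chains index the first document (`sorted_res[0]` / `docs[0]`), so they fail whenever the retriever hands them an empty document list.
```python
from langchain.chains.question_answering import load_qa_chain
from langchain.llms.fake import FakeListLLM

# The crash happens before any LLM call: an empty input_documents list is enough.
chain = load_qa_chain(FakeListLLM(responses=["irrelevant"]), chain_type="refine")
chain.run(input_documents=[], question="What is this about?")  # IndexError: list index out of range
```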
Thanks for your help!
### Suggestion:
_No response_ | Issue: when using the map_rerank & refine, occur the " list index out of range" (already modify the llm.py file, but only map_reduce can work well) | https://api.github.com/repos/langchain-ai/langchain/issues/9111/comments | 6 | 2023-08-11T08:26:10Z | 2023-12-06T17:44:20Z | https://github.com/langchain-ai/langchain/issues/9111 | 1,846,415,172 | 9,111 |
[
"hwchase17",
"langchain"
]
| ### System Info
If, for some reason, the list of documents is empty, an exception is raised from llm.py.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Call `llm.prep_prompts(input_list=[])`; it raises an exception.
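A self-contained sketch of the failure (FakeListLLM keeps it offline; the cause appears to be that prep_prompts peeks at input_list[0]):
```python
from langchain.chains import LLMChain
from langchain.llms.fake import FakeListLLM
from langchain.prompts import PromptTemplate

# prep_prompts reads input_list[0] to look for a "stop" key, so an empty list raises.
chain = LLMChain(llm=FakeListLLM(responses=["x"]), prompt=PromptTemplate.from_template("{q}"))
chain.prep_prompts(input_list=[])  # IndexError: list index out of range
```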
### Expected behavior
The call should continue instead of raising.
See the pull [request #9109](https://github.com/langchain-ai/langchain/pull/9109) | LLM with empty list of document | https://api.github.com/repos/langchain-ai/langchain/issues/9110/comments | 1 | 2023-08-11T08:14:57Z | 2023-08-16T07:14:35Z | https://github.com/langchain-ai/langchain/issues/9110 | 1,846,399,416 | 9,110 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using Elasticsearch BM25 to fetch relevant documents. How can I add a parameter to tell the retriever to return only the first n matching docs?
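A hedged workaround sketch (assuming the stock retriever exposes no such parameter): query Elasticsearch directly and cap the hit count with the standard `size` parameter, wrapping the hits back into Documents yourself. The index and field names below are assumptions.
```python
from elasticsearch import Elasticsearch
from langchain.schema import Document

es = Elasticsearch("http://localhost:9200")

def bm25_top_n(index: str, query: str, n: int) -> list:
    # Plain BM25 match query; `size` caps the number of hits returned.
    res = es.search(index=index, query={"match": {"content": query}}, size=n)
    return [Document(page_content=hit["_source"]["content"]) for hit in res["hits"]["hits"]]
```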
### Suggestion:
_No response_ | Issue: Elasticsearch BM25 | https://api.github.com/repos/langchain-ai/langchain/issues/9103/comments | 3 | 2023-08-11T07:41:41Z | 2024-02-12T16:16:04Z | https://github.com/langchain-ai/langchain/issues/9103 | 1,846,353,415 | 9,103 |
[
"hwchase17",
"langchain"
]
| ### System Info
Code snapshot:

Data in Azure Cognitive Search Vector Store:
```
data = {
    "id": " ",
    "content": "",
    "content_vector": [],
    "metadata": "{}",
    "a": "",
    "b": "",
    "c": 20.4,
    "d": ""
}
```
Issue: occurs with langchain==0.0.261, but works fine with langchain==0.0.242.
Issue Description : SerializationError: (', DeserializationError: (", AttributeError: \'float\' object has no attribute \'lower\'", \'Unable to deserialize to object: type\', AttributeError("\'float\' object has no attribute \'lower\'"))', 'Unable to build a model: (", AttributeError: \'float\' object has no attribute \'lower\'", \'Unable to deserialize to object: type\', AttributeError("\'float\' object has no attribute \'lower\'"))', DeserializationError(", AttributeError: 'float' object has no attribute 'lower'", 'Unable to deserialize to object: type', AttributeError("'float' object has no attribute 'lower'")))
Note: In the upgraded version the response time also increases.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
def __llm_client(self, streaming: bool = False):
    # LLM connection
    if streaming is False:
        return AzureOpenAI(
            deployment_name=AZURE_OPENAI_LLM_MODEL_DEPLOYMENT_NAME,
            model=AZURE_OPENAI_LLM_MODEL_NAME,
            openai_api_type=self.__azure_openAI_type,
            openai_api_version=self.__azure_openAI_version,
            openai_api_base=self.__azure_openAI_base,
            openai_api_key=self.__azure_openAI_key,
            temperature=0.0,
            max_tokens=2000,
        )
    return AzureOpenAI(
        deployment_name=AZURE_OPENAI_LLM_MODEL_DEPLOYMENT_NAME,
        model=AZURE_OPENAI_LLM_MODEL_NAME,
        openai_api_type=self.__azure_openAI_type,
        openai_api_version=self.__azure_openAI_version,
        openai_api_base=self.__azure_openAI_base,
        openai_api_key=self.__azure_openAI_key,
        temperature=0.0,
        max_tokens=2000,
        streaming=True,
        callbacks=[StreamingStdOutCallbackHandler()],
    )

# LLM Embeddings Client
def __embeddings_client(self):
    return OpenAIEmbeddings(
        model=AZURE_OPENAI_LLM_EMBEDDING_NAME,
        deployment=AZURE_OPENAI_LLM_EMBEDDING_DEPLOYMENT_NAME,
        openai_api_type=self.__azure_openAI_type,
        openai_api_version=self.__azure_openAI_version,
        openai_api_base=self.__azure_openAI_base,
        openai_api_key=self.__azure_openAI_key,
        chunk_size=1536,
    )

# Embedding vector store client
def __vector_store_client(self):
    acs_vector_store: AzureSearch = AzureSearch(
        azure_search_endpoint=self.__acs_endpoint,
        azure_search_key=self.__acs_key,
        index_name=self.__acs_index_name,
        embedding_function=self.__embeddings_client().embed_query,
    )
    return acs_vector_store

# Langchain Chain Client
def __chain_client(self):
    chain_type = "stuff"
    return load_qa_chain(
        llm=self.__llm_client(streaming=True),
        chain_type=chain_type,
        document_variable_name='context',
        prompt=QA_PROMPT,
        # verbose=True
    )

def __conversational_retrieval_chain_client(self):
    # print(self.__vector_store_client().as_retriever())
    return ConversationalRetrievalChain(
        retriever=self.__vector_store_client().as_retriever(),
        question_generator=self.__question_generator(),
        combine_docs_chain=self.__chain_client(),
        memory=ConversationBufferMemory(
            memory_key="chat_history",
            return_messages=False,
        ),
    )

def __question_generator(self):
    return LLMChain(
        llm=self.__llm_client(),
        prompt=CONDENSE_QUESTION_PROMPT,
    )

# Main function
def smart_chat_bot(self, query: str = "*", conversation: list = []):
    self.user_input = query
    print(f"Human Input: {self.user_input}", end="\n")
    result = self.__conversational_retrieval_chain_client()({"question": self.user_input, "chat_history": conversation})
    return result
```
### Expected behavior
It should return a response based on the user input and the documents retrieved from the vector store, and latency should be lower. | ConversationalRetrievalChain having trouble with vector store retriever 'as_retriever()' | https://api.github.com/repos/langchain-ai/langchain/issues/9101/comments | 4 | 2023-08-10T17:37:03Z | 2024-02-19T18:19:46Z | https://github.com/langchain-ai/langchain/issues/9101 | 1,845,348,024 | 9,101 |
[
"hwchase17",
"langchain"
]
| https://github.com/langchain-ai/langchain/blob/991b448dfce7fa326e70774b5f38c9576f1f304c/libs/langchain/langchain/chains/base.py#L327C25-L327C25
IMPORTANT: chain.acall is an async function, yet Chain.prep_inputs invokes memory.load_memory_variables synchronously. In ConversationBufferWindowMemory.load_memory_variables:
```python
buffer: Any = self.buffer[-self.k * 2 :] if self.k > 0 else []
```
```python
@property
def buffer(self) -> List[BaseMessage]:
"""String buffer of memory."""
return self.chat_memory.messages
```
if chat_memory is type of RedisChatMessageHistory.
```python
@property
def messages(self) -> List[BaseMessage]: # type: ignore
"""Retrieve the messages from Redis"""
_items = self.redis_client.lrange(self.key, 0, -1)
items = [json.loads(m.decode("utf-8")) for m in _items[::-1]]
messages = messages_from_dict(items)
return messages
```
the lrange call is a synchronous (blocking) invocation.
IMPORTANT: chain.acall is an async function. In FastAPI or similar settings, a chain.acall will be blocked by this synchronous call. Can you change the invocation in Chain.prep_inputs to:
```python
inputs = await asyncify(self.prep_inputs)(inputs)
``` | sync problem | https://api.github.com/repos/langchain-ai/langchain/issues/9100/comments | 4 | 2023-08-11T07:33:15Z | 2024-02-11T16:17:07Z | https://github.com/langchain-ai/langchain/issues/9100 | 1,846,343,465 | 9,100 |
[
"hwchase17",
"langchain"
]
| ### Feature request
To import a large list of documents into a vector store, you must load all the documents into memory, split them, and then import the whole list.
This consumes a lot of memory just for the import. Sometimes the number of documents is too big to fit in memory.
I propose generalizing the `lazy_...()` methods with generators. Then only one document at a time needs to be in memory.
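A rough sketch of the idea (assuming a loader that exposes lazy_load(), any text splitter, and any vector store; the batch size is arbitrary):
```python
from itertools import islice

def lazy_index(loader, splitter, vectorstore, batch_size: int = 64) -> None:
    # Stream documents from the loader, split them, and add them in small batches,
    # so only one batch is ever held in memory.
    chunks = (chunk for doc in loader.lazy_load() for chunk in splitter.split_documents([doc]))
    while batch := list(islice(chunks, batch_size)):
        vectorstore.add_documents(batch)
```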
### Motivation
Reduce the memory footprint.
Allow importing a large list of documents in a single loop.
### Your contribution
- Small contribution. I use a lazy approach in my pull request to load Google documents.
[
"hwchase17",
"langchain"
]
| I'm just wondering: can I stream a Bedrock model like Claude V2 through LangChain?
I ask because I don't see streaming support in the Bedrock code.
If it is possible, can you give me an example of using streaming with Bedrock? | Issue: Bedrock Doesn't support streaming | https://api.github.com/repos/langchain-ai/langchain/issues/9094/comments | 8 | 2023-08-11T05:34:22Z | 2024-05-14T16:06:10Z | https://github.com/langchain-ai/langchain/issues/9094 | 1,846,221,991 | 9,094 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.261
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When running the OpenAI Multi Functions Agent, **enums** in custom structured tools are not being used. Using the enums for data validation works fine, but the LLM does not use enums to produce enumerated parameter values. For reference, OpenAI's Function Calling takes the JSON schema for a function, which can pass the JSON schema enums to the LLM to be used when producing parameter values.
1. Create a custom structured tool with an enum input
2. Give tool to agent
3. Run a query that uses the tool
4. Agent may hallucinate an invalid enum
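For illustration, step 1 might look like this minimal structured tool with an enum-typed argument (the tool name and enum values are made up):
```python
from enum import Enum

from langchain.tools import StructuredTool
from pydantic import BaseModel, Field

class Color(str, Enum):
    RED = "red"
    GREEN = "green"
    BLUE = "blue"

class PaintInput(BaseModel):
    color: Color = Field(description="Colour to paint the widget")

def paint(color: Color) -> str:
    return f"painted {color.value}"

paint_tool = StructuredTool.from_function(
    func=paint,
    name="paint_widget",
    description="Paint the widget a given colour",
    args_schema=PaintInput,
)
# Pydantic validates `color` against the enum, but the allowed values are not
# surfaced to the model, so the agent may hallucinate e.g. "purple".
```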
### Expected behavior
The expected behavior is two-fold:
1. Provide Python type annotations in BaseModel arg_schemas to the LLM instead of just the Field() object
2. Correct the error similar to the `StructuredChatOutputParserWithRetries` | Enums not used with OpenAI Multi Function Agent | https://api.github.com/repos/langchain-ai/langchain/issues/9092/comments | 2 | 2023-08-11T05:07:36Z | 2023-11-17T16:05:25Z | https://github.com/langchain-ai/langchain/issues/9092 | 1,846,203,093 | 9,092 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Can the atomicity of multiple write operations be guaranteed through the Agent?
### Motivation
Suppose two actions need to be executed within an agent and each action modifies the state of an external tool or service. If the first write succeeds and the second write fails, how can the agent revoke the first write so that the pair is atomic - "all or nothing"?
### Your contribution
Does the default agent runtime of langchain support this? If not, could we support it through one of the following ideas:
1: using a langchain callback?
2: inspired by BabyAGI feature: generate and pretend to execute tasks based on a given objective. I think the pretend operation can meet this requirement. Is this correct? | Can the atomicity of multiple write operations be guaranteed through the Agent | https://api.github.com/repos/langchain-ai/langchain/issues/9090/comments | 2 | 2023-08-11T03:35:52Z | 2023-11-17T16:05:30Z | https://github.com/langchain-ai/langchain/issues/9090 | 1,846,147,524 | 9,090 |
[
"hwchase17",
"langchain"
]
| ### System Info
```py
In [21]: langchain.__version__
Out[21]: '0.0.223'
In [24]: sys.platform
Out[24]: 'win32'
In [25]: !python -V
Python 3.10.10
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [x] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
In [18]: from langchain.embeddings.huggingface import HuggingFaceEmbeddings
In [19]: model = HuggingFaceEmbeddings(model_name='d:/src/chatglm2-6b-int4/')
No sentence-transformers model found with name d:/src/chatglm2-6b-int4/. Creating a new one with MEAN pooling.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[19], line 1
----> 1 model = HuggingFaceEmbeddings(model_name='d:/src/chatglm2-6b-int4/')
File D:\Python310\lib\site-packages\langchain\embeddings\huggingface.py:59, in HuggingFaceEmbeddings.__init__(self, **kwargs)
53 except ImportError as exc:
54 raise ImportError(
55 "Could not import sentence_transformers python package. "
56 "Please install it with `pip install sentence_transformers`."
57 ) from exc
---> 59 self.client = sentence_transformers.SentenceTransformer(
60 self.model_name, cache_folder=self.cache_folder, **self.model_kwargs
61 )
File D:\Python310\lib\site-packages\sentence_transformers\SentenceTransformer.py:97, in SentenceTransformer.__init__(self, model_name_or_path, modules, device, cache_folder, use_auth_token)
95 modules = self._load_sbert_model(model_path)
96 else: #Load with AutoModel
---> 97 modules = self._load_auto_model(model_path)
99 if modules is not None and not isinstance(modules, OrderedDict):
100 modules = OrderedDict([(str(idx), module) for idx, module in enumerate(modules)])
File D:\Python310\lib\site-packages\sentence_transformers\SentenceTransformer.py:806, in SentenceTransformer._load_auto_model(self, model_name_or_path)
802 """
803 Creates a simple Transformer + Mean Pooling model and returns the modules
804 """
805 logger.warning("No sentence-transformers model found with name {}. Creating a new one with MEAN pooling.".format(model_name_or_path))
--> 806 transformer_model = Transformer(model_name_or_path)
807 pooling_model = Pooling(transformer_model.get_word_embedding_dimension(), 'mean')
808 return [transformer_model, pooling_model]
File D:\Python310\lib\site-packages\sentence_transformers\models\Transformer.py:28, in Transformer.__init__(self, model_name_or_path, max_seq_length, model_args, cache_dir, tokenizer_args, do_lower_case, tokenizer_name_or_path)
25 self.config_keys = ['max_seq_length', 'do_lower_case']
26 self.do_lower_case = do_lower_case
---> 28 config = AutoConfig.from_pretrained(model_name_or_path, **model_args, cache_dir=cache_dir)
29 self._load_model(model_name_or_path, config, cache_dir)
31 self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name_or_path if tokenizer_name_or_path is not None else model_name_or_path, cache_dir=cache_dir, **tokenizer_args)
File D:\Python310\lib\site-packages\transformers\models\auto\configuration_auto.py:986, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
984 has_remote_code = "auto_map" in config_dict and "AutoConfig" in config_dict["auto_map"]
985 has_local_code = "model_type" in config_dict and config_dict["model_type"] in CONFIG_MAPPING
--> 986 trust_remote_code = resolve_trust_remote_code(
987 trust_remote_code, pretrained_model_name_or_path, has_local_code, has_remote_code
988 )
990 if has_remote_code and trust_remote_code:
991 class_ref = config_dict["auto_map"]["AutoConfig"]
File D:\Python310\lib\site-packages\transformers\dynamic_module_utils.py:535, in resolve_trust_remote_code(trust_remote_code, model_name, has_local_code, has_remote_code)
533 trust_remote_code = False
534 elif has_remote_code and TIME_OUT_REMOTE_CODE > 0:
--> 535 signal.signal(signal.SIGALRM, _raise_timeout_error)
536 signal.alarm(TIME_OUT_REMOTE_CODE)
537 while trust_remote_code is None:
AttributeError: module 'signal' has no attribute 'SIGALRM'
```
### Expected behavior
load successfully
This is a signal not supported in windows, so it's shouldn't be depended. | [BUG] module 'signal' has no attribute 'SIGALRM | https://api.github.com/repos/langchain-ai/langchain/issues/9089/comments | 2 | 2023-08-11T02:54:44Z | 2023-08-11T16:04:49Z | https://github.com/langchain-ai/langchain/issues/9089 | 1,846,125,383 | 9,089 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Sometimes, GPT might not accurately portray characters or tools, so before I ask GPT a question, I need to come up with a set of prompts.
However, in most cases, these prompts may not capture the characteristics of the characters or tools effectively. To address this, I've developed a pip package that collects various prompts. How can I make Chain automatically use my package when it needs to embody characters (as a tool)? Or how can I optimize the pip package to better align with LangChain's requirements?
This is the address of my repository:[https://github.com/limaoyi1/GPT-prompt](https://github.com/limaoyi1/GPT-prompt)
### Motivation
Sometimes, GPT might not accurately portray characters or tools, so before I ask GPT a question, I need to come up with a set of prompts.
### Your contribution
Modify my pip package or submit pr | I would like GPT to quickly and accurately embody any characters or tools. | https://api.github.com/repos/langchain-ai/langchain/issues/9088/comments | 1 | 2023-08-11T02:39:07Z | 2023-11-17T16:05:35Z | https://github.com/langchain-ai/langchain/issues/9088 | 1,846,116,995 | 9,088 |
[
"hwchase17",
"langchain"
]
| ### System Info
"langchain": "^0.0.125"
Windows 11 x64
### Who can help?
@nfcampos
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
When trying to retrieve token usage from LangChain, it returns a empty object. There's already an issue open about that.
However, it gets slightly worse: Apparently, this breaks other components too, such as the `ConversationSummaryBufferMemory` which relies on Token count.
Longer explanation and code on this other issue comment: https://github.com/langchain-ai/langchain/issues/2359#issuecomment-1673926888
### Expected behavior
Completion tokens among others are expected to be returned. Maybe fall back to tiktoken ([py](https://pypi.org/project/tiktoken/)/[js](https://www.npmjs.com/package/@dqbd/tiktoken)) or similar when token count is not available? Just an idea. | ConversationSummaryBufferMemory broken due to missing token usage | https://api.github.com/repos/langchain-ai/langchain/issues/9083/comments | 2 | 2023-08-10T21:57:40Z | 2023-08-11T13:18:49Z | https://github.com/langchain-ai/langchain/issues/9083 | 1,845,949,455 | 9,083 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I think the `ResponseSchema` and `StructuredOutputParser` functionality is really convenient, but unfortunately it often fails in my use case because the model outputs slightly malformed JSON, usually due to extra or missing quotes.
Any suggestions on how to solve this?
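One option worth trying (a sketch; whether it helps depends on how badly the JSON is malformed) is to wrap the parser in OutputFixingParser, which re-prompts the LLM with the parse error and asks it to repair the output:
```python
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import OutputFixingParser, ResponseSchema, StructuredOutputParser

base_parser = StructuredOutputParser.from_response_schemas(
    [ResponseSchema(name="answer", description="answer to the question")]
)
fixing_parser = OutputFixingParser.from_llm(parser=base_parser, llm=ChatOpenAI(temperature=0))

# A missing closing quote makes the base parser raise; the fixing parser then asks
# the model to correct the output and retries the parse.
result = fixing_parser.parse('```json\n{"answer": "42}\n```')
```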
### Suggestion:
Thinking about writing some extra logic to fix the missing/extra quotes issue, but not sure if anyone has a better solution | Issue: StructuredOutputParser failures due to invalid JSON | https://api.github.com/repos/langchain-ai/langchain/issues/9082/comments | 2 | 2023-08-10T21:38:22Z | 2023-11-17T16:05:39Z | https://github.com/langchain-ai/langchain/issues/9082 | 1,845,932,723 | 9,082 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am using langchain in a different language, but I cannot change the following sentence as it is hard-coded into the Agent class
```
thoughts += (
"\n\nI now need to return a final answer based on the previous steps:"
)
```
https://github.com/langchain-ai/langchain/blob/a5a4c53280b4dae8ea2e09430fed88e0cd4e03d2/libs/langchain/langchain/agents/agent.py#L588
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
-
### Expected behavior
- | Agents final thought as parameter | https://api.github.com/repos/langchain-ai/langchain/issues/9072/comments | 2 | 2023-08-10T20:06:59Z | 2023-11-11T21:20:04Z | https://github.com/langchain-ai/langchain/issues/9072 | 1,845,823,677 | 9,072 |
[
"hwchase17",
"langchain"
]
| https://github.com/langchain-ai/langchain/blob/a5a4c53280b4dae8ea2e09430fed88e0cd4e03d2/libs/langchain/langchain/chains/loading.py#L376
Work was done earlier this year to move all experimental and vulnerable code over to the experimental project. SQLDatabaseChain had (and still has) an open CVE against it so it was moved to the experimental project. However the experimental version of SQLDatabaseChain is being referenced in the production project on the line above. Our organization would like to start using LangChain but we cannot due to the open High severity CVE. Is it possible to either patch the vulnerability in code or complete the move of SQLDatabaseChain from the production project to the experimental? | Referencing experimental version of SQLDatabaseChain from the production project | https://api.github.com/repos/langchain-ai/langchain/issues/9071/comments | 2 | 2023-08-10T20:04:27Z | 2023-10-27T19:19:07Z | https://github.com/langchain-ai/langchain/issues/9071 | 1,845,820,486 | 9,071 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am creating a chatbot that records each user's conversation history as vectors in my vector store, and I identify the owner of the vectors through metadata.
How can I add this metadata to the vectors if I'm using the vector store as a retriever for the memory in my ConversationChain?
This is my code:
```
import os
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import VectorStoreRetrieverMemory
from langchain.vectorstores.pgvector import PGVector
from langchain.embeddings.openai import OpenAIEmbeddings

exit_conditions = ("q", "quit", "exit")
metadata = {"id": "John Doe", "key": 123}

llm = OpenAI(openai_api_key=os.getenv("OPENAI_API_KEY"), temperature=0)
store = PGVector(
    collection_name="chatbot_embeddings",
    connection_string=os.getenv("POSTGRES_CONNECTION_STRING"),
    embedding_function=OpenAIEmbeddings(),
    collection_metadata=metadata,
)

while True:
    query = input("> ")
    if query in exit_conditions:
        break
    conversation_with_summary = ConversationChain(
        llm=llm,
        memory=VectorStoreRetrieverMemory(retriever=store.as_retriever()),
        verbose=True,
        metadata=metadata,
    )
    print(conversation_with_summary.predict(input=query))
```
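A hedged sketch of one way to scope memory per user, reusing the `store` object above: write each turn yourself with per-document metadata, and give the memory a retriever filtered on that metadata (the field names are just examples, and this bypasses the memory's built-in save step):
```python
# Store a turn with per-document metadata identifying the user.
store.add_texts(
    ["input: hello\nresponse: hi there"],
    metadatas=[{"user_id": "John Doe", "key": 123}],
)

# Retrieve only this user's vectors; PGVector's similarity search accepts a metadata filter.
retriever = store.as_retriever(search_kwargs={"k": 4, "filter": {"user_id": "John Doe"}})
memory = VectorStoreRetrieverMemory(retriever=retriever)
```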
### Suggestion:
_No response_ | Issue: I want to attach metadata in my PGVector vector store used as retriever for my ConversationChain memory | https://api.github.com/repos/langchain-ai/langchain/issues/9067/comments | 7 | 2023-08-10T18:33:16Z | 2024-07-18T19:33:13Z | https://github.com/langchain-ai/langchain/issues/9067 | 1,845,693,256 | 9,067 |
[
"hwchase17",
"langchain"
]
| When I connect Azure OpenAI embeddings with Milvus, the exception below happens:
Traceback (most recent call last):
File "D:\Corent\AI\LangChain\azure\azure_connection.py", line 29, in <module>
VectorStore = Milvus.from_texts(
^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\vectorstores\milvus.py", line 822, in from_texts
vector_db.add_texts(texts=texts, metadatas=metadatas)
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\vectorstores\milvus.py", line 422, in add_texts
embeddings = self.embedding_func.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\embeddings\openai.py", line 478, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\embeddings\openai.py", line 341, in _get_len_safe_embeddings
token = encoding.encode(
^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tiktoken\core.py", line 116, in encode
if match := _special_token_regex(disallowed_special).search(text):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: expected string or buffer | TypeError: expected string or buffer | https://api.github.com/repos/langchain-ai/langchain/issues/9057/comments | 3 | 2023-08-10T17:04:41Z | 2024-02-20T02:21:35Z | https://github.com/langchain-ai/langchain/issues/9057 | 1,845,568,042 | 9,057 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.247
Python 3.10.11
### Who can help?
@chase
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I wanted to adapt the original combine_docs_chain with some customization where a user input (e.g. 'you act like an HR chatbot') would be added to the original prompt. The workaround I found was to rewrite the original prompt files to take in a user input, which works. However, I'm having trouble making the ConversationalRetrievalChain work.
1. Adapt the original templates from the source files and load them in:
I wanted to add the possibility to add some user input to the pre-made prompts from langchain.
So I duplicated the original files and adapted the text of the prompts somewhat
```
from docs.stuff_prompt import create_stuff_prompt_selector
STUFF_PROMPT_SELECTOR = create_stuff_prompt_selector(ui_input= ui_input) #adds the ui_input to the prompt
stuff_prompt = STUFF_PROMPT_SELECTOR.get_prompt(llm) #based on LLM model, it will return either the regular PromptTemplate version or the chat (ChatPromptTemplate). Same working principle as in the source files
combine_docs_chain = load_qa_chain(llm = llm,
chain_type = 'stuff',
prompt = stuff_prompt
) #create a custom combine_docs_chain
```
2. Create the ConversationalRetrievalChain.from_llm() object with the custom combine_docs_chain
Simply create the object. The first time it might work, but the second time it won't.
```
from langchain.chains import ConversationalRetrievalChain
qa = ConversationalRetrievalChain.from_llm(
llm,
retriever=vectordb.as_retriever(),
return_source_documents = True,
memory=memory,
verbose = True,
combine_docs_chain = combine_docs_chain
)
```
3. This yields an error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[25], line 2
1 from langchain.chains import ConversationalRetrievalChain
----> 2 qa = ConversationalRetrievalChain.from_llm(
3 llm,
4 retriever=vectordb.as_retriever(),
5 return_source_documents = True, # when return_source_documents is True, a workaround needs to be found https://github.com/langchain-ai/langchain/issues/2303
6 memory=memory,
7 #, get_chat_history=lambda h : h
8 verbose = True,
9 combine_docs_chain = combine_docs_chain
10 #question_generator
11 )
File [c:\Users\XXX\Anaconda3\envs\langchain-env\lib\site-packages\langchain\chains\conversational_retrieval\base.py:357](file:///C:/Users/XXX/Anaconda3/envs/langchain-env/lib/site-packages/langchain/chains/conversational_retrieval/base.py:357), in ConversationalRetrievalChain.from_llm(cls, llm, retriever, condense_question_prompt, chain_type, verbose, condense_question_llm, combine_docs_chain_kwargs, callbacks, **kwargs)
350 _llm = condense_question_llm or llm
351 condense_question_chain = LLMChain(
352 llm=_llm,
353 prompt=condense_question_prompt,
354 verbose=verbose,
355 callbacks=callbacks,
356 )
--> 357 return cls(
358 retriever=retriever,
359 combine_docs_chain=doc_chain,
360 question_generator=condense_question_chain,
361 callbacks=callbacks,
362 **kwargs,
363 )
TypeError: langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain() got multiple values for keyword argument 'combine_docs_chain'
```
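For what it's worth, a hedged reading of the error: `from_llm()` already constructs a combine_docs_chain internally (configurable through `combine_docs_chain_kwargs`), so also passing one via `**kwargs` produces the duplicate keyword. Two sketches of ways around it, using the objects defined above:
```python
# Option 1: let from_llm build the stuff chain, but hand it your custom prompt.
qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=vectordb.as_retriever(),
    memory=memory,
    return_source_documents=True,
    combine_docs_chain_kwargs={"prompt": stuff_prompt},
)

# Option 2: construct the chain directly with your own combine_docs_chain.
from langchain.chains import LLMChain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT

qa = ConversationalRetrievalChain(
    retriever=vectordb.as_retriever(),
    combine_docs_chain=combine_docs_chain,
    question_generator=LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT),
    memory=memory,
    return_source_documents=True,
)
```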
### Expected behavior
It is expected to take the combine_docs_chain I made and initialize the ConversationalRetrievalChain with it. Is this a caching error, where the ConversationalRetrievalChain cannot be changed after it has been initialized once? | ConversationalRetrievalChain having trouble with custom 'combine_docs_chain' | https://api.github.com/repos/langchain-ai/langchain/issues/9052/comments | 2 | 2023-08-10T15:45:03Z | 2023-11-16T16:05:16Z | https://github.com/langchain-ai/langchain/issues/9052 | 1,845,452,324 | 9,052 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain: v0.0.260
Python: v3.11
### Who can help?
@hwchase17 @eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Sometimes the `Metaphor` tool returns results without a publishedDate, which makes the tool wrapper break because it does not guard against missing values:
```
langchain/utilities/metaphor_search.py", line 169, in _clean_results
"published_date": result["publishedDate"],
KeyError: 'publishedDate'
```
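The obvious guard (a sketch of what the offending line in `_clean_results` could do instead) is a `.get()` with a default:
```python
# Tolerate results that omit publishedDate instead of raising KeyError.
"published_date": result.get("publishedDate", ""),
```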
### Expected behavior
Preferably the tool should guard against missing keys in the API response. | Metaphor Search throws key error when `publishedDate` is missing | https://api.github.com/repos/langchain-ai/langchain/issues/9048/comments | 5 | 2023-08-10T15:22:58Z | 2023-11-19T16:05:16Z | https://github.com/langchain-ai/langchain/issues/9048 | 1,845,415,438 | 9,048 |
[
"hwchase17",
"langchain"
]
| ### Feature request
If I know exactly what paper(s) I'd like to load, it would be nice to be able to retrieve them by their arxiv IDs. The arxiv library seems to support this:
```
import arxiv
paper = next(arxiv.Search(id_list=["1605.08386v1"]).results())
```
### Motivation
Sometimes searching by exact paper title still returns the incorrect paper. Searching by ID should be more fool-proof.
### Your contribution
🤷🏻♂️ | ArxivLoader support searching by arxiv id_list | https://api.github.com/repos/langchain-ai/langchain/issues/9047/comments | 3 | 2023-08-10T15:21:01Z | 2023-11-22T16:06:59Z | https://github.com/langchain-ai/langchain/issues/9047 | 1,845,411,321 | 9,047 |
[
"hwchase17",
"langchain"
]
| ### System Info
Latest pip versions
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried searching by exact title in the following way:
```python
docs = ArxivLoader(query="MetaGPT: Meta Programming for Multi-Agent Collaborative Framework", load_max_docs=1).load()
```
But the result is incorrect. The search works properly on the arxiv website.
### Expected behavior
Correct paper returned | ArxivLoader incorrect results | https://api.github.com/repos/langchain-ai/langchain/issues/9046/comments | 4 | 2023-08-10T15:18:24Z | 2023-08-10T18:59:41Z | https://github.com/langchain-ai/langchain/issues/9046 | 1,845,405,467 | 9,046 |
[
"hwchase17",
"langchain"
]
| ### System Info
I have checked the issue tracker for similar issues but did not find any. Therefore, I am creating a new one.
I was trying to load a sample document using *UnstructuredFileLoader* from *langchain.document_loaders*. I was using T4 GPU on Google Colab.
I think the issue might be related to some recent upgrades and maybe the solution is trivial. However, I am in a bit of a hurry as I have to submit my project report to my professor :disappointed:
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
loader = UnstructuredFileLoader("/content/FAQ-ShelfLifeItem.docx")
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=50)
docs = text_splitter.split_documents(documents)
hf_embedding = HuggingFaceInstructEmbeddings()
```
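A hedged guess at the cause: `unstructured` only defines `partition_docx` when its DOCX dependencies import cleanly, so the `NameError` usually points to a missing package in the environment rather than a problem in the loader call. A quick check (assuming `python-docx` is the missing dependency):
```python
# If this import fails, unstructured cannot define partition_docx either.
# Installing the package (pip install python-docx) typically resolves the NameError.
import docx
```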
### Expected behavior
I would expect no errors like:
```
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
[<ipython-input-14-79f2c22bc34d>](https://localhost:8080/#) in <cell line: 2>()
1 loader = UnstructuredFileLoader("/content/FAQ-ShelfLifeItem.docx")
----> 2 documents = loader.load()
3 text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=50)
4 docs = text_splitter.split_documents(documents)
5
2 frames
[/usr/local/lib/python3.10/dist-packages/unstructured/partition/auto.py](https://localhost:8080/#) in partition(filename, content_type, file, file_filename, url, include_page_breaks, strategy, encoding, paragraph_grouper, headers, skip_infer_table_types, ssl_verify, ocr_languages, pdf_infer_table_structure, xml_keep_tags, data_source_metadata, **kwargs)
170 **kwargs,
171 )
--> 172 elif filetype == FileType.MD:
173 elements = partition_md(
174 filename=filename,
NameError: name 'partition_docx' is not defined
``` | NameError: name 'partition_docx' is not defined | https://api.github.com/repos/langchain-ai/langchain/issues/9039/comments | 2 | 2023-08-10T13:19:48Z | 2023-08-10T15:19:12Z | https://github.com/langchain-ai/langchain/issues/9039 | 1,845,178,321 | 9,039 |
[
"hwchase17",
"langchain"
]
| Traceback (most recent call last):
File "D:\Corent\AI\LangChain\azure\azure_connection.py", line 14, in <module>
print(llm("Tell me joke"))
^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\llms\base.py", line 802, in __call__
self.generate(
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\llms\base.py", line 598, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\llms\base.py", line 504, in _generate_helper
raise e
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\llms\base.py", line 491, in _generate_helper
self._generate(
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\llms\openai.py", line 384, in _generate
response = completion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\llms\openai.py", line 116, in completion_with_retry
return _completion_with_retry(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tenacity\__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "C:\Users\donbosco\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\donbosco\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\llms\openai.py", line 114, in _completion_with_retry
return llm.client.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_resources\completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: The completion operation does not work with the specified model, gpt-35-turbo-16k. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993. | openai.error.InvalidRequestError: The completion operation does not work with the specified model, gpt-35-turbo-16k. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993. | https://api.github.com/repos/langchain-ai/langchain/issues/9038/comments | 4 | 2023-08-10T12:42:46Z | 2024-03-11T13:09:44Z | https://github.com/langchain-ai/langchain/issues/9038 | 1,845,112,993 | 9,038 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise
### Suggestion:
_No response_ | Streaming ConversationalRetrievalQaChain | https://api.github.com/repos/langchain-ai/langchain/issues/9036/comments | 0 | 2023-08-10T10:51:38Z | 2023-08-10T10:52:42Z | https://github.com/langchain-ai/langchain/issues/9036 | 1,844,935,577 | 9,036 |
[
"hwchase17",
"langchain"
]
| ### Feature request
What is the exact difference between the two graph based chains in langchain 1 )GraphCypherQAChain and 2)GraphQAChain. What are the pros and cons of each, and when to use one over another?
### Motivation
What is the exact difference between the two graph based chains in langchain 1 )GraphCypherQAChain and 2)GraphQAChain. What are the pros and cons of each, and when to use one over another?
### Your contribution
What is the exact difference between the two graph based chains in langchain 1 )GraphCypherQAChain and 2)GraphQAChain. What are the pros and cons of each, and when to use one over another? | Functional difference between GraphCypherQAChain and GraphQAChain | https://api.github.com/repos/langchain-ai/langchain/issues/9035/comments | 5 | 2023-08-10T09:50:43Z | 2023-11-16T16:05:29Z | https://github.com/langchain-ai/langchain/issues/9035 | 1,844,832,758 | 9,035 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I've been trying to install langchain using `pip install langchain[all]` in a conda environment that already had the packages needed to run Hugging Face's text-generation-inference module.
While running the make command I'm facing the error below - let me know how to handle this issue.
```python
Traceback (most recent call last):
File "/home/sri/miniconda3/envs/text-generation-inference/bin/make", line 5, in <module>
from scripts.proto import main
ModuleNotFoundError: No module named 'scripts'
```
### Suggestion:
_No response_ | Issue: make command is not working after installing langchain | https://api.github.com/repos/langchain-ai/langchain/issues/9033/comments | 6 | 2023-08-10T08:39:47Z | 2023-12-01T16:08:23Z | https://github.com/langchain-ai/langchain/issues/9033 | 1,844,699,377 | 9,033 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello guys!
I would like to build a prompt which includes a system message, context data from a vector store, and the final question. What is the best way to do that?
Right now, I'm doing this:
```
def get_prompt(system_prompt):
    prompt = SystemMessage(content=system_prompt)
    new_prompt = (
        prompt
        + "--- \n\n" + "le contexte:" + "\n"
        + "{context}"
        + '\n\n --- \n\n ' + "la question: \n\n"
        + "{question}"
    )
    return PromptTemplate(
        template=new_prompt, input_variables=["context", "question"]
    )
```
But I get this error: **KeyError: 'template'**
Is there a way to do what I want? I'm searching for a solution where I don't need to add the context myself, because it should already be managed by the vector store retriever
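A hedged sketch of the usual pattern: the prompt composed with `+` is already a chat prompt, so wrapping it in `PromptTemplate` is (my best guess) what trips the KeyError; building the chat prompt explicitly keeps `{context}` and `{question}` as input variables for the retrieval chain to fill in:
```python
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

def get_prompt(system_prompt: str) -> ChatPromptTemplate:
    # Return the chat prompt directly instead of re-wrapping it in PromptTemplate.
    return ChatPromptTemplate.from_messages([
        SystemMessagePromptTemplate.from_template(system_prompt),
        HumanMessagePromptTemplate.from_template(
            "--- \n\nle contexte:\n{context}\n\n --- \n\n la question: \n\n{question}"
        ),
    ])
```
Such a prompt can then be passed to e.g. `RetrievalQA.from_chain_type(..., chain_type_kwargs={"prompt": get_prompt(...)})`, so the retriever supplies `{context}` for you.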
### Suggestion:
_No response_ | Issue: How to build a prompt that include SystemMessage, vector store context and final question | https://api.github.com/repos/langchain-ai/langchain/issues/9031/comments | 4 | 2023-08-10T08:31:32Z | 2023-11-16T16:05:50Z | https://github.com/langchain-ai/langchain/issues/9031 | 1,844,686,751 | 9,031 |
[
"hwchase17",
"langchain"
]
| ## Issue with current `dark theme`:
Recently the documentation has been much improved, both aesthetically and functionally. But when we switch to dark mode, the search bar **on the top right** and the **mendable chat screen** remain in the light theme.

I think, making them match the dark theme, will even out an overall experience ✌🏻 | The "search bar" and "mendable chat" should also follow the dark theme. | https://api.github.com/repos/langchain-ai/langchain/issues/9028/comments | 4 | 2023-08-10T06:38:18Z | 2023-12-05T04:41:49Z | https://github.com/langchain-ai/langchain/issues/9028 | 1,844,513,797 | 9,028 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain 0.0.260, Mac M2, Miniforge3, Python 3.9
### Who can help?
@ves
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.vectorstores.elastic_vector_search import ElasticKnnSearch
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain import ElasticVectorSearch
from elasticsearch import Elasticsearch
model_name = "sentence-transformers/all-mpnet-base-v2"
embedding = HuggingFaceEmbeddings(model_name=model_name)
es_url = "http://localhost:9221"
es = Elasticsearch(es_url)
# prepare db
texts = ["This is a test document", "This is another test document"]
metadatas = [{}, {}]
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.create_documents(texts, metadatas=metadatas)
ElasticVectorSearch.from_documents(documents, embedding, index_name="bug_demo", elasticsearch_url=es_url)
# chat with db
memory = ConversationBufferMemory(memory_key="chat_history", output_key="answer", return_messages=True)
db = ElasticKnnSearch(embedding=embedding, es_connection=es, index_name="bug_demo")
llm_model="gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_model, temperature=0)
chain = ConversationalRetrievalChain.from_llm(
llm,
db.as_retriever(),
memory=memory,
return_source_documents=True
)
user_input = "What is love?"
output = chain({"question": user_input})
```
### Expected behavior
The previous code used to work correctly with langchain 0.0.245.
After merge https://github.com/langchain-ai/langchain/pull/8180, it produces this error:
```
>>> output = chain({"question": user_input})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/chains/base.py", line 258, in __call__
raise e
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/chains/base.py", line 252, in __call__
self._call(inputs, run_manager=run_manager)
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/chains/conversational_retrieval/base.py", line 135, in _call
docs = self._get_docs(new_question, inputs, run_manager=_run_manager)
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/chains/conversational_retrieval/base.py", line 287, in _get_docs
docs = self.retriever.get_relevant_documents(
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/schema/retriever.py", line 193, in get_relevant_documents
raise e
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/schema/retriever.py", line 186, in get_relevant_documents
result = self._get_relevant_documents(
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/vectorstores/base.py", line 504, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/vectorstores/elastic_vector_search.py", line 472, in similarity_search
results = self.knn_search(query=query, k=k, **kwargs)
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/vectorstores/elastic_vector_search.py", line 520, in knn_search
knn_query_body = self._default_knn_query(
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/vectorstores/elastic_vector_search.py", line 460, in _default_knn_query
raise ValueError(
ValueError: Either `query_vector` or `model_id` must be provided, but not both.
```
My guess is that, after the refactoring, `knn_search` no longer takes the embedding into account; previously it would use it to create the `query_vector`. @jeffvestal any clue?
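If that's the case, one way to test the hypothesis might be to compute the query vector manually and pass it through explicitly (a sketch only — it assumes `knn_search` still accepts a `query_vector` argument, and it does not fix the retriever path used by `ConversationalRetrievalChain`):

```python
query_vector = embedding.embed_query(user_input)
docs = db.knn_search(query=user_input, k=4, query_vector=query_vector)
```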
BTW, I have also tried preparing the db using the new ElasticKnnSearch#from_texts method.
```python
from langchain.vectorstores.elastic_vector_search import ElasticKnnSearch
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain import ElasticVectorSearch
from elasticsearch import Elasticsearch
model_name = "sentence-transformers/all-mpnet-base-v2"
embedding = HuggingFaceEmbeddings(model_name=model_name)
es_url = "http://localhost:9221"
es = Elasticsearch(es_url)
index_name = "test_bug3"
# prepare db
texts = ["This is a test document", "This is another test document"]
metadatas = [{}, {}]
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.create_documents(texts, metadatas=metadatas)
knnvectorsearch = ElasticKnnSearch.from_texts(
texts=texts,
embedding=embedding,
index_name= index_name,
vector_query_field='vector',
query_field='text',
dims=768,
es_connection=es
)
# Test `add_texts` method
texts2 = ["Hello, world!", "Machine learning is fun.", "I love Python."]
knnvectorsearch.add_texts(texts2)
# chat with db
memory = ConversationBufferMemory(memory_key="chat_history", output_key="answer", return_messages=True)
db = ElasticKnnSearch(embedding=embedding, es_connection=es, index_name=index_name)
llm_model="gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_model, temperature=0)
chain = ConversationalRetrievalChain.from_llm(
llm,
db.as_retriever(),
memory=memory,
return_source_documents=True
)
user_input = "Who is Janine?"
output = chain({"question": user_input})
```
But I get the same error. | ElasticKnnSearch: ValueError: Either `query_vector` or `model_id` must be provided, but not both. | https://api.github.com/repos/langchain-ai/langchain/issues/9022/comments | 4 | 2023-08-10T04:31:24Z | 2023-09-19T15:36:38Z | https://github.com/langchain-ai/langchain/issues/9022 | 1,844,395,059 | 9,022 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python type checking complains about a parameter `client` with a simple setup like this:
<img width="170" alt="Screenshot 2023-08-10 at 12 20 53 PM" src="https://github.com/langchain-ai/langchain/assets/1828968/355100b6-11dc-4f73-beec-8059aba004f9">
<img width="534" alt="Screenshot 2023-08-10 at 12 03 14 PM" src="https://github.com/langchain-ai/langchain/assets/1828968/9eb22e4e-d78e-4b5b-ae98-6e649bffc14f">
According to documentation, no fields are required to initialize the model:
https://python.langchain.com/docs/modules/model_io/models/chat/
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Version: langchain = "^0.0.250".
Python 3.11.4
Install Pylance. Use VSCode to run typecheck.
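A minimal snippet of the kind shown in the screenshots (no arguments passed, as in the linked docs):

```python
from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI()  # Pylance complains about the undocumented `client` parameter here
```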
### Expected behavior
It should pass type checking based on documentation. | Typing issue: Undocumented parameter `client` | https://api.github.com/repos/langchain-ai/langchain/issues/9021/comments | 2 | 2023-08-10T04:24:33Z | 2023-11-16T16:05:23Z | https://github.com/langchain-ai/langchain/issues/9021 | 1,844,389,967 | 9,021 |
[
"hwchase17",
"langchain"
]
| ### System Info
windows 10
python 3.10.12
### Who can help?
@eyurtsev There is a bug in the `__add` function in `langchain\vectorstores\faiss.py`: when the `ids` parameter is provided and contains duplicates, updating `index_to_docstore_id` raises an error and the program terminates. However, the text embeddings have already been added to the index. This leaves the ids in the index out of alignment with the ids in `index_to_docstore_id`, so the texts can no longer be matched correctly.
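A minimal sketch of the failure mode described above (the embedding class and the duplicate ids are purely illustrative):

```python
from langchain.embeddings import FakeEmbeddings
from langchain.vectorstores import FAISS

emb = FakeEmbeddings(size=16)
db = FAISS.from_texts(["first doc"], emb, ids=["doc-1"])

# __add adds the new vectors to the FAISS index first; the bookkeeping step
# then fails on the duplicate id, leaving the index and index_to_docstore_id
# out of sync
db.add_texts(["second doc"], ids=["doc-1"])
```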
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [x] Document Loaders
- [x] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
None
### Expected behavior
None | Duplicate ids passed into langchain\vectorstores\fais.__add will result in a mismatch between faiss and document's index | https://api.github.com/repos/langchain-ai/langchain/issues/9019/comments | 4 | 2023-08-10T03:52:27Z | 2023-11-17T16:05:59Z | https://github.com/langchain-ai/langchain/issues/9019 | 1,844,365,609 | 9,019 |
[
"hwchase17",
"langchain"
]
| ### System Info
# System Info:
Langchain==0.0.260
Python==3.10.10
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
When using the following script from the official documentation:
```python
from langchain.chat_models import ChatOpenAI
from langchain.retrievers.multi_query import MultiQueryRetriever
question = "What are the approaches to Task Decomposition?"
llm = ChatOpenAI(temperature=0)
retriever_from_llm = MultiQueryRetriever.from_llm(
retriever=vectordb.as_retriever(), llm=llm
)
```
an ImportError is raised:
```bash
File /opt/conda/envs/gpt-env/lib/python3.10/site-packages/langchain/retrievers/multi_query.py:6
2 from typing import List
4 from pydantic import BaseModel, Field
----> 6 from langchain.callbacks.manager import CallbackManagerForRetrieverRun
7 from langchain.chains.llm import LLMChain
8 from langchain.llms.base import BaseLLM
ImportError: cannot import name 'CallbackManagerForRetrieverRun' from 'langchain.callbacks.manager' (/opt/conda/envs/gpt-env/lib/python3.10/site-packages/langchain/callbacks/manager.py)
```
### Expected behavior
There should be no issue when importing MultiQueryRetriever and using the script from the official documentation. | ImportError when importing MultiQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/9018/comments | 1 | 2023-08-10T03:04:59Z | 2023-08-11T06:13:41Z | https://github.com/langchain-ai/langchain/issues/9018 | 1,844,332,543 | 9,018 |
[
"hwchase17",
"langchain"
]
| ### Feature request
It would be good if create_pandas_dataframe_agent could return a table with a scroll bar, instead of us adding HTML code on top of the generated code.
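For reference, the manual workaround looks roughly like this today (a sketch — `df` stands for whatever dataframe the agent's code produces):

```python
# wrap the dataframe's HTML in a fixed-height, scrollable container
html_table = df.to_html()
scrollable = f'<div style="max-height: 300px; overflow-y: auto;">{html_table}</div>'
```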
### Motivation
Being able to capture table data is one key element of visualizations; currently, create_pandas_dataframe_agent doesn't generate a table visualization with a scroll bar.
### Your contribution
I can try, but might need help | For the Visual currently it doesn't display or show tabular format with scroll bar | https://api.github.com/repos/langchain-ai/langchain/issues/9017/comments | 4 | 2023-08-10T01:55:13Z | 2023-11-17T16:06:07Z | https://github.com/langchain-ai/langchain/issues/9017 | 1,844,278,830 | 9,017 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version 0.0.260, Python
### Who can help?
@agola11
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Both chunks of code gave me the error.
Chunk 1
```python
from langchain.chains import RetrievalQAWithSourcesChain

user_input = "How do LLM Powered Autonomous Agents work?"
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llm, retriever=web_research_retriever)
result = qa_chain({"question": user_input})
result
```
Chunk 2
```python
import logging

logging.basicConfig()
logging.getLogger("langchain.retrievers.web_research").setLevel(logging.INFO)

user_input = "How do LLM Powered Autonomous Agents work?"
docs = web_research_retriever.get_relevant_documents(user_input)
docs
```
Both gave me this error:

```
/usr/lib/python3.10/asyncio/runners.py in run(main, debug)
     31     """
     32     if events._get_running_loop() is not None:
---> 33         raise RuntimeError(
     34             "asyncio.run() cannot be called from a running event loop")
     35

RuntimeError: asyncio.run() cannot be called from a running event loop
```
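The error seems to come from `asyncio.run()` being called while the notebook's event loop is already running (this is a Colab/Jupyter environment). A common notebook workaround, not specific to this retriever, is:

```python
import nest_asyncio
nest_asyncio.apply()
```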
### Expected behavior
In both cases I just wanted it to produce website text with a source. | WebResearchRetriever - RuntimeError: asyncio.run() cannot be called from a running event loop | https://api.github.com/repos/langchain-ai/langchain/issues/9014/comments | 6 | 2023-08-10T00:04:39Z | 2024-01-17T19:07:42Z | https://github.com/langchain-ai/langchain/issues/9014 | 1,844,201,292 | 9,014 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
DOC: API Reference: Classes are not sorted in the root page. It is really hard to search for unsorted items.
For example:
```
...
agents.agent_iterator.AgentExecutorIterator(...)
agents.agent.ExceptionTool
agents.agent.LLMSingleActionAgent
agents.tools.InvalidTool
agents.schema.AgentScratchPadChatPromptTemplate
agents.agent_types.AgentType(value[, names, ...])
agents.xml.base.XMLAgent
agents.xml.base.XMLAgentOutputParser
...
```
### Idea or request for content:
_No response_ | DOC: API Reference: Classes are not sorted | https://api.github.com/repos/langchain-ai/langchain/issues/9012/comments | 7 | 2023-08-09T23:14:10Z | 2023-11-14T19:03:46Z | https://github.com/langchain-ai/langchain/issues/9012 | 1,844,166,468 | 9,012 |
[
"hwchase17",
"langchain"
]
| ### System Info
After I upgraded my Langchain to version 0.0.260, the ConversationalChatAgent returns the action plan as the chat response.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce: request something that will make a ConversationalChatAgent call a tool.
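A minimal setup of the kind described (the tool and model are illustrative, not taken from my actual code):

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

def word_count(text: str) -> str:
    return str(len(text.split()))

tools = [Tool(name="WordCount", func=word_count, description="Counts the words in a text")]
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=True,
)
# expected: the JSON "action"/"action_input" blob is routed to the tool,
# not returned verbatim as the assistant's reply
agent.run("How many words are in this sentence?")
```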
### Expected behavior
The agent should pass the action and action_input to the tool and not return it as a final response to the user. | Conversational agent returning action plan as response after 0.0.260 release | https://api.github.com/repos/langchain-ai/langchain/issues/9011/comments | 3 | 2023-08-09T22:54:46Z | 2023-11-19T16:05:37Z | https://github.com/langchain-ai/langchain/issues/9011 | 1,844,150,816 | 9,011 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello - I receive a UnicodeDecodeError when running the below code:
```python
from dotenv import load_dotenv
load_dotenv()

from langchain.llms import OpenAI
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader

llm = OpenAI(temperature=0.1)

loader = TextLoader("./Training/test2.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever(search_kwargs={"k": 1}))

query = "How much did the judge fine Twitter?"
qa.run(query)
```
The test2.txt is a 1,600-word UTF-8 encoded file. Here is the error text that I receive:
```
Traceback (most recent call last):
  File "C:\Users\Admin\PycharmProjects\pythonProject\Misc\Testing.py", line 17, in <module>
    documents = loader.load()
    ^^^^^^^^^^^^^
  File "C:\Users\Admin\PycharmProjects\pythonProject\Misc\venv\Lib\site-packages\langchain\document_loaders\text.py", line 18, in load
    text = f.read()
    ^^^^^^^^
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python311\Lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1079: character maps to <undefined>
```
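From the traceback it looks like the file is being read with the Windows default cp1252 codec rather than UTF-8. Would passing the encoding explicitly (or letting the loader detect it) be the right fix here? An untested sketch:

```python
loader = TextLoader("./Training/test2.txt", encoding="utf-8")
# or: loader = TextLoader("./Training/test2.txt", autodetect_encoding=True)
```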
Any advice would be greatly appreciated.
### Suggestion:
_No response_ | Issue: UnicodeDecodeError when running .txt files using TextLoader | https://api.github.com/repos/langchain-ai/langchain/issues/9005/comments | 4 | 2023-08-09T20:48:00Z | 2024-02-04T06:16:34Z | https://github.com/langchain-ai/langchain/issues/9005 | 1,843,990,236 | 9,005 |
[
"hwchase17",
"langchain"
]
| ### System Info
python==3.11
langchain==0.0.246
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# 1.
db_chain = SQLDatabaseChain.from_llm(chat_llm, db, use_query_checker=True, verbose=True)  # , query_prompt=PROMPT)

# 2.
chat_llm = AzureChatOpenAI(deployment_name="gpt-35-turbo-16k",
                           model_name="gpt-35-turbo-16k")

# 3.
tools = [Tool(
    func=db_chain.run,
    name="Database Search",
    description="useful for when you need to lookup specific queries on the ATLASIQ Schema"
)]

# 4.
prefix = """ You are an expert Snowflake SQL data analyst, who writes queries with perfect syntax,
and performs necessary computations on that data in the AtlasIQ Schema. Your goal is to answer the following questions as best you can.
When there are multiple results for the same quantity, return all of them. DO NOT hallucinate an answer if there is no result."""
suffix = """Begin!"
{chat_history}
Question: {input}
{agent_scratchpad}"""
prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "chat_history", "agent_scratchpad"],
)
memory = ConversationBufferMemory(memory_key="chat_history")
llm_chain = LLMChain(llm=chat_llm, prompt=prompt)  # , callbacks=[custom_handler.short_chain(
new_zero_agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=new_zero_agent, tools=tools, verbose=True, memory=memory, handle_parsing_errors=True)
agent_chain.run(query)
```
### Expected behavior
I expect the query to be synthesized and executed, but instead I get an unnecessary explanation of why the query is correct. The model output looks like this:

````
The original query does not contain any of the mentioned mistakes. Therefore, here is the reproduced original query:

SELECT SUM(total_operating_expenses)
FROM expense_income_table
WHERE market = 'Seattle'
AND period = '202305'

The original query provided does not contain any of the mentioned mistakes. Hence, the original query is reproduced below:

```sql
SELECT SUM(total_operating_expenses)
FROM expense_income_table
WHERE market = 'Seattle
````

The extra explanation sentences and the truncated ```sql block at the end are what cause the output parsing error. | Chat LLM adds an extra sentence in front of SQL queries, produces Output Parsing Error | https://api.github.com/repos/langchain-ai/langchain/issues/9001/comments | 4 | 2023-08-09T20:26:30Z | 2024-02-11T16:17:16Z | https://github.com/langchain-ai/langchain/issues/9001 | 1,843,961,304 | 9,001
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I am using the default configuration for RecursiveCharacterTextSplitter:

```python
text_splitter = RecursiveCharacterTextSplitter()
chunks = text_splitter.create_documents([full_text])
```
However, I would like to know what values the defaults use. I would like to pass values explicitly, because for certain documents it does not seem to be splitting, and downstream I get this error from OpenAI's embeddings endpoint: `This model's maximum context length is 8191 tokens, however you requested 11451 tokens (11451 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.`
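For example, I would like to be able to pass something like this explicitly (the numbers here are just placeholders, not the library defaults I'm asking about):

```python
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = text_splitter.create_documents([full_text])
```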
### Idea or request for content:
I would like to know what values are being used as default for RecursiveCharacterTextSplitter | DOC: What are the default values to RecursiveCharacterTextSplitter | https://api.github.com/repos/langchain-ai/langchain/issues/8999/comments | 0 | 2023-08-09T20:03:15Z | 2023-08-09T20:06:51Z | https://github.com/langchain-ai/langchain/issues/8999 | 1,843,925,953 | 8,999 |
[
"hwchase17",
"langchain"
]
| ### System Info
When importing the `AirbyteStripeLoader` introduced in v0.0.259, it throws a `ModuleNotFoundError`:
```
from libs.langchain.langchain.utils.utils import guard_import
ModuleNotFoundError: No module named 'libs'
```
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.document_loaders.airbyte import AirbyteStripeLoader
config = {
# your stripe configuration
}
loader = AirbyteStripeLoader(config=config, stream_name="invoices") # check the documentation linked above for a list of all streams
```
### Expected behavior
Import the document should not throw an error. | `AirbyteStripeLoader` throws an error | https://api.github.com/repos/langchain-ai/langchain/issues/8996/comments | 5 | 2023-08-09T19:43:49Z | 2023-08-09T20:19:05Z | https://github.com/langchain-ai/langchain/issues/8996 | 1,843,898,676 | 8,996 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hey y'all. I have this code. I wrote it a month back, and when I run it now, it gives me an error.
**CODE**:
```python
import os
from langchain.llms import OpenAI
from langchain import PromptTemplate
from langchain.chains import LLMChain
from constant import openai_key
import streamlit as st

os.environ['OPENAI_API_KEY'] = openai_key

llm = OpenAI(temperature=0.8)

multiple_inputs = PromptTemplate(
    input_variables=["name", "age", "location"],
    template="My name is {name}, I am {age} years old and live in {location}."
)

chain = LLMChain(llm=llm, prompt=multiple_inputs)

output = chain.run(
    name="John",
    age=30,
    location="New York"
)
print(output)
```
**ERROR**:
```
Traceback (most recent call last):
  File ".\questions.py", line 19, in <module>
    output = chain.run(
TypeError: run() got an unexpected keyword argument 'name'
```
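Passing all the inputs as a single dict (or calling the chain directly) seems like it should sidestep this — is that the intended usage for multi-input chains?

```python
output = chain.run({"name": "John", "age": 30, "location": "New York"})
# or equivalently:
# output = chain({"name": "John", "age": 30, "location": "New York"})
```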
### Suggestion:
_No response_ | run() got an unexpected keyword argument 'name' | https://api.github.com/repos/langchain-ai/langchain/issues/8990/comments | 7 | 2023-08-09T18:31:56Z | 2023-08-13T16:01:55Z | https://github.com/langchain-ai/langchain/issues/8990 | 1,843,798,272 | 8,990 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```from langchain.chains import ConversationRetrievalChain```
This is what I have, and I am using the latest langchain version.
### Suggestion:
_No response_ | Issue: ConversationRetrievalChain" is not accessedPylance | https://api.github.com/repos/langchain-ai/langchain/issues/8987/comments | 2 | 2023-08-09T18:21:55Z | 2023-11-16T16:06:44Z | https://github.com/langchain-ai/langchain/issues/8987 | 1,843,784,776 | 8,987 |
[
"hwchase17",
"langchain"
]
| ### System Info
Ubuntu 22
Langchain version
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Current behaviour:
I'm using the [RAG example](https://github.com/pinecone-io/examples/blob/master/docs/langchain-retrieval-augmentation.ipynb) and feeding my own database of [1 football article](https://www.bbc.co.uk/sport/football/65984561).
The Pinecone DB is a brand new database and only contains vectors from the football article.
When I do qa_with_sources(query="Who is Sachin Tendulkar") it provides me an answer and a link as a reference. This is not the expected behavior.
I have not fed any article about Sachin Tendulkar to the database. How, and from where, is it getting the answer and the link?
Now, if I add more articles (only about football), push the vector count in the database to around 90, and then ask the same question, query="Who is Sachin Tendulkar", it is not able to give the answer, which is the expected behavior.
I wonder if the fullness of the vector db makes it more accurate? Has anyone else seen this?
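One way to see what the retriever is actually handing the LLM in the near-empty case might be to look at the raw matches and their scores (a diagnostic sketch, assuming the Pinecone vectorstore object from the example is called `vectorstore`):

```python
# with only one article indexed, even an off-topic query still returns the
# k nearest neighbours, and the LLM may then answer from its own knowledge
for doc, score in vectorstore.similarity_search_with_score("Who is Sachin Tendulkar", k=3):
    print(round(score, 3), doc.page_content[:80])
```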
Repro:
Create a new Vector DB on pinecone. Use [this example](https://github.com/pinecone-io/examples/blob/master/docs/langchain-retrieval-augmentation.ipynb) to feed in a [football](https://www.bbc.com/sport/football/65984561) article.
Run query="Who is Sachin Tendulkar". Note the result contains a reference and an answer. (Unexpected)
Now, create a fuller db with more articles and ask the same query. Note that the result is empty, as expected.
### Expected behavior
Since the database does not contain any article or mention of Sachin Tendulkar, it should not provide any answer, and instead say "This is not mentioned in the database".
| Out of dataset answer and reference link provided for RAG example | https://api.github.com/repos/langchain-ai/langchain/issues/8986/comments | 3 | 2023-08-09T18:20:23Z | 2023-11-16T16:06:02Z | https://github.com/langchain-ai/langchain/issues/8986 | 1,843,782,612 | 8,986 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.257
Python version: 3.11
Opensearch-py version: 2.3.0
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have an `OpenSearch` vector DB and I maintain multiple indices that start with "index-" (for example, index-pdf, index-html). I have indexed sets of documents into each of the indices using Langchain's `OpenSearchVectorSearch.from_documents()` function.
Now, I want to run some queries across multiple indices. An example would be "What is the title of each document?". When I execute the code below, it either just outputs an answer from the first or last matching index, or says it cannot find the answer. Here is my current code:
```
from langchain.vectorstores import OpenSearchVectorSearch
from langchain.chains import RetrievalQA, ConversationalRetrievalChain
import os
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
embeddings = OpenAIEmbeddings()
def get_llm(model):
llm = ChatOpenAI(model_name=model.lower(), verbose=False, temperature=0)
return llm
docsearch = OpenSearchVectorSearch(opensearch_url="http://localhost:9200",
index_name="index-*",
embedding_function=embeddings)
chain = ConversationalRetrievalChain.from_llm(
llm=get_llm("gpt-3.5-turbo"),
retriever=docsearch.as_retriever(),
)
result = chain({'question': 'What is the title of each document?', "chat_history": []})
response = result['answer']
print(response)
```
The response I get is usually of the form "The document provided does not list different titles..."
### Expected behavior
The response should span multiple indices. | Issue in running ConversationalRetrievalChain query across multiple Opensearch indices with wildcard specification | https://api.github.com/repos/langchain-ai/langchain/issues/8985/comments | 5 | 2023-08-09T18:08:27Z | 2023-11-29T16:08:15Z | https://github.com/langchain-ai/langchain/issues/8985 | 1,843,767,365 | 8,985
[
"hwchase17",
"langchain"
]
| ### Feature request
Many packages use pydantic versions that are much more recent than the one in langchain.
### Motivation
It is very difficult to use langchain with other packages that use recent versions of pydantic
### Your contribution
Just wanting to signal this as a needed feature | pydantic upgrade | https://api.github.com/repos/langchain-ai/langchain/issues/8984/comments | 6 | 2023-08-09T18:01:33Z | 2023-11-24T16:07:25Z | https://github.com/langchain-ai/langchain/issues/8984 | 1,843,758,645 | 8,984 |
[
"hwchase17",
"langchain"
]
| ### System Info
Mac OS 14.0
M1 Max 64GB ram
VSCode 1.80.2
Jupyter Notebook
Python 3.11.4
Llama-cpp-python using `!CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install --upgrade llama-cpp-python`
### Who can help?
@hwchase17 @agol
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.llms import LlamaCpp
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
llm = LlamaCpp(
model_path="./llama2_70b_chat_uncensored.ggmlv3.q5_K_S.bin",
n_gpu_layers=n_gpu_layers,
n_gqa=8,
n_batch=n_batch,
n_ctx=2048,
f16_kv=True,
callback_manager=callback_manager,
verbose=True,
)
prompt = """
Question: A rap battle between Stephen Colbert and John Oliver
"""
llm(prompt)
```
Error log:
```
10:30:36.498 [error] Disposing session as kernel process died ExitCode: undefined, Reason: 0.00s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
llama.cpp: loading model from ./llama2_70b_chat_uncensored.ggmlv3.q5_K_S.bin
llama_model_load_internal: warning: assuming 70B model based on GQA == 8
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 8192
llama_model_load_internal: n_mult = 7168
llama_model_load_internal: n_head = 64
llama_model_load_internal: n_head_kv = 8
llama_model_load_internal: n_layer = 80
llama_model_load_internal: n_rot = 128
llama_model_load_internal: n_gqa = 8
llama_model_load_internal: rnorm_eps = 1.0e-06
llama_model_load_internal: n_ff = 28672
llama_model_load_internal: freq_base = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype = 16 (mostly Q5_K - Small)
llama_model_load_internal: model size = 70B
llama_model_load_internal: ggml ctx size = 0.21 MB
llama_model_load_internal: mem required = 46046.21 MB (+ 640.00 MB per state)
llama_new_context_with_model: kv self size = 640.00 MB
ggml_metal_init: allocating
ggml_metal_init: using MPS
ggml_metal_init: loading '~/anaconda3/envs/llama-cpp-venv/lib/python3.11/site-packages/llama_cpp/ggml-metal.metal'
ggml_metal_init: loaded kernel_add 0x126d41720
ggml_metal_init: loaded kernel_add_row 0x126d41130
ggml_metal_init: loaded kernel_mul 0x126d4e410
ggml_metal_init: loaded kernel_mul_row 0x126d4e710
ggml_metal_init: loaded kernel_scale 0x126d55390
ggml_metal_init: loaded kernel_silu 0x126d538c0
ggml_metal_init: loaded kernel_relu 0x126d56130
ggml_metal_init: loaded kernel_gelu 0x126d568e0
ggml_metal_init: loaded kernel_soft_max 0x126d56d00
ggml_metal_init: loaded kernel_diag_mask_inf 0x126d56f70
ggml_metal_init: loaded kernel_get_rows_f16 0x126d57af0
ggml_metal_init: loaded kernel_get_rows_q4_0 0x126d581f0
ggml_metal_init: loaded kernel_get_rows_q4_1 0x126d58780
ggml_metal_init: loaded kernel_get_rows_q2_K 0x111ce66e0
ggml_metal_init: loaded kernel_get_rows_q3_K 0x126d589f0
ggml_metal_init: loaded kernel_get_rows_q4_K 0x126d58c60
ggml_metal_init: loaded kernel_get_rows_q5_K 0x126d59830
ggml_metal_init: loaded kernel_get_rows_q6_K 0x126d59d80
ggml_metal_init: loaded kernel_rms_norm 0x130328d00
ggml_metal_init: loaded kernel_norm 0x1303284a0
ggml_metal_init: loaded kernel_mul_mat_f16_f32 0x13032a1c0
ggml_metal_init: loaded kernel_mul_mat_q4_0_f32 0x126d59ff0
ggml_metal_init: loaded kernel_mul_mat_q4_1_f32 0x126d5a260
ggml_metal_init: loaded kernel_mul_mat_q2_K_f32 0x126d5b0c0
ggml_metal_init: loaded kernel_mul_mat_q3_K_f32 0x126d5b640
ggml_metal_init: loaded kernel_mul_mat_q4_K_f32 0x126d5bc50
ggml_metal_init: loaded kernel_mul_mat_q5_K_f32 0x126d5c240
ggml_metal_init: loaded kernel_mul_mat_q6_K_f32 0x126d5c960
ggml_metal_init: loaded kernel_rope 0x13032af10
ggml_metal_init: loaded kernel_alibi_f32 0x13032b180
ggml_metal_init: loaded kernel_cpy_f32_f16 0x13032b3f0
ggml_metal_init: loaded kernel_cpy_f32_f32 0x13032b660
ggml_metal_init: loaded kernel_cpy_f16_f16 0x13032d380
ggml_metal_init: recommendedMaxWorkingSetSize = 49152.00 MB
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: maxTransferRate = built-in GPU
llama_new_context_with_model: max tensor size = 205.08 MB
ggml_metal_add_buffer: allocated 'data ' buffer, size = 36864.00 MB, offs = 0
ggml_metal_add_buffer: allocated 'data ' buffer, size = 8603.55 MB, offs = 38439649280, (45468.00 / 49152.00)
ggml_metal_add_buffer: allocated 'eval ' buffer, size = 24.00 MB, (45492.00 / 49152.00)
ggml_metal_add_buffer: allocated 'kv ' buffer, size = 642.00 MB, (46134.00 / 49152.00)
ggml_metal_add_buffer: allocated 'scr0 ' buffer, size = 456.00 MB, (46590.00 / 49152.00)
ggml_metal_add_buffer: allocated 'scr1 ' buffer, size = 304.00 MB, (46894.00 / 49152.00)
10:30:36.498 [info] Dispose Kernel process 40138.
10:30:36.498 [error] Raw kernel process exited code: undefined
10:30:36.499 [error] Error in waiting for cell to complete [Error: Canceled future for execute_request message before replies were done
at t.KernelShellFutureHandler.dispose (~/.vscode/extensions/ms-toolsai.jupyter-2023.6.1101941928-darwin-arm64/out/extension.node.js:2:32375)
at ~/.vscode/extensions/ms-toolsai.jupyter-2023.6.1101941928-darwin-arm64/out/extension.node.js:2:51427
at Map.forEach (<anonymous>)
at v._clearKernelState (~/.vscode/extensions/ms-toolsai.jupyter-2023.6.1101941928-darwin-arm64/out/extension.node.js:2:51412)
at v.dispose (~/.vscode/extensions/ms-toolsai.jupyter-2023.6.1101941928-darwin-arm64/out/extension.node.js:2:44894)
at ~/.vscode/extensions/ms-toolsai.jupyter-2023.6.1101941928-darwin-arm64/out/extension.node.js:24:113024
at re (~/.vscode/extensions/ms-toolsai.jupyter-2023.6.1101941928-darwin-arm64/out/extension.node.js:2:1587343)
at Cv.dispose (~/.vscode/extensions/ms-toolsai.jupyter-2023.6.1101941928-darwin-arm64/out/extension.node.js:24:113000)
at Ev.dispose (~/.vscode/extensions/ms-toolsai.jupyter-2023.6.1101941928-darwin-arm64/out/extension.node.js:24:120283)
at process.processTicksAndRejections (node:internal/process/task_queues:96:5)]
10:30:36.499 [warn] Cell completed with errors {
message: 'Canceled future for execute_request message before replies were done'
}
10:30:36.499 [info] End cell 21 execution @ 1691602236499, started @ 1691602225152, elapsed time = 11.347s
```
Tested with the llama.cpp sample directly: the 70B model works without langchain. The problem only occurs when using langchain to prompt llama.cpp with the 70B model.
### Expected behavior
Kernel should not crash. | Kernel crash when using llama2 70b on langchain with llama.cpp | https://api.github.com/repos/langchain-ai/langchain/issues/8983/comments | 3 | 2023-08-09T17:50:43Z | 2023-11-16T16:06:21Z | https://github.com/langchain-ai/langchain/issues/8983 | 1,843,744,154 | 8,983 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
How do you force a langchain agent to use a tool and not use any information outside the tool? Currently, if the question is not related to the tool, the agent uses its own knowledge to generate an answer, which I don't want.
Use Case: I have an index which has a lot of documents. I want to make sure the agent will use the index ALWAYS to get information. If the question is not in the index it should return "I don't know" or similar.
P.S. I am using the OPENAI_FUNCTIONS agent.
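The closest thing I have found so far is constraining it through the system message (an untested sketch, with my own `tools` and `llm`; the instruction wording is mine):

```python
from langchain.agents import AgentType, initialize_agent
from langchain.schema import SystemMessage

system_message = SystemMessage(
    content="Answer only with information returned by the provided tools. "
            "If the tools return nothing relevant, reply exactly: I don't know."
)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    agent_kwargs={"system_message": system_message},
    verbose=True,
)
```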
### Suggestion:
_No response_ | Force LangChain Agent to Use a Tool | https://api.github.com/repos/langchain-ai/langchain/issues/8979/comments | 4 | 2023-08-09T16:48:11Z | 2024-05-08T19:44:43Z | https://github.com/langchain-ai/langchain/issues/8979 | 1,843,655,453 | 8,979 |
[
"hwchase17",
"langchain"
]
| ### System Info
colab
```
!pip install -q langchain tiktoken openai chromadb
```
### Who can help?
@eyurtsev @aga
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
%%writefile adjective_joke_prompt.json
{
# what type of prompt. Currently supports "prompt" and "few_shot"
"_type": "prompt",
# the input variables used in the template
"input_variables": ["adjective", "content"],
# the template text of the prompt, including variable placeholders
"template": "Tell me a {{ adjective }} joke about {{ content }}",
# alternatively the template text can be loaded from a file
"template_path": "adjective_joke_prompt_template.txt"
# NOTE: both "template" and "template_path" cannot be used at the same time!
# the format of the template
"template_format": "jinja2",
# currently only the "RegexParser" is supported for "output_parser"
# this is example of a date parser
"output_parser": {
"_type": "regex_parser",
"regex": "(\\d{4})-(\\d{2})-(\\d{2})",
"output_keys": ["year", "month", "day"]
}
}
```
```
# load the prompt using a file
prompt_template = load_prompt("adjective_joke_prompt.json")
# create a prompt using the variables
prompt_template.format(adjective="funny", content="chickens")
```
gives error:
```
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
[<ipython-input-41-7755c281a03b>](https://localhost:8080/#) in <cell line: 2>()
1 # load the prompt using a file
----> 2 prompt_template = load_prompt("adjective_joke_prompt.json")
3
4 # create a prompt using the variables
5 prompt_template.format(adjective="funny", content="chickens")
5 frames
[/usr/lib/python3.10/json/decoder.py](https://localhost:8080/#) in raw_decode(self, s, idx)
351 """
352 try:
--> 353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
355 raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Expecting property name enclosed in double quotes: line 3 column 2 (char 4)
```
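For what it's worth, the `JSONDecodeError` points at the `#` comment lines — JSON itself doesn't allow comments — and the file also sets both `template` and `template_path`, which the inline note says is not allowed. A stripped-down version of the file that should at least parse might be:

```json
{
  "_type": "prompt",
  "input_variables": ["adjective", "content"],
  "template": "Tell me a {{ adjective }} joke about {{ content }}",
  "template_format": "jinja2"
}
```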
### Expected behavior
gives response based on template | issues loading the prompt with load_prompt | https://api.github.com/repos/langchain-ai/langchain/issues/8978/comments | 1 | 2023-08-09T16:45:57Z | 2023-08-09T16:47:49Z | https://github.com/langchain-ai/langchain/issues/8978 | 1,843,652,268 | 8,978 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am trying to have a conversation with my documents like this
```typescript
const index = client.Index(indexName); //pinecode index
const queryEmbedding = await new OpenAIEmbeddings().embedQuery(question);
const queryResponse = await index.query({
queryRequest: { topK: 10, vector: queryEmbedding, includeMetadata: true },
});
const llm = new OpenAI();
const chatHistory = new ChatMessageHistory();
const memory = new BufferMemory({ chatHistory, inputKey: "my_chat_history", memoryKey: "chat_history" });
const chain = new ConversationChain({ llm, memory, verbose: true });
const concatenatedPageContent = queryResponse.matches.map(match => (<any>match?.metadata)?.pageContent).join(" ");
const result = await chain.call({
input_documents: [new Document({ pageContent: concatenatedPageContent })],
input: question,
});
```
But getting
```
Error: Missing value for input history
```
It seems to be impossible to marry embedded docs with chatting | Issue: Can't pass embedded documents with chat | https://api.github.com/repos/langchain-ai/langchain/issues/8975/comments | 0 | 2023-08-09T16:38:34Z | 2023-08-09T17:38:23Z | https://github.com/langchain-ai/langchain/issues/8975 | 1,843,641,608 | 8,975 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Can we use Human as a tool in production?
```python
def get_input() -> str:
    print("Insert your text. Enter 'q' or press Ctrl-D (or Ctrl-Z on Windows) to end.")
    contents = []
    while True:
        try:
            line = input()
        except EOFError:
            break
        if line == "q":
            break
        contents.append(line)
    return "\n".join(contents)

# You can modify the tool when loading
tools = load_tools(["human", "ddg-search"], llm=math_llm, input_func=get_input)
```
What should we do with the `input()` call in the `get_input()` function if we need to use this tool in production?
reference:-https://python.langchain.com/docs/integrations/tools/human_tools
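In a server setting, I guess the blocking `input()` would be replaced by a custom `input_func` that pulls the human's reply from wherever the app collects it (a request payload, a queue, a websocket, ...). A rough sketch — the queue is purely illustrative:

```python
import queue

answers: "queue.Queue[str]" = queue.Queue()  # filled by the web/chat frontend

def get_input_from_frontend() -> str:
    # block (with a timeout) until the frontend supplies the human's reply
    return answers.get(timeout=300)

tools = load_tools(["human", "ddg-search"], llm=math_llm, input_func=get_input_from_frontend)
```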
### Suggestion:
_No response_ | Human as a Tool in Production | https://api.github.com/repos/langchain-ai/langchain/issues/8973/comments | 2 | 2023-08-09T16:15:19Z | 2023-11-15T16:05:23Z | https://github.com/langchain-ai/langchain/issues/8973 | 1,843,608,827 | 8,973 |
[
"hwchase17",
"langchain"
]
| ### Feature request
### First
The aim of qa_with_sources is to find only the documents used in the answer.
The attribute `return_source_documents` in the `qa_with_sources` chain returns all the documents, not just the documents used to provide the answer.
I think it's not necessary because `retriever.get_relevant_documents(question)` returns the same documents list.
It has no added value, and this is not in the spirit of the chain.
I propose to add an attribute `return_used_documents` or change the semantic of `return_source_documents` to limit the result to the documents used to provide the answer.
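A sketch of the redundancy described above (variable names are illustrative):

```python
chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm, retriever=retriever, return_source_documents=True
)
result = chain({"question": question})

result["source_documents"]                  # every retrieved document, used or not
retriever.get_relevant_documents(question)  # ... which is the same list
```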
### Second
With a long list of documents, each with a big URL (like documents coming from Google Drive or Microsoft Azure), the number of tokens used explodes.
The recursive map-reduce must be activated. At each reduction, **some URLs disappear**, and the space left for documents shrinks.
### Motivation
When we use qa_with_sources, we want to be able to justify the response.
Currently, we can return a correct list of URLs, but not the list of associated documents.
Sometimes, when the original document is split into multiple documents, all parts have the same URL, so it's not possible to find the corresponding documents from a list of URLs.
### Your contribution
I propose a new chain [qa_with_reference](https://github.com/langchain-ai/langchain/pull/7278) without these problems, but for the moment, nothing is moving. | qa_with_source it returns all documents, not just the ones used. | https://api.github.com/repos/langchain-ai/langchain/issues/8972/comments | 1 | 2023-08-09T16:09:42Z | 2023-11-15T16:05:34Z | https://github.com/langchain-ai/langchain/issues/8972 | 1,843,600,490 | 8,972 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version 0.0.259
### Who can help?
@hwchase17 @baskaryan @eyurtsev
## Issue with `GoogleDriveLoader.file_loader_cls` accepting only classes with constructors that receive a `file` parameter.
The current behavior of `GoogleDriveLoader.file_loader_cls` is that it only accepts a class with a constructor that
takes a `file` parameter. For instance, it can accept the class `UnstructuredFileIOLoader`, and
typically other classes like `NotebookLoader` as well. However, when attempting to use
the `NotebookLoader` class with the following code:
```python
from langchain.document_loaders import NotebookLoader
file_id = "1Hrrf3b4cgjwuKEt1wQUgRtipxqyprKaU"
loader = GoogleDriveLoader(
file_ids=[file_id],
file_loader_cls=NotebookLoader,
file_loader_kwargs={"mode": "elements"},
)
loader.load()
```
An exception is thrown: `TypeError: NotebookLoader.__init__() got an unexpected keyword argument 'file'`.
## Issue with `GoogleDriveLoader` and `import PyPDF2`
If the `file_loader_cls` is not set explicitly, the code attempts to execute `import PyPDF2`.
However, the code is only designed to handle PDF files. Additionally, the dependencies
of `PyPDF2` are not declared in the `pyproject.toml` file. Currently, only `pypdf` is
declared as a dependency. To address this, it is recommended to update the code to utilize `pypdf`
instead of `PyPDF2`. Otherwise, an exception will be raised.
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import NotebookLoader

file_id = "1Hrrf3b4cgjwuKEt1wQUgRtipxqyprKaU"  # Link to colab file
loader = GoogleDriveLoader(
    file_ids=[file_id],
    file_loader_cls=NotebookLoader,
    file_loader_kwargs={"mode": "elements"},
)
loader.load()
```
### Expected behavior
The code should be modified so that the snippet above runs without any exceptions.
I propose a solution with [a pull request](https://github.com/langchain-ai/langchain/pull/5135),
with a reimplementation of this class. | Bugs in `GoogleDriveLoader.file_loader_cls` | https://api.github.com/repos/langchain-ai/langchain/issues/8969/comments | 2 | 2023-08-09T14:06:18Z | 2023-09-19T08:31:39Z | https://github.com/langchain-ai/langchain/issues/8969 | 1,843,326,275 | 8,969 |
[
"hwchase17",
"langchain"
]
| ### Feature request
- The Google Drive loader can manage only GDoc and GSheet.
- The code can load other types of files, but only with a single extra loader (`file_loader_cls`).
- It's not possible to ask Google Drive to find files matching search criteria.
- A Google Drive tool does not exist yet.
- It cannot manage Google shortcuts.
- It cannot use the description metadata of a Google file.
- It cannot load only the description as a snippet of the document.
- It's not possible to set advanced parameters like:
  - corpora, driveId, fields, includeItemsFromAllDrives, includeLabels, includePermissionsForView, orderBy, pageSize, spaces, supportsAllDrives
- It's not possible to select whether the returned document URL is for viewing or for downloading.
- It's not possible to return:
  - For GSheet: mode `single` or `elements`
  - For GSlide: mode `single`, `elements` or `slide`
- It's not possible to use a fine-grained filter (to reject some documents during the load).
- It's not possible to lazy-load, to save memory when importing a long list of documents.
- It's not possible to use a standardized environment variable to manage authentication (as other integrations do).
[LINK](https://github.com/langchain-ai/langchain/pull/5135)
### Motivation
All my company's documents are on Google Drive, organized in more or less complex structures and formats
(CVs, customer documents, etc.).
The current implementation of `GoogleDriveLoader` is very limited.
We tag all CVs with #cv. This allows us to quickly search for skills using Google Drive search.
Since the directory structure is complex, this approach requires only a single query
(instead of one query per subdirectory).
We utilize Google Drive searches for various purposes.
### Your contribution
@hwchase17 @baskaryan
For the last 10 weeks, I've been offering pull request
[5135](https://github.com/langchain-ai/langchain/pull/5135), which nobody is taking up.
I've had various commitments, but they've never been kept.
My [proposal](https://github.com/langchain-ai/langchain/pull/5135) implements all these features and maintains compatibility with the current version
(with many deprecation warnings).
| Extend google drive loader | https://api.github.com/repos/langchain-ai/langchain/issues/8968/comments | 1 | 2023-08-09T14:06:01Z | 2023-09-05T14:42:04Z | https://github.com/langchain-ai/langchain/issues/8968 | 1,843,325,640 | 8,968 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
@hwchase17
The code utilizes certain pivot files responsible for the comprehensive integration of classes (__init__.py, load_tools.py, etc.).
Currently, maintaining an up-to-date pull request for these files poses challenges, as they undergo updates with each new version. This necessitates frequent rebasing until the code is approved.
In my opinion, it would be more advantageous to establish a mechanism that prevents alterations to these files (via naming conventions, abstract classes, etc.). This way, incorporating a new feature would involve solely the addition of new files without requiring modifications to existing ones. Such an approach could serve as the foundation for implementing plugins.
What are your thoughts on this idea?
### Suggestion:
_No response_ | Issue: re-implement pivot files to facilitate the integration of new functions | https://api.github.com/repos/langchain-ai/langchain/issues/8967/comments | 1 | 2023-08-09T14:05:53Z | 2023-11-15T16:07:03Z | https://github.com/langchain-ai/langchain/issues/8967 | 1,843,325,349 | 8,967 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain: 0.0.257
Python: 3.11.4
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to create an object for `OpenAIEmbeddings` as described [here](https://python.langchain.com/docs/integrations/text_embedding/azureopenai), but the constructor expects a `client` parameter which is documented nowhere.
<img width="591" alt="image" src="https://github.com/langchain-ai/langchain/assets/33937977/6d78c87b-bdcb-4c84-82d8-2c171739d47d">
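For reference, this is the kind of initialization from the linked page that triggers it (the deployment and endpoint values are placeholders):

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(
    deployment="your-embeddings-deployment-name",
    model="text-embedding-ada-002",
    openai_api_base="https://your-endpoint.openai.azure.com/",
    openai_api_type="azure",
)
```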
### Expected behavior
Initialize the object without passing a client | OpenAIEmbeddings expects client parameter | https://api.github.com/repos/langchain-ai/langchain/issues/8966/comments | 1 | 2023-08-09T13:38:54Z | 2023-11-15T16:05:26Z | https://github.com/langchain-ai/langchain/issues/8966 | 1,843,265,073 | 8,966 |
[
"hwchase17",
"langchain"
]
| In the docs, it seems the ReAct reference points to the wrong paper.
The actual ReAct paper is [this](https://arxiv.org/pdf/2210.03629.pdf)
https://github.com/langchain-ai/langchain/blob/b8df15cd647ca645ef16b2d66be271dc1f5187c1/docs/docs_skeleton/docs/modules/agents/agent_types/index.mdx#L15 | Wrong paper reference | https://api.github.com/repos/langchain-ai/langchain/issues/8964/comments | 2 | 2023-08-09T12:49:03Z | 2023-08-20T05:26:14Z | https://github.com/langchain-ai/langchain/issues/8964 | 1,843,168,836 | 8,964 |
[
"hwchase17",
"langchain"
]
| Now I am using Pinecone and langchain in my project.
I am using RecursiveCharacterTextSplitter when embedding my specific data for the bot,
and I am using ConversationalRetrievalQAChain as the chain.
Here is my code.
```
const text_splitter = new RecursiveCharacterTextSplitter({
chunkSize: 1000,
chunkOverlap: 200,
});
const docs = await text_splitter.splitDocuments(rowDocs);
```
```
const chain = ConversationalRetrievalQAChain.fromLLM(
llm,
vectorStore.asRetriever(),
{
memory: new BufferMemory({
memoryKey: 'chat_history', // Must be set to "chat_history"
inputKey: 'question',
returnMessages: true,
}),
},
);
```
But my embedding quality is not good.
Sometimes my Bot gives strange answers to my questions. 😢
For example
I have already trained ChatBot using one docx(about Abrantes)
I have asked like that
Me: Please tell me about Abrantes
AI: Abrantes is a Project Controls Specialist ....
Me: What Project he managed?
AI: Ed has managed some .....
Who is Ed in this answer? 😣
There is no Ed in this docx either.
So how can I fix it? | How to improve my embedding quality? | https://api.github.com/repos/langchain-ai/langchain/issues/8962/comments | 2 | 2023-08-09T12:31:35Z | 2023-11-24T19:36:17Z | https://github.com/langchain-ai/langchain/issues/8962 | 1,843,138,505 | 8,962 |
[
"hwchase17",
"langchain"
]
| I'm implementing a vectorstore agent on my custom data. Can I implement this with a local LLM like GPT4All (GPT4All-J v1.3-groovy)?
Can agents give better and expected answer when we use agents, or should go with better model like(gpt4, llama2)? | Agents on a local llm with custom data | https://api.github.com/repos/langchain-ai/langchain/issues/8961/comments | 3 | 2023-08-09T11:49:11Z | 2023-11-28T16:08:55Z | https://github.com/langchain-ai/langchain/issues/8961 | 1,843,070,745 | 8,961 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
How can I retrieve the action for an LLM agent?
### Suggestion:
_No response_ | Issue: How can I retrieve the action for a LLM agent? | https://api.github.com/repos/langchain-ai/langchain/issues/8959/comments | 1 | 2023-08-09T10:52:59Z | 2023-11-15T16:05:29Z | https://github.com/langchain-ai/langchain/issues/8959 | 1,842,978,093 | 8,959 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
FAISS is taking around 12 hours to create embeddings and add them to the index for a 100,000-row CSV file. Is there any bulk-load strategy for embedding CSV files?
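One pattern I'm considering is to batch the embedding calls myself and hand FAISS the precomputed vectors (a sketch — `df` and the embedding class are just placeholders):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

texts = df["text"].tolist()

embeddings = OpenAIEmbeddings(chunk_size=1000)  # rows sent per embedding request
vectors = embeddings.embed_documents(texts)     # one batched pass over all rows

db = FAISS.from_embeddings(list(zip(texts, vectors)), embeddings)
```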
### Suggestion:
_No response_ | Issue: FAISS taking long time to add to index for 30MB csv file | https://api.github.com/repos/langchain-ai/langchain/issues/8958/comments | 1 | 2023-08-09T10:39:17Z | 2023-11-15T16:10:55Z | https://github.com/langchain-ai/langchain/issues/8958 | 1,842,957,431 | 8,958 |
[
"hwchase17",
"langchain"
]
| ### System Info
macOS Ventura Version `13.4.1`
Python `3.11.4`
```
langchain==0.0.251
chromadb==0.4.5
```
### Who can help?
@hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a vector database and insert some items with relevant embeddings
2. Update vector database collection with some new docs
Python snippet to create/update the vector db collection and its embeddings:
```
from dateutil import parser

from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

documents = []
for row in get_data_rows():
    documents.append(
        Document(
            page_content=row['full_text'],
            metadata={
                'id': int(row['id']),
                'created_at': parser.parse(row['created_at']).timestamp(),
            }
        )
    )

Chroma.from_documents(
    documents,
    embedding=OpenAIEmbeddings(),
    collection_name='my_collection',
    persist_directory=f'embeddings/my_collection'
)
```
### Expected behavior
Running the above snippet in both scenarios should only call the embeddings function for newly added or changed docs, not for everything.
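What I would expect to be able to do is reopen the persisted collection and add only the new documents (a sketch; whether this avoids re-embedding the existing docs is exactly my question):

```python
db = Chroma(
    collection_name="my_collection",
    embedding_function=OpenAIEmbeddings(),
    persist_directory="embeddings/my_collection",
)
db.add_documents(new_documents)  # only these should be embedded
```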
| Embeddings are regenerated for entire vector db on updating collections | https://api.github.com/repos/langchain-ai/langchain/issues/8957/comments | 1 | 2023-08-09T10:24:17Z | 2023-08-31T14:33:51Z | https://github.com/langchain-ai/langchain/issues/8957 | 1,842,933,508 | 8,957 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Can you please help me with connecting my LangChain agent to a MongoDB database?
I have connected my relational database as shown below:
db = SQLDatabase.from_uri("mysql_db_url")
The above approach does not work for MongoDB because it is a NoSQL database.
Can anyone help me do this?
Thank you.
### Suggestion:
_No response_ | lang chain connection with mongo DB | https://api.github.com/repos/langchain-ai/langchain/issues/8956/comments | 11 | 2023-08-09T09:53:54Z | 2024-02-27T16:08:10Z | https://github.com/langchain-ai/langchain/issues/8956 | 1,842,884,084 | 8,956 |
[
"hwchase17",
"langchain"
]
| ### System Info
Name: langchain
Version: 0.0.251
Name: faiss-cpu
Version: 1.7.1
Name: llama-cpp-python
Version: 0.1.77
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
import gradio as gr
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import HuggingFacePipeline
from langchain.document_loaders import DirectoryLoader
from langchain.document_loaders import UnstructuredWordDocumentLoader
from torch import cuda, bfloat16
from transformers import StoppingCriteria, StoppingCriteriaList
from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings import LlamaCppEmbeddings
template = """Question: {question}
Answer:"""
prompt = PromptTemplate(template=template, input_variables=["question"])
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
n_gpu_layers = 42 # Change this value based on your model and your GPU VRAM pool.
n_batch = 1024 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
embeddings = LlamaCppEmbeddings(model_path="llama-2-7b-chat/7B/ggml-model-q4_0.bin",
n_gpu_layers=n_gpu_layers,
n_batch=n_batch)
llm = LlamaCpp(
model_path="llama-2-7b-chat/7B/ggml-model-q4_0.bin",
n_gpu_layers=n_gpu_layers,
n_batch=n_batch,
callback_manager=callback_manager,
verbose=True,
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
txt_loader = DirectoryLoader("doc", glob="./*.docx", loader_cls=UnstructuredWordDocumentLoader)
documents = txt_loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=20)
all_splits = text_splitter.split_documents(documents)
vectorstore = FAISS.from_documents(all_splits, embeddings)
query = "How is it going?"
search = vectorstore.similarity_search(query, k=5)
template = '''Context: {context}
Based on Context provide me answer for following question
Question: {question}
Tell me the information about the fact. The answer should be from context only
do not use general knowledge to answer the query'''
prompt = PromptTemplate(input_variables=["context", "question"], template= template)
final_prompt = prompt.format(question=query, context=search)
result = llm_chain.run(final_prompt)
print(result)
```
I get error:
```
llama_tokenize_with_model: too many tokens
Traceback (most recent call last):
File "test_ggml.py", line 57, in <module>
vectorstore = FAISS.from_documents(all_splits, embeddings)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/langchain/vectorstores/base.py", line 420, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/langchain/vectorstores/faiss.py", line 577, in from_texts
embeddings = embedding.embed_documents(texts)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/langchain/embeddings/llamacpp.py", line 110, in embed_documents
embeddings = [self.client.embed(text) for text in texts]
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/langchain/embeddings/llamacpp.py", line 110, in <listcomp>
embeddings = [self.client.embed(text) for text in texts]
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/llama_cpp/llama.py", line 812, in embed
return list(map(float, self.create_embedding(input)["data"][0]["embedding"]))
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/llama_cpp/llama.py", line 776, in create_embedding
self.eval(tokens)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/llama_cpp/llama.py", line 471, in eval
self.input_ids[self.n_tokens : self.n_tokens + n_tokens] = batch
ValueError: could not broadcast input array from shape (179,) into shape (0,)
Exception ignored in: <function Llama.__del__ at 0x7f68eedc9af0>
Traceback (most recent call last):
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/llama_cpp/llama.py", line 1508, in __del__
TypeError: 'NoneType' object is not callable
Exception ignored in: <function Llama.__del__ at 0x7f68eedc9af0>
Traceback (most recent call last):
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/llama_cpp/llama.py", line 1508, in __del__
TypeError: 'NoneType' object is not callable
```
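A note on a likely cause (my assumption, not yet verified): `LlamaCppEmbeddings` defaults to a 512-token context (`n_ctx`), so 1000-character chunks can exceed what the embedding model accepts, which would explain the `llama_tokenize_with_model: too many tokens` line. A minimal sketch of the settings I plan to try:
```python
# Sketch only: raise the embedding context window and use smaller chunks.
from langchain.embeddings import LlamaCppEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter

embeddings = LlamaCppEmbeddings(
    model_path="llama-2-7b-chat/7B/ggml-model-q4_0.bin",
    n_ctx=2048,        # default is 512, which 1000-char chunks can overflow
    n_gpu_layers=42,
    n_batch=1024,
)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=20)
```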
### Expected behavior
LlamaCppEmbeddings should work with FAISS and build the vector store without errors. | LlamaCppEmbeddings not working with faiss | https://api.github.com/repos/langchain-ai/langchain/issues/8955/comments | 1 | 2023-08-09T09:23:46Z | 2023-08-11T03:17:28Z | https://github.com/langchain-ai/langchain/issues/8955 | 1,842,833,550 | 8,955
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I believe one of the most popular directions for langchain-based applications will be to parse a query in natural language and to use some external API for the response generation. However, when this external API doesn't have an OpenAISchema-based specification, the easiest way to do it will be via chain parsing + a request.
I tried to implement it via a sequential chain; the example is provided below:
```
EXCHANGE_API_KEY = SOME_KEY
from langchain.chains import LLMRequestsChain, LLMChain, SimpleSequentialChain
from langchain.chains.openai_functions.openapi import get_openapi_chain
from langchain.prompts import PromptTemplate, SystemMessagePromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
template = """You help to convert currencies.
You need to convert the input phrase into the query like from=EUR&to=GBP&amount=100"
You need to extract these parameters yourself from the query in natural language.
Please, return the string like https://api.currencylayer.com/convert?from=EUR&to=GBP&amount=100, where you need to fill in "from", "to", and "amount" based on previous instructions"
"""
system_prompt = SystemMessagePromptTemplate.from_template(template)
human_message_prompt = HumanMessagePromptTemplate.from_template("{question}")
first_prompt = ChatPromptTemplate(
messages=[
system_prompt,
human_message_prompt
]
)
## this one above may be done nicer, via PromptTemplate, but I don't think it's necessary here
template_2 = """
Query api with {synopsis} and return api response
"""
llm = OpenAI(temperature=0)
convert_chain = LLMChain(llm=llm, prompt=first_prompt, output_key="synopsis")
system_prompt_2 = PromptTemplate(input_variables=["synopsis"], template=template_2)
chain = LLMRequestsChain(llm_chain=LLMChain(llm=OpenAI(temperature=0), prompt=system_prompt_2))
overall_chain = SimpleSequentialChain(chains=[convert_chain, chain], verbose=True)
question = "Convert 1000 American Dollars to euros please"
inputs = {"input": question,
#"url": "https://api.currencylayer.com/convert",
"headers": {"access_key": EXCHANGE_API_KEY}
}
overall_chain(inputs)
```
The code above fails due to _requests_ library stacktrace: `InvalidSchema: No connection adapters were found for 'System: [https://api.currencylayer.com/convert?from=USD&to=EUR&amount=1000'](https://api.currencylayer.com/convert?from=USD&to=EUR&amount=1000%27)`
It happens because the string "System: " is inevitably attached to the output of the first chain.
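One workaround I'm considering (a sketch under the assumption that stripping the prefix is enough; the `TransformChain` usage may need adjusting):
```python
# Sketch: strip the "System: " prefix between the two chains with a TransformChain.
from langchain.chains import TransformChain

def _strip_system_prefix(inputs: dict) -> dict:
    url = inputs["synopsis"].strip()
    if url.startswith("System:"):
        url = url[len("System:"):].strip()
    return {"url": url}

strip_chain = TransformChain(
    input_variables=["synopsis"], output_variables=["url"], transform=_strip_system_prefix
)
# overall_chain = SimpleSequentialChain(chains=[convert_chain, strip_chain, chain], verbose=True)
```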
So, my questions are as follows:
1. Is the way I showed above right now the best option to interact with the APIs which have not been implemented yet to langchain and haven't got OpenAISchema-based docs?
2. If so, how do I deal with my problem and strip the "System: " part from the query? Is there a natural way to parse the natural language query into an API request via this chain?
### Suggestion:
I think there should be a very clear way, and documentation, for dealing with external APIs after the natural language parsing is done by the generative model. | Issue: unclear processing of natural language query parsing + external api querying afterwards | https://api.github.com/repos/langchain-ai/langchain/issues/8953/comments | 3 | 2023-08-09T08:51:52Z | 2023-11-16T16:06:36Z | https://github.com/langchain-ai/langchain/issues/8953 | 1,842,781,388 | 8,953
[
"hwchase17",
"langchain"
]
| ### System Info
I am working with Hugging Face open-source models for SQL generation with LangChain.
The model works without a connected DB, but when I connect the DB, the chain generates inaccurate answers and uses unknown columns.
I am using the models below:
Wizard, Alpaca, Vicuna, Falcon
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
db_chain = SQLDatabaseChain.from_llm(llm, db=db, verbose=True, prompt=PROMPT)
db_chain.run(input('enter the question:'))
```
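One mitigation I am trying (my own sketch; connection string and table names are hypothetical): restrict the schema the chain sees so the model cannot pick columns from tables it should not touch.
```python
# Sketch: limit the table info passed into the prompt and include a few sample rows.
from langchain import SQLDatabase

db = SQLDatabase.from_uri(
    "sqlite:///my.db",                   # hypothetical connection string
    include_tables=["orders", "users"],  # hypothetical table names
    sample_rows_in_table_info=2,
)
db_chain = SQLDatabaseChain.from_llm(llm, db=db, verbose=True, prompt=PROMPT)
```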
### Expected behavior
I need to connect the database to my open-source model using LangChain.
When I ask a question to my chain, the chain should generate the correct SQL query, taking my database schema into consideration. | schema consideration while generating query | https://api.github.com/repos/langchain-ai/langchain/issues/8950/comments | 5 | 2023-08-09T06:00:14Z | 2023-12-08T16:05:40Z | https://github.com/langchain-ai/langchain/issues/8950 | 1,842,545,419 | 8,950
[
"hwchase17",
"langchain"
]
| ### Feature request
Dear [langchain](https://github.com/langchain-ai/langchain) developer,
Greetings! I am Jimmy, a community developer and volunteer at InternLM. Your work has been immensely beneficial to me, and I believe it can be effectively utilized in InternLM as well. You are welcome to join our Discord: https://discord.gg/gF9ezcmtM3. I hope to get in touch with you.
Best regards,
Jimmy
### Motivation
Hope to get in touch
### Your contribution
Hope to get in touch | Hope to get in touch | https://api.github.com/repos/langchain-ai/langchain/issues/8949/comments | 4 | 2023-08-09T05:30:05Z | 2023-08-20T19:42:46Z | https://github.com/langchain-ai/langchain/issues/8949 | 1,842,513,770 | 8,949 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I got a 404 error while looking for the tabular documentation:
From link:
https://python.langchain.com/docs/use_cases/question_answering/
To link:
https://python.langchain.com/docs/use_cases/tabular
got 404:

### Idea or request for content:
_No response_ | DOC: Tabular doc not found | https://api.github.com/repos/langchain-ai/langchain/issues/8946/comments | 2 | 2023-08-09T02:03:58Z | 2023-11-21T02:13:34Z | https://github.com/langchain-ai/langchain/issues/8946 | 1,842,336,871 | 8,946 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Try https://api.python.langchain.com/en/latest/search.html?q=PubMed (a search on the "PubMed" string)
It returns too many lines.
It seems it converts every occurrence of the "PubMed" string into a link, which creates a large number of duplicate links.
### Idea or request for content:
Fix a script that generates the "Search results" page. | DOC: API Reference: bug in the Search | https://api.github.com/repos/langchain-ai/langchain/issues/8936/comments | 1 | 2023-08-08T21:28:15Z | 2023-11-14T16:05:11Z | https://github.com/langchain-ai/langchain/issues/8936 | 1,842,127,127 | 8,936 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I was using LangChain (map reduce) for summarization of longer documents with a local Hugging Face model. I am using local models as my work prohibits me from directly connecting to Hugging Face and/or OpenAI.
I am having some issues running the model. I am getting the following error:
"'HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /gpt2/resolve/main/tokenizer_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f11d2f511d0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))' thrown while requesting HEAD https://huggingface.co/gpt2/resolve/main/tokenizer_config.json"
My code is:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains.mapreduce import MapReduceChain
from langchain.docstore.document import Document
from langchain.document_loaders import PyPDFLoader
from langchain.llms import HuggingFacePipeline
import os
import torch

DEVICE = 0 if torch.cuda.is_available() else -1  # defined elsewhere in my script; added here so the snippet runs

model = AutoModelForSeq2SeqLM.from_pretrained("/home/projects/nlp_summarize/led-large-16384")
tokenizer2 = AutoTokenizer.from_pretrained("/home/projects/nlp_summarize/led-large-16384")
tokenizer2.deprecation_warnings["Asking-to-pad-a-fast-tokenizer"] = True

pipe = pipeline(
    "text2text-generation",
    model=model,
    tokenizer=tokenizer2,
    # max_new_tokens=4096,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15,
    device=DEVICE,
)
llm = HuggingFacePipeline(pipeline=pipe)

text_splitter = CharacterTextSplitter()
with open("stateoftheunion.txt") as f:
    data = f.read()
texts = text_splitter.split_text(data)
docs = [Document(page_content=t) for t in texts[:]]

chain = load_summarize_chain(llm, chain_type="map_reduce")
output_summary = chain.run(docs)
```
Even though I am not using any GPT-2 model, the code is still looking for a gpt2 model online.
Any idea why such behavior is happening and how can I avoid it?
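My current guess (an assumption, not confirmed against the LangChain source): the map-reduce summarize chain calls `llm.get_num_tokens(...)` to decide how to collapse intermediate summaries, and for non-OpenAI LLMs that default token counter downloads a GPT-2 tokenizer from the Hub — hence the `gpt2` request even though no GPT-2 model is used. A sketch of the offline workaround I want to try:
```python
# Sketch under the assumption above: force transformers/huggingface_hub to use only
# locally cached files, and split by the actual local tokenizer instead of GPT-2.
import os
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"

from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer2, chunk_size=1000, chunk_overlap=100
)
```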
### Suggestion:
_No response_ | Local huggingface model for summarization task | https://api.github.com/repos/langchain-ai/langchain/issues/8931/comments | 4 | 2023-08-08T19:23:10Z | 2024-01-30T00:41:16Z | https://github.com/langchain-ai/langchain/issues/8931 | 1,841,912,446 | 8,931 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The qdrant vector store currently has an async function for adding texts to the (aadd_texts) which only supports creating embeddings with the _generate_rest_batches function synchronously.
There could be the option to add a async embedding function and aadd_texts could maybe then have a parameter to create the embeddings async with an async version of _generate_rest_batches.
### Motivation
It would be nice to have the option to have an aadd_texts function with completely non blocking IO.
### Your contribution
I could try to implement this or support with the implementation. | Qdrant support for async embedding functions in aadd_texts | https://api.github.com/repos/langchain-ai/langchain/issues/8930/comments | 1 | 2023-08-08T18:27:50Z | 2023-11-14T16:06:19Z | https://github.com/langchain-ai/langchain/issues/8930 | 1,841,811,758 | 8,930 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.257
Python 3.8.17
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.llms.openai import AzureOpenAI
llm = AzureOpenAI(deployment_name="text-davinci-003", verbose=True)
llm._generate(["You are a chatbot. \nUser: How are you?\nBot:",
"You are a chatbot. \nUser: What is the weather like?\nBot:"], n=2)
```
### Expected behavior
The result should be the same as when the 'n' parameter is provided while initializing the 'AzureOpenAI' class. However, it is not handled when passed through the 'generate' function.
When n is "2", the generations only contain 1 example for each prompt, whereas I expect 2 examples for each. Additionally, the 2 responses for the first prompt are returned as the results of the two prompts, so the result of the second prompt is lost.
result:
```
[
[Generation(text=" I'm doing great, thanks for asking! How about you?", generation_info={'finish_reason': 'stop', 'logprobs': None
})
],
[Generation(text=" I'm doing well, thanks for asking! How about you?", generation_info={'finish_reason': 'stop', 'logprobs': None
})
]
]
```
This may be because when the 'create_llm_result' function is called, it does not receive the updated value of 'n': instead of 2, it sees 'n' as 1. When I manually set 'self.n' to 2, it returns the expected result.
code change:
```
def create_llm_result(
self, choices: Any, prompts: List[str], token_usage: Dict[str, int]
) -> LLMResult:
"""Create the LLMResult from the choices and prompts."""
generations = []
++ self.n = 2
for i, _ in enumerate(prompts):
sub_choices = choices[i * self.n : (i + 1) * self.n]
```
result:
```
[
[Generation(text=" I'm doing great, thanks for asking! How can I help you today?", generation_info={'finish_reason': 'stop', 'logprobs': None
}), Generation(text=" I'm doing well, thank you for asking! How can I help you?", generation_info={'finish_reason': 'stop', 'logprobs': None
})
],
[Generation(text=' The weather today is mostly sunny with a high of 70 degrees Fahrenheit and a low of 45 degrees Fahrenheit.', generation_info={'finish_reason': 'stop', 'logprobs': None
}), Generation(text=' The current weather is sunny and warm with temperatures in the high 70s.', generation_info={'finish_reason': 'stop', 'logprobs': None
})
]
]
``` | n hyperparameter in AzureOpenai is not updated | https://api.github.com/repos/langchain-ai/langchain/issues/8928/comments | 4 | 2023-08-08T18:03:51Z | 2023-11-29T09:49:10Z | https://github.com/langchain-ai/langchain/issues/8928 | 1,841,767,814 | 8,928 |
[
"hwchase17",
"langchain"
]
| ### System Info
Platform: MacOS
Python version: Python 3.10.12
LangChain version: 0.0.257
Azure Search package version: 1.0.0b2
Azure Search Document package version: 11.3.0
azure-search==1.0.0b2
azure-search-documents==11.3.0
langchain==0.0.257
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. pip install packages langchain, azure-search, azure-search-documents.
2. Try to create an instance of AzureSearch as described [here](https://python.langchain.com/docs/integrations/vectorstores/azuresearch). (Official LangChain documentation.)
3. Receive the error `AttributeError: module 'azure.search.documents.indexes.models._edm' has no attribute 'Single'` from the snippet below (see also the note after these steps):
```
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embeddings.embed_query,
)
```
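A note on a possible cause (my assumption; please verify the exact versions against the LangChain AzureSearch docs): `SearchFieldDataType.Single` only appears in the 11.4.0 pre-releases of `azure-search-documents`, so the 11.3.0 pin above would not have it. Installing a preview build, e.g. `pip install --pre "azure-search-documents>=11.4.0b6"`, may resolve the `AttributeError`.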
### Expected behavior
The AzureSearch instance should be created without any errors from LangChain dependencies. The _edm.py file in the installed azure-search-documents package does not define Single as a data type. | SearchFieldDataType.Single raises error in azuresearch.py, under AzureSearch() class. | https://api.github.com/repos/langchain-ai/langchain/issues/8917/comments | 4 | 2023-08-08T14:27:07Z | 2023-08-11T14:08:16Z | https://github.com/langchain-ai/langchain/issues/8917 | 1,841,431,623 | 8,917
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version - 0.0.257
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I run the following:
```
llm = HuggingFaceTextGenInference(
inference_server_url="http://" + settings.self_hosted_server_ip+":8080",
max_new_tokens=2000,
top_k=5,
top_p=0.95,
typical_p=0.95,
temperature=0.01,
repetition_penalty=1.23,
callbacks = callbacks,
stream = True
)
my_chain = LLMChain(
llm=llm, # type: ignore
prompt=prompt,
verbose=False,
)
tools = [PythonAstREPLTool()]
custom_agent = LLMSingleActionAgent(
llm_chain=my_chain,
output_parser=my_output_parser,
stop=["\nObservation:"],
)
agent_memory = CustomConversationTokenBufferMemory(
k=1,
llm=llm, # type: ignore
max_token_limit=4096,
memory_key="history"
)
return AgentExecutor.from_agent_and_tools(
agent=custom_agent,
tools=tools,
verbose=True,
memory=agent_memory,
)
```
where the HF inference endpoint is running with the following Docker command:
```
model=meta-llama/Llama-2-70b-chat-hf
docker run -d --gpus all --shm-size 1g -e HUGGING_FACE_HUB_TOKEN=$token -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --max-input-length 3000 --max-total-tokens 4096 --quantize bitsandbytes --json-output --model-id $model --trust-remote-code
```
The error I get is:
**Token indices sequence length is longer than the specified maximum sequence length for this model (1286 > 1024). Running this sequence through the model will result in indexing errors**
But the Llama-2-70B model's token limit is 4096...
I tried to test using just:
```
llm_local = HuggingFaceTextGenInference(
inference_server_url=inference_server_url_local,
max_new_tokens=2000,
top_k=10,
top_p=0.95,
typical_p=0.95,
temperature=0.7,
repetition_penalty=1.03,
)
```
with:
```
llm_chain_local = LLMChain(prompt=prompt, llm=llm_local)
print(llm_chain_local("some query"))
```
and didn't get any error.
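For context, my current understanding of where the warning comes from (an assumption, added for clarity): for LLM wrappers that don't define their own tokenizer, LangChain's default `get_num_tokens` falls back to a GPT-2 tokenizer whose `model_max_length` is 1024, and the token-buffer memory triggers that counting. If so, the warning is about the counting tokenizer, not the Llama-2 context window, and should be harmless for the request itself.
```python
# Sketch illustrating the assumption: the 1024 in the warning matches GPT-2's limit.
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
print(tok.model_max_length)                         # 1024 - the number in the warning
print(len(tok.encode("some long prompt " * 500)))   # counting past 1024 only warns
```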
### Expected behavior
I am not expecting any token limit issue. | running HuggingFaceTextGenInference from an agent gives token limit warning | https://api.github.com/repos/langchain-ai/langchain/issues/8913/comments | 5 | 2023-08-08T12:32:10Z | 2024-01-30T00:41:16Z | https://github.com/langchain-ai/langchain/issues/8913 | 1,841,220,971 | 8,913
[
"hwchase17",
"langchain"
]
| ### System Info
`0.0.257`
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Load a document from GCS and check `metadata["source"]`; it points to the downloaded copy in the `/tmp` directory.
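A minimal reproduction sketch (bucket/blob names are hypothetical):
```python
from langchain.document_loaders import GCSFileLoader

loader = GCSFileLoader(project_name="my-project", bucket="my-bucket", blob="docs/report.pdf")
docs = loader.load()
print(docs[0].metadata["source"])  # currently something like /tmp/.../docs/report.pdf
```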
### Expected behavior
The source should point to the original file on GCS. | GCSFileLoader stores a wrong source in the metadata | https://api.github.com/repos/langchain-ai/langchain/issues/8911/comments | 2 | 2023-08-08T12:17:01Z | 2023-08-08T21:23:10Z | https://github.com/langchain-ai/langchain/issues/8911 | 1,841,196,684 | 8,911
[
"hwchase17",
"langchain"
]
| ### System Info
langchain = 0.0.251
Python = 3.10.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create an OWL ontology called `dbpedia_sample.ttl` with the following:
``` turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix wikidata: <http://www.wikidata.org/entity/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix : <http://dbpedia.org/ontology/> .
@prefix dul: <http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#> .
:Actor
a owl:Class ;
rdfs:comment "An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity."@en ;
rdfs:label "actor"@en ;
rdfs:subClassOf :Artist ;
owl:equivalentClass wikidata:Q33999 ;
prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:Actor> .
:AdministrativeRegion
a owl:Class ;
rdfs:comment "A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)"@en ;
rdfs:label "administrative region"@en ;
rdfs:subClassOf :Region ;
owl:equivalentClass <http://schema.org/AdministrativeArea>, wikidata:Q3455524 ;
prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:AdministrativeRegion> .
:birthPlace
a rdf:Property, owl:ObjectProperty ;
rdfs:comment "where the person was born"@en ;
rdfs:domain :Animal ;
rdfs:label "birth place"@en ;
rdfs:range :Place ;
rdfs:subPropertyOf dul:hasLocation ;
owl:equivalentProperty <http://schema.org/birthPlace>, wikidata:P19 ;
prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyProperty:birthPlace> .
```
2. Run
``` python
from langchain.graphs import RdfGraph
graph = RdfGraph(
source_file="dbpedia_sample.ttl",
serialization="ttl",
standard="owl"
)
print(graph.get_schema)
```
3. Output
```
In the following, each IRI is followed by the local name and optionally its description in parentheses.
The OWL graph supports the following node types:
<http://dbpedia.org/ontology/Actor> (Actor, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.),
<http://dbpedia.org/ontology/AdministrativeRegion> (AdministrativeRegion, A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration))
The OWL graph supports the following object properties, i.e., relationships between objects:
<http://dbpedia.org/ontology/birthPlace> (birthPlace, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.),
<http://dbpedia.org/ontology/birthPlace> (birthPlace, A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)), <http://dbpedia.org/ontology/birthPlace> (birthPlace, where the person was born)
The OWL graph supports the following data properties, i.e., relationships between objects and literals:
```
### Expected behavior
The issue is that in the SPARQL queries that retrieve the properties, the `rdfs:comment` triple pattern always refers to the variable `?cls`, which obviously comes from copy/pasted code.
For example, the RDFS properties are retrieved via
``` python
rel_query_rdf = prefixes["rdfs"] + (
"""SELECT DISTINCT ?rel ?com\n"""
"""WHERE { \n"""
""" ?subj ?rel ?obj . \n"""
""" OPTIONAL { ?cls rdfs:comment ?com } \n"""
"""}"""
)
```
you can see that the `OPTIONAL` clause refers to `?cls`, but it should be `?rel`.
The same holds for all other queries regarding properties.
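For clarity, the corrected pattern would look like this (the same query as above with only the variable fixed):
```python
rel_query_rdf = prefixes["rdfs"] + (
    """SELECT DISTINCT ?rel ?com\n"""
    """WHERE { \n"""
    """  ?subj ?rel ?obj . \n"""
    """  OPTIONAL { ?rel rdfs:comment ?com } \n"""
    """}"""
)
```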
The current status leads to a cartesian product of properties and all `rdfs:comment` values in the dataset, which can be horribly large and of course leads to misleading and huge prompts (see the output of my sample in the "reproduction" part). | RdfGraph schema retrieval queries for the relation types are not linked by the correct comment variable | https://api.github.com/repos/langchain-ai/langchain/issues/8907/comments | 1 | 2023-08-08T10:57:54Z | 2023-10-25T20:36:59Z | https://github.com/langchain-ai/langchain/issues/8907 | 1,841,077,028 | 8,907
[
"hwchase17",
"langchain"
]
| ### Feature request
Implement a Chat Message History class backed by Google Cloud Firestore.
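To make the request concrete, a rough sketch of what the interface could look like (my own illustration; class, collection, and field names are hypothetical):
```python
from typing import List, Optional

from google.cloud import firestore
from langchain.schema import (
    BaseChatMessageHistory,
    BaseMessage,
    messages_from_dict,
    messages_to_dict,
)


class FirestoreChatMessageHistory(BaseChatMessageHistory):
    """Sketch: one Firestore document per chat session."""

    def __init__(self, collection: str, session_id: str, client: Optional[firestore.Client] = None):
        self.client = client or firestore.Client()
        self.doc_ref = self.client.collection(collection).document(session_id)

    @property
    def messages(self) -> List[BaseMessage]:
        data = self.doc_ref.get().to_dict() or {}
        return messages_from_dict(data.get("messages", []))

    def add_message(self, message: BaseMessage) -> None:
        history = messages_to_dict(self.messages) + messages_to_dict([message])
        self.doc_ref.set({"messages": history})

    def clear(self) -> None:
        self.doc_ref.delete()
```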
### Motivation
It's a common No-SQL database, used by a lot of people to build MVPs, due to its friendly pricing.
### Your contribution
I'd submit a PR to implement this, if you guys think that it could be a helpful feature | Cloud Firestore Chat Message History | https://api.github.com/repos/langchain-ai/langchain/issues/8906/comments | 3 | 2023-08-08T10:52:17Z | 2024-07-14T18:14:33Z | https://github.com/langchain-ai/langchain/issues/8906 | 1,841,068,385 | 8,906 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I'm looking for a way to add entries to a vector store when the embeddings already exist and I don't want to calculate them again. However, it seems like this is not possible with langchain at the moment. Maybe something like an `add_entries()` function would be nice, where the data needs to be exactly in the form in which it's stored in the DB.
### Motivation
What if you want to add entries but you already have embeddings? What if you need to calculate embeddings separately from the "add to db" step?
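Worth noting (to the best of my knowledge — treat as an assumption to verify): the FAISS vector store already exposes something close to this via `from_embeddings` / `add_embeddings`, which accept precomputed (text, embedding) pairs; a generic equivalent across all vector stores seems to be what's missing. Rough sketch:
```python
# Sketch: FakeEmbeddings is only used so the example runs without an API key;
# the vectors themselves are supplied precomputed.
from langchain.embeddings import FakeEmbeddings
from langchain.vectorstores import FAISS

pairs = [("some text", [0.1, 0.2, 0.3])]          # embeddings computed elsewhere
vs = FAISS.from_embeddings(text_embeddings=pairs, embedding=FakeEmbeddings(size=3))
vs.add_embeddings(text_embeddings=[("more text", [0.4, 0.5, 0.6])])
```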
### Your contribution
I already built a workaround for my own problem but it's very hacky. It would be nice if langchain could do this natively. | VectorStore: Add entries by providing embeddings and not an embedding function | https://api.github.com/repos/langchain-ai/langchain/issues/8905/comments | 1 | 2023-08-08T09:35:08Z | 2023-11-14T16:05:19Z | https://github.com/langchain-ai/langchain/issues/8905 | 1,840,945,541 | 8,905 |
[
"hwchase17",
"langchain"
]
| ### System Info
Name: langchain
Version: 0.0.256
platform: Linux
Python 3.9.16
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
There is a mistake in `langchain/llms/chatglm.py`:

in line 128, where:
**`self.history = self.history + [[None, parsed_response["response"]]]`**
when you run this code, the log file shows like this:
2023-08-08 16:44:26,370 - ChatGLM - Query - 谢谢
2023-08-08 16:44:26,370 - ChatGLM - History - [['我将从美国到中国来旅游,出行前希望了解中国的城市', '欢迎问我任何问题。'], [None, '我是一个人工智能助手,我将尽力回答您的问题。'], [None, '我能回答各种问题、提供信息、帮助解决问题等。']]
Actually, it should be:
**`self.history = self.history + [[prompt, parsed_response["response"]]]`**
In this way, my input prompt can be recorded in the list correctly!
### Expected behavior
Record my input prompt in the list together with the corresponding response as history. | There is a bug in [Integration]-->[ChatGLM] | https://api.github.com/repos/langchain-ai/langchain/issues/8904/comments | 2 | 2023-08-08T09:08:19Z | 2023-11-14T16:05:28Z | https://github.com/langchain-ai/langchain/issues/8904 | 1,840,899,526 | 8,904
[
"hwchase17",
"langchain"
]
| ### System Info
langchain v0.0.254
python 3.11
### Who can help?
In this commit https://github.com/langchain-ai/langchain/commit/81e5b1ad362e9e6ec955b6a54776322af82050d0#diff-5d00673e4963a0b2c6bf091d22f98c30d267b20fea4d15d9541ba6d1f5d79e4fR20 inheritance from `Serializable` was introduced. This inheritance is the root of the problem. The commit was by @nfcampos. Perhaps you can help?
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Simply run this code:
``` python
from typing import Protocol
from langchain.schema.retriever import BaseRetriever
class Foo(Protocol):
bar: int
class IAmAFailure(BaseRetriever):
foo: Foo
```
You will then see the error:
```
Traceback (most recent call last):
File "/home/yolen/Desktop/langchain_bug.py", line 9, in <module>
class IAmAFailure(Serializable):
File "pydantic/main.py", line 197, in pydantic.main.ModelMetaclass.__new__
File "pydantic/fields.py", line 506, in pydantic.fields.ModelField.infer
File "pydantic/fields.py", line 436, in pydantic.fields.ModelField.__init__
File "pydantic/fields.py", line 552, in pydantic.fields.ModelField.prepare
File "pydantic/fields.py", line 639, in pydantic.fields.ModelField._type_analysis
File "/usr/local/lib/python3.11/typing.py", line 1960, in __instancecheck__
raise TypeError("Instance and class checks can only be used with"
TypeError: Instance and class checks can only be used with @runtime_checkable protocols
```
Adding the `@runtime_checkable` decorator does not help (just changes the error). To my understanding, this problem is cased by the inheritance of `Serializable` in the new release https://github.com/langchain-ai/langchain/blob/fa30a57034b6359e8de36bbb98766d2214acfcbd/libs/langchain/langchain/schema/retriever.py#L21.
### Expected behavior
I would expect that I can use Protocols with retrievers. In my use case, I, among others, inject a class that can compute a query embedding. The `QueryEmbedding` class is a protocol e.g.
```python
class QueryEmbedder(Protocol):
async def get_embeddings(self, *, text: str) -> list[TextEmbedding]:
...
class FooRetriever(BaseRetriever):
    def __init__(self, *, query_embedder: QueryEmbedder) -> None:
        self.query_embedder = query_embedder

    async def _aget_relevant_documents(
        self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun
    ) -> list[Document]:
        query_embedding = await self.query_embedder.get_embeddings(text=query)
        ...
```
Protocols are very good at reducing the coupling in your code base. Furthermore, I do think that using multiple inheritance will cause problems at some stage. _Composition over inheritance_ | Retrievers inheriting from BaseRetriever are incompatible with typing.Protocol | https://api.github.com/repos/langchain-ai/langchain/issues/8901/comments | 2 | 2023-08-08T05:50:30Z | 2023-11-14T16:07:42Z | https://github.com/langchain-ai/langchain/issues/8901 | 1,840,608,946 | 8,901 |
[
"hwchase17",
"langchain"
]
| ### System Info
I use OpenAIEmbeddings from langchain.embeddings with the OpenAI embedding function.
However, I find it's a problem that when we call the OpenAI interface, the input is a 2D list, not a 1D list.
If the input is a 1D list, it works for the embedding.create function,
like this:

And I found out that the OpenAI API does not support a 2D list as input.


How could I solve this problem?

The error is shown below:
```
Traceback (most recent call last):
File "..\main.py", line 47, in <module>
langchain_openai()
File "..\main.py", line 42, in langchain_openai
query_result = embeddings.embed_query(text)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\langchain\embeddings\openai.py", line 501, in embed_query
return self.embed_documents([text])[0]
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\langchain\embeddings\openai.py", line 473, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\langchain\embeddings\openai.py", line 359, in _get_len_safe_embeddings
response = embed_with_retry(
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\langchain\embeddings\openai.py", line 108, in embed_with_retry
return _embed_with_retry(**kwargs)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\tenacity\__init__.py", line 314, in iter
return fut.result()
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\concurrent\futures\_base.py", line 437, in result
return self.__get_result()
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\concurrent\futures\_base.py", line 389, in __get_result
raise self._exception
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\langchain\embeddings\openai.py", line 105, in _embed_with_retry
response = embeddings.client.create(**kwargs)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\openai\api_resources\embedding.py", line 33, in create
response = super().create(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\openai\api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "C:\ProgramData\Anaconda3\envs\python3.8\lib\site-packages\openai\api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: parse parameter error: type mismatch
```
### Who can help?
@hwchase17 @agola11
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

```
def langchain_openai():
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
model="text-embedding-ada-002",
openai_api_base="",
openai_api_key=""
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
print(query_result[:5])
```
### Expected behavior
I think it's a bug or something to fix | Type Mismatch For using OpenAI Interface | https://api.github.com/repos/langchain-ai/langchain/issues/8899/comments | 4 | 2023-08-08T03:37:55Z | 2023-08-08T10:56:43Z | https://github.com/langchain-ai/langchain/issues/8899 | 1,840,506,624 | 8,899 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Hello, how can I extend the integrated components, such as the SQLDatabaseToolkit class, so that after the existing tools run I can continue with other work? I checked the following code in the SQLDatabaseToolkit class:
```python
def get_tools(self) -> List[BaseTool]:
    """Get the tools in the toolkit."""
    return [
        QuerySQLDataBaseTool(db=self.db),
        InfoSQLDatabaseTool(db=self.db),
        ListSQLDatabaseTool(db=self.db),
        QueryCheckerTool(db=self.db, llm=self.llm),
    ]
```
How can I add my own tool class to extend it?
I tried to subclass BaseTool according to the instructions in the documentation, and then override the get_tools method of the SQLDatabaseToolkit class to add my custom BaseTool. However, it failed.
I'm also considering how to extend the integration for other components to achieve my task scenario.
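A minimal sketch of the pattern I'm attempting (class and tool names are hypothetical, not working code from my project):
```python
from typing import List

from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.tools import BaseTool


class MyAuditTool(BaseTool):
    """Hypothetical extra tool that runs after the SQL tools."""

    name: str = "audit_result"
    description: str = "Post-processes a SQL query result."

    def _run(self, query: str) -> str:
        return f"audited: {query}"

    async def _arun(self, query: str) -> str:
        raise NotImplementedError("async not supported")


class ExtendedSQLDatabaseToolkit(SQLDatabaseToolkit):
    def get_tools(self) -> List[BaseTool]:
        # reuse the built-in SQL tools and append the custom one
        return super().get_tools() + [MyAuditTool()]
```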
### Idea or request for content:
_No response_ | how to expand the integrated content | https://api.github.com/repos/langchain-ai/langchain/issues/8898/comments | 5 | 2023-08-08T03:33:04Z | 2023-11-14T16:05:37Z | https://github.com/langchain-ai/langchain/issues/8898 | 1,840,501,592 | 8,898 |
[
"hwchase17",
"langchain"
]
| ### Feature request
In Qdrant vectorstore, we have a method:
```
def retrieve(
self,
collection_name: str,
ids: Sequence[types.PointId],
with_payload: Union[bool, Sequence[str], types.PayloadSelector] = True,
with_vectors: Union[bool, Sequence[str]] = False,
consistency: Optional[types.ReadConsistency] = None,
) -> List[types.Record]:
"""Retrieve stored points by IDs
Args:
collection_name: Name of the collection to lookup in
ids: list of IDs to lookup
with_payload:
- Specify which stored payload should be attached to the result.
- If `True` - attach all payload
- If `False` - do not attach any payload
- If List of string - include only specified fields
- If `PayloadSelector` - use explicit rules
with_vectors:
- If `True` - Attach stored vector to the search result.
- If `False` - Do not attach vector.
- If List of string - Attach only specified vectors.
- Default: `False`
consistency:
Read consistency of the search. Defines how many replicas should be queried before returning the result.
Values:
- int - number of replicas to query, values should present in all queried replicas
- 'majority' - query all replicas, but return values present in the majority of replicas
- 'quorum' - query the majority of replicas, return values present in all of them
- 'all' - query all replicas, and return values present in all replicas
Returns:
List of points
"""
```
I have not seen a similar method in the FAISS implementation; how can I retrieve vectors (the equivalent of `with_vectors=True`) from the FAISS vector store?
### Motivation
I am using the FAISS vectorstore, and now:
1. I used FAISS to add some docs, which returned doc ids;
2. I want a method like Qdrant's to retrieve the embeddings for these doc ids.
Here is the similar Qdrant code:
```
ids_inserted = qdrant_client.add_texts(
[description],
[{
"source": "todo"
}],
)
# now I want to get the embeddings
records_inserted = qdrant_client.retrieve(
collection_name="test",
ids=ids_inserted,
with_vectors=True
)
self.embedding = records_inserted[0].vector
```
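A FAISS-side workaround I've been considering (an unverified assumption about the internals): the LangChain FAISS store keeps the raw `faiss` index plus an `index_to_docstore_id` mapping, so the vector behind a returned id could be reconstructed from its index position:
```python
# Sketch: vs is a LangChain FAISS vectorstore, doc_id is one id returned by add_texts.
# index_to_docstore_id maps faiss positions -> docstore ids, so invert it first.
position = next(pos for pos, _id in vs.index_to_docstore_id.items() if _id == doc_id)
embedding = vs.index.reconstruct(position)  # numpy array with the stored vector
```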
### Your contribution
Sorry, no. | How to retrieve vectors by ids for LangChain vectorstore FAISS? | https://api.github.com/repos/langchain-ai/langchain/issues/8897/comments | 6 | 2023-08-08T03:26:19Z | 2023-08-09T08:15:18Z | https://github.com/langchain-ai/langchain/issues/8897 | 1,840,497,093 | 8,897 |
[
"hwchase17",
"langchain"
]
| ### System Info
v0.0.256
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Add `print(f"chunk-size: {_chunk_size}")` in OpenAIEmbeddings._get_len_safe_embeddings after `_chunk_size = chunk_size or self.chunk_size`, and pass a chunk_size argument to embed_documents other than the default value of 1000. 1000 will be printed instead of the chunk_size passed as an argument.
### Expected behavior
chunk_size arg is not used in OpenAIEmbeddings's embed_documents and aembed_documents methods. It should be passed to self._get_len_safe_embeddings.
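A sketch of the change I would expect (illustrative only, based on the current method signatures):
```python
def embed_documents(
    self, texts: List[str], chunk_size: Optional[int] = 0
) -> List[List[float]]:
    # forward chunk_size instead of silently ignoring it
    return self._get_len_safe_embeddings(
        texts, engine=self.deployment, chunk_size=chunk_size
    )
```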
Check: https://github.com/langchain-ai/langchain/blob/v0.0.256/libs/langchain/langchain/embeddings/openai.py#L463 | chunk_size arg is not used in OpenAIEmbeddings's embed_documents and aembed_documents methods | https://api.github.com/repos/langchain-ai/langchain/issues/8887/comments | 2 | 2023-08-07T21:34:25Z | 2023-11-14T16:06:01Z | https://github.com/langchain-ai/langchain/issues/8887 | 1,840,237,698 | 8,887 |
[
"hwchase17",
"langchain"
]
| ### Feature request
## Description
Add format/lint actions to the pre-commit tool, so user's don't have to run these locally on their machines. This is a 2 stage process:
1. Make this part of the local commit process. This will ensure that all commits run these explicitly before pushing the change.
2. Adding this to the CI will ensure that users who choose to ignore setting up the pre-commit in their local repo, can rely on the CI to fix some of the formatting and linting (fixable). This also avoids some of the system specific issues/differences that users might end up having on their local machines (stale versions of lint tools) by ensuring that the CI is running these on one consistent up-to-date platform.
### Motivation
The current manual linting process adds churn and friction to the dev workflow as most devs end up pushing their changes without running these steps, and rely on the CI to remind them on running them. Some of the formatting and linting can be automated with CI, and does not require a dev's direct involvement.
### Your contribution
I am happy to work on this, but would need the core team's input before going ahead with this change. | Simplify linting workflow by adding to pre-commit | https://api.github.com/repos/langchain-ai/langchain/issues/8884/comments | 3 | 2023-08-07T20:57:44Z | 2023-11-16T16:06:26Z | https://github.com/langchain-ai/langchain/issues/8884 | 1,840,197,092 | 8,884 |
[
"hwchase17",
"langchain"
]
| ### System Info
Based on https://github.com/hwchase17/conversational-retrieval-agent/blob/master/streamlit.py, I made a few changes to the code.
Since the beginning of the tests I could not prevent the max-content-length error.
The best change was using:
memory = AgentTokenBufferMemory(memory_key="history", llm=llm, max_history=5, max_token_limit= 3000)
llm = ChatOpenAI(temperature=0, streaming=True, model="gpt-3.5-turbo",max_tokens=500)
It took longer but I still got the error message. All the conversation was logged using LangSmith (username [email protected]): https://smith.langchain.com/projects/p/2e2c3411-5910-4785-985d-a4b31702e6c7/r/2893c334-9650-4125-99df-af6e18481b12
It seemed that the context was under control, but the latest question broke it.
Python version - 3.11.3
Langchain - 0.0.252
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
memory = AgentTokenBufferMemory(memory_key="history", llm=llm, max_history=5, max_token_limit=3000)
llm = ChatOpenAI(temperature=0, streaming=True, model="gpt-3.5-turbo", max_tokens=500)
```
### Expected behavior
All the conversation was logged using LangSmith , username [email protected] ,, https://smith.langchain.com/projects/p/2e2c3411-5910-4785-985d-a4b31702e6c7/r/2893c334-9650-4125-99df-af6e18481b12 | conversational-retrieval-agent using AgentTokenBufferMemory : Still cannot prevent maximum content length errors | https://api.github.com/repos/langchain-ai/langchain/issues/8881/comments | 3 | 2023-08-07T20:04:21Z | 2023-08-11T12:21:33Z | https://github.com/langchain-ai/langchain/issues/8881 | 1,840,131,332 | 8,881 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.256
Python 3.8
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I've developed a Next.js application that uses the Langchain library for chat functionality and is deployed on AWS Amplify. The application works perfectly when running locally, but fails after deployment on AWS Amplify.
The application uses the Langchain library, OpenAIEmbeddings for generating embeddings, and PineconeStore for storing vectors. I've ensured that my environment variables are correctly set up both locally and in the AWS Amplify console.
Here is the code snippet for my chat handler:
```
import type { NextApiRequest, NextApiResponse } from 'next';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { PineconeStore } from 'langchain/vectorstores/pinecone';
import { makeChain } from '@/utils/makechain';
import { pinecone } from '@/utils/pinecone-client';
import { PINECONE_INDEX_NAME, PINECONE_NAME_SPACE } from '@/config/pinecone';
export default async function handler(
req: NextApiRequest,
res: NextApiResponse,
) {
const { question, history } = req.body;
if (req.method !== 'POST') {
res.status(405).json({ error: 'Method not allowed' });
return;
}
if (!question) {
return res.status(400).json({ message: 'No question in the request' });
}
const sanitizedQuestion = question.trim().replaceAll('\n', ' ');
try {
const index = pinecone.Index(PINECONE_INDEX_NAME);
const vectorStore = await PineconeStore.fromExistingIndex(
new OpenAIEmbeddings(),
{
pineconeIndex: index,
textKey: 'text',
namespace: PINECONE_NAME_SPACE,
},
);
const chain = makeChain(vectorStore);
const response = await chain.call({
question: sanitizedQuestion,
chat_history: history || [],
});
res.status(200).json(response);
} catch (error: any) {
console.log('chat.ts file error: ', error);
res.status(500).json({ error: error.message || 'Something went wrong' });
}
}
```
### Expected behavior
The error message I'm receiving suggests that the Langchain library is assuming I am running on an Azure instance and is expecting an Azure-specific environment variable. However, I am not using Azure, I'm using AWS. The error message states that an 'azureOpenAIApiInstanceName' is missing, which, as I understand, is only relevant if I was using Azure.
Has anyone encountered a similar issue, or have any insights into why this might be happening? I've been unable to find any information in the Langchain library documentation about this Azure dependency. | Unexpected Azure Dependency with OpenAI and Langchain Library in Next.js App Deployed on AWS Amplify | https://api.github.com/repos/langchain-ai/langchain/issues/8877/comments | 5 | 2023-08-07T18:07:37Z | 2023-09-01T10:55:18Z | https://github.com/langchain-ai/langchain/issues/8877 | 1,839,976,644 | 8,877 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi, I'm using ConversationalRetrievalChain.from_llm and I notice that sometimes the answer is the follow-up question itself.
Also, sometimes the answer contains a few Q&As all together.
Anyone with the same problem?
Regards.
### Suggestion:
_No response_ | Issue: ConversationalRetrievalChain.from_llm sometime answer with the following up question | https://api.github.com/repos/langchain-ai/langchain/issues/8875/comments | 5 | 2023-08-07T17:57:21Z | 2024-04-01T15:48:55Z | https://github.com/langchain-ai/langchain/issues/8875 | 1,839,964,107 | 8,875 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.250
Python 3.11.3
22.5.0 Darwin Kernel Version 22.5.0
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
This code runs perfectly well if `agenerate` is replaced with `_agenerate`, meaning that the super function is not using the internal implementation correctly.
```
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate

max_tokens = 256  # example value added so the snippet runs

llm = ChatOpenAI(temperature=0.7, max_tokens=max_tokens)
messages = [
    SystemMessagePromptTemplate.from_template("You are a funny chatbot"),
    HumanMessagePromptTemplate.from_template("Tell me a joke about {topic}")
]
chat_prompt = ChatPromptTemplate.from_messages(
    messages
)
formatted_messages = chat_prompt.format_prompt(
topic='horses'
).to_messages()
async def async_generate(llm, formatted_messages):
return await llm.agenerate(messages=formatted_messages)
res = await async_generate(llm, formatted_messages)
```
### Expected behavior
agenerate should work the same as _agenerate | ChatOpenAI agenerate does not use internal _agenerate and does not support message roles | https://api.github.com/repos/langchain-ai/langchain/issues/8874/comments | 1 | 2023-08-07T17:26:43Z | 2023-09-25T09:33:50Z | https://github.com/langchain-ai/langchain/issues/8874 | 1,839,922,495 | 8,874 |
[
"hwchase17",
"langchain"
]
I think most people who work with LangChain to build products or automate tasks have noticed how badly the quality of answers degrades when you ask multiple questions using the same chain (i.e., the chain gets longer). It answers the first 2 or 3 questions right, then it may hallucinate and return a wrong answer, fail to answer your question, or even crash, regardless of the GPT model used.
Has anyone found a way to optimize this? One thing I always try is prompt engineering, but it doesn't help much. | Quality of answers gets drastically bad with longer chains in Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/8871/comments | 5 | 2023-08-07T15:40:13Z | 2023-11-19T16:05:51Z | https://github.com/langchain-ai/langchain/issues/8871 | 1,839,744,856 | 8,871
[
"hwchase17",
"langchain"
]
| ### Feature request
Agents can deal with prompts in different languages, but they might get confused or more unreliable if the internal prompts (e.g. format structure with the action, action input, thought etc.) are still in English.
That's why I thought about an additional language option when initializing an agent. I tried to come up with an idea of how to implement this efficiently, but I found it particularly hard to implement it, so it can be used for every language.
My initial thought was to specify the necessary prompts in a different language and initialize the agent with an extra keyword. With this approach only a finite number of languages can be covered and it could end up in a mess to save all the prompts in different languages.
My second thought was to initialize the agent with a language keyword and then translate all the prompts in the desired language. That could be too much of a hustle for just initializing an agent with prompts that are actually known.
I wanted to share this and see if this is actually something that could be useful, and maybe gather some ideas of how to implement it.
### Motivation
I've implemented an MRKL agent that writes a detailed article on a specified topic based on internet search. In my specific case, it should write an article about topics related to Germany, e.g. the drought situation in Germany. Naively, my first attempt was to give it a prompt like _Write a detailed article about the drought situation in Germany. The article must be written in German language_.
I discovered two problems:
1. As I was giving the prompt in English, the model "thought" in English as well. Therefore, the action inputs were in English, too. This led to the result that all search results were also in English and primarily from sources (e.g. The Guardian, BBC) outside of Germany reporting about the drought situation in Germany. What I would have liked instead are search results from German newspapers or similar.
2. The text generation was quite inconsistent. It wasn't really able to write the final article in German, as all information was given in English. Sometimes it managed to write the text in German, but it then also wrote all intermediate steps (action, action input, observation, thought, etc.) in German. That led to the problem of not finding the tool (tool descriptions were also in English) and a resulting OutputParser error, because the English keywords _thought_, _action_, etc. could not be found.
So my goal was to create an agent that solely worked in the German language. Meaning, that the prompt, intermediate steps, search results and the final output are all in German.
So what I did was reimplementing all the necessary MRKL agent functions and templates in a German version. It worked out and the results were as expected and reliable.
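For context, the override I used follows this pattern (a rough sketch; the German strings here are shortened placeholders, and the custom output parser must match the translated keywords):
```python
# Sketch: supply translated MRKL prompt pieces instead of the English defaults.
from langchain.agents import ZeroShotAgent

GERMAN_FORMAT_INSTRUCTIONS = """Verwende das folgende Format:
Frage: die zu beantwortende Frage
Gedanke: überlege, was zu tun ist
Aktion: eine Aktion aus [{tool_names}]
Aktionseingabe: die Eingabe für die Aktion
Beobachtung: das Ergebnis der Aktion
Endgültige Antwort: die Antwort auf die ursprüngliche Frage"""

prompt = ZeroShotAgent.create_prompt(
    tools,  # the agent's tools, with German descriptions
    prefix="Beantworte die folgende Frage so gut du kannst.",
    format_instructions=GERMAN_FORMAT_INSTRUCTIONS,
    suffix="Frage: {input}\n{agent_scratchpad}",
    input_variables=["input", "agent_scratchpad"],
)
```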
I think that agents with support for different languages could be beneficial for more users: you could give the agent a prompt in another language and have the agent's internal prompts be in that language, too.
### Your contribution
I am not really sure how to do that in a good way, so that multiple languages are supported for all the agents, and if that's even desired. But if there is a good strategy, I would like to try to implement it. | Language support for Agents | https://api.github.com/repos/langchain-ai/langchain/issues/8867/comments | 7 | 2023-08-07T13:28:41Z | 2024-02-13T16:14:17Z | https://github.com/langchain-ai/langchain/issues/8867 | 1,839,461,670 | 8,867 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
When using the Supabase client I get dependency errors, which I believe originate from the pydantic library.
Supabase requires pydantic<3.0,>=2.1.0.
LangChain requires (I guess) 1.10.12.
With LangChain-compatible pydantic versions, Supabase errors out.
``` python
from supabase.client import Client, create_client
supabase_url = os.environ.get("SUPABASE_URL")
supabase_key = os.environ.get("SUPABASE_SERVICE_KEY")
supabase: Client = create_client(supabase_url, supabase_key)
```
ImportError: cannot import name 'field_validator' from 'pydantic'
But if I upgrade pydantic, langchain starts erroring out.
``` python
from langchain.vectorstores.pgvector import PGVector
```
PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.
Is there any quick fix for this?
Thanks
| Issue: Supabase python client pydantic dependency mismatch | https://api.github.com/repos/langchain-ai/langchain/issues/8866/comments | 15 | 2023-08-07T13:15:04Z | 2023-12-02T16:06:32Z | https://github.com/langchain-ai/langchain/issues/8866 | 1,839,437,012 | 8,866 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm trying to chat with a PDF using ConversationalRetrievalChain.
```
embeddings = OpenAIEmbeddings()
vectordb = Chroma(embedding_function=embeddings, persist_directory=directory)
qa_chain = ConversationalRetrievalChain.from_llm(ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.3), vectordb.as_retriever(), memory=memory)
answer = (qa_chain({"question": query}))
```
It works perfectly as it gives the answer from the documents. But it can't alter the tone or even convert the answer into another language when prompted.
How can we change the tone like we do in openai ChatCompletion:
```
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": "You are a helpful and friendly chatbot who converts text to a very friendly tone."},
{"role": "user", "content": f"{final_answer}"}
]
)
```
such that it answers from the doc but also converts the answer according to some given prompt. Right now I have to pass the output received from ConversationalRetrievalChain into the above code in order to modify the tone. Is this kind of functionality missing in ConversationalRetrievalChain?
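One direction I'm experimenting with (an assumption about the API, not yet verified): passing a custom QA prompt via `combine_docs_chain_kwargs`, so the tone/translation instruction is applied in the same call:
```python
from langchain.prompts import PromptTemplate

qa_prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "You are a helpful and friendly chatbot. Answer in a very friendly tone, "
        "using only the context below.\n\n{context}\n\nQuestion: {question}\nAnswer:"
    ),
)
qa_chain = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.3),
    vectordb.as_retriever(),
    memory=memory,
    combine_docs_chain_kwargs={"prompt": qa_prompt},
)
```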
### Suggestion:
_No response_ | Issue: How can we combine ConversationalRetrievalChain with openai ChatCompletion | https://api.github.com/repos/langchain-ai/langchain/issues/8864/comments | 2 | 2023-08-07T12:19:11Z | 2023-08-08T19:26:55Z | https://github.com/langchain-ai/langchain/issues/8864 | 1,839,336,132 | 8,864 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Currently GCSFileLoader uses UnstructuredFileLoader in a pre-defined mode, but it would be nice to allow specifying different PDF or other loaders.
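One possible shape for this (purely illustrative; `loader_func` is a proposed argument, not an existing one):
```python
from langchain.document_loaders import GCSFileLoader, PyPDFLoader

loader = GCSFileLoader(
    project_name="my-project",
    bucket="my-bucket",
    blob="docs/report.pdf",
    loader_func=lambda file_path: PyPDFLoader(file_path),  # proposed argument
)
docs = loader.load()
```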
### Motivation
Allow specifying loaders when working with files from GCS.
### Your contribution
yes, I'm happy to. | Allow GCSFileLoader to use alternative loaders for individual files | https://api.github.com/repos/langchain-ai/langchain/issues/8863/comments | 1 | 2023-08-07T11:56:01Z | 2023-08-08T12:14:10Z | https://github.com/langchain-ai/langchain/issues/8863 | 1,839,299,675 | 8,863 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Can we get `ttl` and `sessionId` support in UpstashRedisCache?
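A sketch of the usage I have in mind. The `ttl` argument is the proposal itself, not something that necessarily exists today; the constructor and import details are assumptions based on the existing Redis cache docs, and `sessionId` would be an analogous keyword:
```python
import langchain
from upstash_redis import Redis  # assumption: Upstash's REST client
from langchain.cache import UpstashRedisCache

langchain.llm_cache = UpstashRedisCache(
    redis_=Redis(url="<UPSTASH_REDIS_REST_URL>", token="<UPSTASH_REDIS_REST_TOKEN>"),
    ttl=3600,  # proposed: expire cached generations after one hour
)
```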
### Motivation
Cleaning up the cache (being able to expire old entries).
### Your contribution
suggestion | UpstashRedisCache | https://api.github.com/repos/langchain-ai/langchain/issues/8860/comments | 3 | 2023-08-07T11:20:43Z | 2023-11-13T16:06:01Z | https://github.com/langchain-ai/langchain/issues/8860 | 1,839,241,519 | 8,860 |
[
"hwchase17",
"langchain"
]
| ### System Info
OS: Redhat 8
Python: 3.9
Langchain: 0.0.246
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
So I initially reported this bug against MLflow, but upon further investigation it is related to LangChain.
The following code is a simplified representation of the bigger example; I will also post the full code at the bottom.
===================================================================
Code:
<pre><code>
import os
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline
os.environ["CURL_CA_BUNDLE"] = ""
if True: # run the following code to download the model flan-t5-small from huggingface.co
from transformers import pipeline
model = pipeline(model="google/flan-t5-small") #'text2text-generation'
model.save_pretrained("/tmp/model/flan-t5-small")
llm = HuggingFacePipeline.from_model_id(
model_id="/tmp/model/flan-t5-small",
task="text2text-generation",
model_kwargs={"temperature": 1e-10},
)
template = """Translate everything you see after this into French:
{input}
"""
prompt = PromptTemplate(template=template, input_variables=["input"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain("my name is John")) # works
llm_chain.save("llm_chain.json")
from langchain.chains import load_chain
m = load_chain("llm_chain.json")
print(m("my name is John"))
</code></pre>
Error trace:
<pre><code>
{'input': 'my name is John', 'text': " toutefois, je suis en uvre à l'heure"}
Traceback (most recent call last):
File "a.py", line 37, in <module>
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/base.py", line 258, in __call__
raise e
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/base.py", line 252, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/llm.py", line 92, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/llm.py", line 102, in generate
return self.llm.generate_prompt(
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 451, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 582, in generate
output = self._generate_helper(
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 488, in _generate_helper
raise e
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 475, in _generate_helper
self._generate(
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 961, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/huggingface_pipeline.py", line 168, in _call
response = self.pipeline(prompt)
TypeError: 'NoneType' object is not callable
</code></pre>
===================================================================
Original Code where this bug occurred (MLFlow):
<pre><code>
import mlflow
from datetime import datetime
import logging
logging.getLogger("mlflow").setLevel(logging.DEBUG)
from langchain import PromptTemplate, LLMChain, HuggingFaceHub
from langchain.llms import HuggingFacePipeline
import os
def now_str():
return datetime.now().strftime("%Y%m%d%H%M%S")
os.environ["CURL_CA_BUNDLE"] = ""
if True:
from transformers import pipeline
model = pipeline(model="google/flan-t5-small") #'text2text-generation'
model.save_pretrained("/tmp/model/flan-t5-small")
llm = HuggingFacePipeline.from_model_id(
model_id="/tmp/model/flan-t5-small",
task="text2text-generation",
model_kwargs={"temperature": 1e-10},
)
template = """Translate everything you see after this into French:
{input}
"""
prompt = PromptTemplate(template=template, input_variables=["input"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run("my name is John")) # This is working !!
### Output: {'input': 'my name is John', 'text': "j'ai le nom de John"}
experiment_id = mlflow.create_experiment(f"HF_LLM_{now_str()}")
with mlflow.start_run(experiment_id=experiment_id) as run:
logged_model = mlflow.langchain.log_model(
lc_model=llm_chain,
artifact_path="HF_LLM",
)
m = mlflow.langchain.load_model(logged_model.model_uri)
m.run("my name is John")
</code></pre>
Error Trace:
<pre><code>
Traceback (most recent call last):
File "a.py", line 51, in <module>
m.run("my name is John")
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/base.py", line 451, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/base.py", line 258, in __call__
raise e
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/base.py", line 252, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/llm.py", line 92, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/chains/llm.py", line 102, in generate
return self.llm.generate_prompt(
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 451, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 582, in generate
output = self._generate_helper(
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 488, in _generate_helper
raise e
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 475, in _generate_helper
self._generate(
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/base.py", line 961, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain/llms/huggingface_pipeline.py", line 168, in _call
response = self.pipeline(prompt)
TypeError: 'NoneType' object is not callable
</code></pre>
===================================================================
Another Example:
<pre><code>
import mlflow
from datetime import datetime
import logging
logging.getLogger("mlflow").setLevel(logging.DEBUG)
from langchain import PromptTemplate, LLMChain, HuggingFaceHub
from langchain.llms import HuggingFacePipeline
import os
def now_str():
return datetime.now().strftime("%Y%m%d%H%M%S")
os.environ["CURL_CA_BUNDLE"] = ''
if (True): # run the following code to download the model flan-t5-large from huggingface.co
from transformers import pipeline
model= pipeline(model="google/flan-t5-large") #'text2text-generation'
model.save_pretrained("/tmp/model/flan-t5-large")
llm = HuggingFacePipeline.from_model_id(model_id="/tmp/model/flan-t5-large", task="text2text-generation", model_kwargs={"temperature":1e-10})
template = """Translate everything you see after this into French:
{input}
"""
prompt = PromptTemplate(template=template, input_variables=["input"])
llm_chain = LLMChain(
prompt=prompt,
llm=llm
)
llm_chain("my name is John") # This is working !!
#Output: {'input': 'my name is John', 'text': "j'ai le nom de John"}
experiment_id = mlflow.create_experiment(f'HF_LLM_{now_str()}')
with mlflow.start_run(experiment_id=experiment_id) as run:
logged_model = mlflow.langchain.log_model(
lc_model=llm_chain,
artifact_path="HF_LLM",
)
# =============== This throws error ==============
input_str = "My name is John"
loaded_model = mlflow.pyfunc.load_model(logged_model.model_uri)
output = loaded_model.predict(
[
{
"input": input_str
},
{
"input": "Do you like coffee?"
}
]
)
print(output)
</code></pre>
Error Trace:
<pre><code>
2023/08/07 08:57:08 WARNING mlflow.langchain.api_request_parallel_processor: Request #0 failed with TypeError("'NoneType' object is not callable")
2023/08/07 08:57:08 WARNING mlflow.langchain.api_request_parallel_processor: Request #1 failed with TypeError("'NoneType' object is not callable")
---------------------------------------------------------------------------
MlflowException Traceback (most recent call last)
<ipython-input-19-29b0feddacd1> in <cell line: 3>()
3 with project.setup_mlflow(mf) as mlflow:
4 loaded_model = mlflow.pyfunc.load_model(logged_model.model_uri)
----> 5 output = loaded_model.predict(
6 [
7 {
/hadoopfs/fs1/dataiku/data_dir/code-envs/python/mlflow_25_python_39/lib/python3.9/site-packages/mlflow/pyfunc/__init__.py in predict(self, data)
426 raise
427
--> 428 return self._predict_fn(data)
429
430 @experimental
/hadoopfs/fs1/dataiku/data_dir/code-envs/python/mlflow_25_python_39/lib/python3.9/site-packages/mlflow/langchain/__init__.py in predict(self, data)
654 "Input must be a pandas DataFrame or a list of strings or a list of dictionaries",
655 )
--> 656 return process_api_requests(lc_model=self.lc_model, requests=messages)
657
658
/hadoopfs/fs1/dataiku/data_dir/code-envs/python/mlflow_25_python_39/lib/python3.9/site-packages/mlflow/langchain/api_request_parallel_processor.py in process_api_requests(lc_model, requests, max_workers)
138 # after finishing, log final status
139 if status_tracker.num_tasks_failed > 0:
--> 140 raise mlflow.MlflowException(
141 f"{status_tracker.num_tasks_failed} tasks failed. See logs for details."
142 )
MlflowException: 2 tasks failed. See logs for details.
</code></pre>
Looks like langchain doesn't restore the pipeline.
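A workaround sketch (untested assumption): rebuild the LLM after `load_chain` and re-attach it, since the saved JSON appears to record only the model configuration and not the live pipeline object.
<pre><code>
from langchain.chains import load_chain
from langchain.llms import HuggingFacePipeline

m = load_chain("llm_chain.json")
# Re-create the pipeline exactly as in the original script and attach it,
# because the serialized chain does not restore it on its own.
m.llm = HuggingFacePipeline.from_model_id(
    model_id="/tmp/model/flan-t5-small",
    task="text2text-generation",
    model_kwargs={"temperature": 1e-10},
)
print(m("my name is John"))
</code></pre>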
### Expected behavior
LangChain loads the chain successfully. | TypeError("'NoneType' object is not callable") | https://api.github.com/repos/langchain-ai/langchain/issues/8858/comments | 4 | 2023-08-07T10:15:56Z | 2023-11-16T16:06:31Z | https://github.com/langchain-ai/langchain/issues/8858 | 1,839,139,204 | 8,858 |
[
"hwchase17",
"langchain"
]
| ### System Info
I was trying to create FAISS embeddings that would work on different platforms so I tried to use:
`os.environ['FAISS_NO_AVX2'] = '1' `
as recommended in https://github.com/langchain-ai/langchain/blob/6cdd4b5edca511b0015f1b39102225fe638d8359/langchain/vectorstores/faiss.py
It works on Windows, but I am getting `TypeError: IndexFlatCodes.add() missing 1 required positional argument: 'x'` when I try to create the embeddings in a Docker image.
Full error:
```
TypeError: IndexFlatCodes.add() missing 1 required positional argument: 'x'
Traceback:
File "/usr/local/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "/app/src/pages/1_💬__AI-Chat.py", line 127, in <module>
chatbot = utils.setup_chatbot(
^^^^^^^^^^^^^^^^^^^^
File "/app/./src/modules/utils.py", line 121, in setup_chatbot
vectors = embeds.getDocEmbeds(file, uploaded_file.name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/./src/modules/embedder.py", line 104, in getDocEmbeds
self.storeDocEmbeds(file, original_filename)
File "/app/./src/modules/embedder.py", line 86, in storeDocEmbeds
vectors = FAISS.from_documents(data, embeddings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/vectorstores/base.py", line 336, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/vectorstores/faiss.py", line 550, in from_texts
return cls.__from(
^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/vectorstores/faiss.py", line 505, in __from
index.add(vector)
```
langchain==0.0.226
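For context, a rough paraphrase of the import switch that `FAISS_NO_AVX2` toggles in the linked `faiss.py` (reconstructed from memory, so treat the details as an assumption): when the flag is set, the raw `faiss.swigfaiss` module is imported, and that module lacks the numpy-friendly `add(x)` wrapper that a plain `import faiss` installs, which would explain the missing `'x'` argument.
```python
# Paraphrased sketch of dependable_faiss_import (details may differ by version)
import os
from typing import Any, Optional

def dependable_faiss_import(no_avx2: Optional[bool] = None) -> Any:
    if no_avx2 is None and "FAISS_NO_AVX2" in os.environ:
        no_avx2 = bool(os.getenv("FAISS_NO_AVX2"))
    if no_avx2:
        # Raw SWIG bindings: index.add() keeps the C++ signature add(n, x)
        from faiss import swigfaiss as faiss
    else:
        # faiss/__init__.py wraps Index.add so it accepts a single numpy array
        import faiss
    return faiss
```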
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
os.environ["FAISS_NO_AVX2"] = "1"  # set before the index is built, as described above

from langchain.document_loaders.csv_loader import CSVLoader
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings

loader = CSVLoader(file_path=tmp_file_path, encoding="utf-8", csv_args={"delimiter": ","})
data = loader.load()
embeddings = OpenAIEmbeddings(...)
vectors = FAISS.from_documents(data, embeddings)
```
### Expected behavior
Embeddings should be generated. | Bug with os.environ['FAISS_NO_AVX2'] = '1' | https://api.github.com/repos/langchain-ai/langchain/issues/8857/comments | 4 | 2023-08-07T08:04:40Z | 2024-02-18T16:07:51Z | https://github.com/langchain-ai/langchain/issues/8857 | 1,838,913,498 | 8,857 |