Dataset schema (each row below is one GitHub issue from langchain-ai/langchain):

| column | type | values |
|---|---|---|
| issue_owner_repo | list | length 2 |
| issue_body | string | 0 to 261k chars (nullable ⌀) |
| issue_title | string | 1 to 925 chars |
| issue_comments_url | string | 56 to 81 chars |
| issue_comments_count | int64 | 0 to 2.5k |
| issue_created_at | string | 20 chars |
| issue_updated_at | string | 20 chars |
| issue_html_url | string | 37 to 62 chars |
| issue_github_id | int64 | 387k to 2.46B |
| issue_number | int64 | 1 to 127k |
[
"hwchase17",
"langchain"
]
| ### System Info
I have used the SemanticSimilarityExampleSelector to create a prompt. When I try to pass it to an agent, it fails with:

**ValueError: Saving an example selector is not currently supported**

To create the prompt I followed https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/similarity
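A minimal sketch of that setup, following the linked docs (the example data, embedding model, and vector store here are illustrative, not my exact code):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples, OpenAIEmbeddings(), Chroma, k=1
)
similar_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
```

The error appears to be raised as soon as something tries to serialize a prompt that holds an example selector.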
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use the code above along with the prompt selector from the original documentation.
### Expected behavior
To produce the output for the question asked | Issue with Few Shot prompt when passed to agent | https://api.github.com/repos/langchain-ai/langchain/issues/10336/comments | 37 | 2023-09-07T17:11:20Z | 2024-02-15T16:09:55Z | https://github.com/langchain-ai/langchain/issues/10336 | 1,886,296,572 | 10,336 |
[
"hwchase17",
"langchain"
]
| ### System Info
There are a few differences between the PineconeHybridSearchRetriever and the base Pinecone retriever that make it difficult to switch to the former.
@hw
You can pass search_kwargs to a Pinecone index via `index.as_retriever(search_kwargs={"filter": value})`, because Pinecone inherits from the base VectorStore, whose `as_retriever` method accepts a `search_kwargs` argument.
But it is not clear from the documentation of PineconeHybridSearchRetriever, which inherits from BaseRetriever, how to pass those same arguments.
Also, the `add_texts` method in the Pinecone class returns the IDs, whereas in HybridSearch it does not:
([docs](https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html#langchain.vectorstores.pinecone.Pinecone.add_texts))

```python
def add_texts(
    self,
    texts: Iterable[str],
    metadatas: Optional[List[dict]] = None,
    ids: Optional[List[str]] = None,
    namespace: Optional[str] = None,
    batch_size: int = 32,
    embedding_chunk_size: int = 1000,
    **kwargs: Any,
) -> List[str]:
    # ...
    for i in range(0, len(texts), embedding_chunk_size):
        chunk_texts = texts[i : i + embedding_chunk_size]
        chunk_ids = ids[i : i + embedding_chunk_size]
        chunk_metadatas = metadatas[i : i + embedding_chunk_size]
        embeddings = self._embed_documents(chunk_texts)
        async_res = [
            self._index.upsert(
                vectors=batch,
                namespace=namespace,
                async_req=True,
                **kwargs,
            )
            for batch in batch_iterate(
                batch_size, zip(chunk_ids, embeddings, chunk_metadatas)
            )
        ]
        [res.get() for res in async_res]
    return ids
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a Pinecone index and use `as_retriever(search_kwargs={"filter": {key: value}})`,
vs.
Create a PineconeHybridSearchRetriever, where there is no way to pass filters (sketched below).
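A sketch of the contrast (the index name, credentials, and filter values are illustrative placeholders):

```python
import pinecone
from pinecone_text.sparse import BM25Encoder

from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import PineconeHybridSearchRetriever
from langchain.vectorstores import Pinecone

pinecone.init(api_key="...", environment="...")  # placeholder credentials
index = pinecone.Index("my-index")  # illustrative index name
embeddings = OpenAIEmbeddings()

# Base retriever: metadata filters go through search_kwargs.
vectorstore = Pinecone(index, embeddings.embed_query, "text")
retriever = vectorstore.as_retriever(search_kwargs={"filter": {"city": "New York"}})

# Hybrid retriever: no equivalent argument to pass a filter through.
hybrid = PineconeHybridSearchRetriever(
    embeddings=embeddings, sparse_encoder=BM25Encoder().default(), index=index
)
```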
### Expected behavior
Pass search_kwargs in the class instantiation of PineconeHybridSearchRetriever so it can be used downstream in ConversationalRetrievalChain | PineconeHybridSearchRetriever missing several args and not returning ids of vectors when adding texts | https://api.github.com/repos/langchain-ai/langchain/issues/10333/comments | 5 | 2023-09-07T16:19:08Z | 2024-05-10T13:39:00Z | https://github.com/langchain-ai/langchain/issues/10333 | 1,886,206,628 | 10,333 |
[
"hwchase17",
"langchain"
]
| ### System Info
Got no system info to share.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I just want to say that what you developed is awesome, but I have a small bug to report: sometimes when I am chatting, the response will be in French/Spanish although the training documentation and the question asked are in English.
How to reproduce:
Say hello to the bot; it will respond that the information is not in the given context.
Ask it a question in English about the docs it learned from; the answer will be in another language (French/Spanish/etc.).
Here is the code.
```python
import streamlit as st

from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

# vector_store is built elsewhere in the app
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

if prompt := st.chat_input("State your question"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    llm = ChatOpenAI(temperature=0.5, max_tokens=1000,
                     model_name="gpt-3.5-turbo")
    conversation = ConversationalRetrievalChain.from_llm(
        llm, vector_store.as_retriever())
    final = conversation({"question": prompt, "chat_history": [
        (message["role"], message["content"]) for message in st.session_state.messages]})
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        final_response = final["answer"]
        message_placeholder.markdown(final_response)
    st.session_state.messages.append(
        {"role": "assistant", "content": final_response})
```
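One workaround I would try (a sketch, assuming the default QA prompt is what lets the language drift; the prompt text is illustrative, but `combine_docs_chain_kwargs` is a real `from_llm` parameter):

```python
from langchain.prompts import PromptTemplate

qa_prompt = PromptTemplate.from_template(
    """Use the following context to answer the question.
Always answer in the same language the question was asked in.

{context}

Question: {question}
Answer:"""
)
conversation = ConversationalRetrievalChain.from_llm(
    llm, vector_store.as_retriever(),
    combine_docs_chain_kwargs={"prompt": qa_prompt},
)
```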
### Expected behavior
The answer should be in the same language the question was asked in. | Response is sometimes in a different language | https://api.github.com/repos/langchain-ai/langchain/issues/10329/comments | 1 | 2023-09-07T14:13:38Z | 2023-12-14T16:05:03Z | https://github.com/langchain-ai/langchain/issues/10329 | 1,885,991,150 | 10,329 |
[
"hwchase17",
"langchain"
]
| ### System Info
Running SQLDatabaseChain with LangChain version 0.0.271 and SQLite returns records that do not match the query.
With SQLDatabaseChain's verbose set to True, I am getting this in the console:
```sql
SQLQuery:SELECT id, sale_type, sold_date, property_type, address, city, state_or_province, zip_or_postal_code, price, beds, baths, location, square_feet, lot_size, year_built, day_on_market, usd_per_square_feet, hoa_per_month, url, latitude, longitude FROM properties WHERE city = 'New York' AND price <= 900000 AND property_type = 'House' AND beds >= 2 ORDER BY price DESC LIMIT 5;
```
```
SQLResult:
Answer:[{"id":3,"sale_type":"MLS Listing","sold_date":"None","property_type":"Condo/Co-op","address":"225 Fifth Ave Ph -H","city":"New York","state_or_province":"NY","zip_or_postal_code":10010,"price":3495000,"beds":3,"baths":3.0,"location":"NoMad","square_feet":"1987","lot_size":31106,"year_built":1907,"day_on_market":1,"usd_per_square_feet":"1987","hoa_per_month":"HOAMONTH","url":"https://www.redfin.com/NY/New-York/225-5th-Ave-10010/unit-H/home/174298359","latitude":40.7437447,"longitude":-73.9875513},{"id":2,"sale_type":"MLS Listing","sold_date":"None","property_type":"Condo/Co-op","address":"416 W 52nd St #520","city":"New York","state_or_province":"NY","zip_or_
```
The SQLResult does not match the SQLQuery. The SQLResult contains a property with a price of 3495000, which is higher than the 900000 filter in the SQLQuery; the property_type "Condo/Co-op" also does not match "House" from the query. Note that for some reason the result string is truncated at "zip_or_", around 770 characters.
When I copy/paste the SQL query from the console and execute it directly in the database, it returns 0 results, which is what it should return. For info, the "price" column is an INTEGER.
This is the code :
```python
def ask_question():
data = request.get_json()
question = data['question']
question = QUERY.format(question=question)
text = db_chain.run(question)
```
Did I miss something?
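For reference, `db_chain` was created along these lines (a sketch; the connection string and model are illustrative, and on 0.0.271 SQLDatabaseChain may need to be imported from `langchain_experimental`):

```python
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain

db = SQLDatabase.from_uri("sqlite:///properties.db")  # illustrative path
db_chain = SQLDatabaseChain.from_llm(OpenAI(temperature=0), db, verbose=True)
```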
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run an SQLDatabaseChain with verbose set to True.
Compare the SQLQuery and the SQLResult in the console. The result is not what is expected from the query.
### Expected behavior
The SQL result in the console should match the SQL query. | Running SQLDatabaseChain returns records that do not match the query | https://api.github.com/repos/langchain-ai/langchain/issues/10325/comments | 7 | 2023-09-07T11:33:14Z | 2024-02-29T10:19:39Z | https://github.com/langchain-ai/langchain/issues/10325 | 1,885,710,571 | 10,325 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have a PDF with more than 2000 pages. The file is read successfully, but the vector DB is not created and I get this error. Please suggest a solution.
**Complete error:**

```text
embeddings
results = cur.execute(sql, params).fetchall()
sqlite3.OperationalError: too many SQL variables
```
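A workaround sketch that avoids oversized inserts (assuming a Chroma store, which is where this sqlite error usually comes from; the embedding model and batch size are illustrative):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

embeddings = OpenAIEmbeddings()
db = Chroma(embedding_function=embeddings, persist_directory="db")

batch_size = 100  # small enough to stay under sqlite's bound-variable limit
for i in range(0, len(docs), batch_size):  # docs: the split PDF chunks
    db.add_documents(docs[i : i + batch_size])
```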
### Suggestion:
_No response_ | sqlite3.OperationalError: too many SQL variables | https://api.github.com/repos/langchain-ai/langchain/issues/10321/comments | 6 | 2023-09-07T09:11:01Z | 2023-12-14T16:05:07Z | https://github.com/langchain-ai/langchain/issues/10321 | 1,885,452,004 | 10,321 |
[
"hwchase17",
"langchain"
]
| ### System Info
I have created a LLaMA 7B model and used the Databricks model serving option. When I test the model it works in Databricks, but when I use the same model with LangChain I get this error.

This works against the Databricks model endpoint. JSON input:

```json
{
  "dataframe_split": {
    "columns": ["prompt", "temperature", "max_tokens"],
    "data": [["what is ML?", 0.5, 100]]
  }
}
```

Code:

```python
from langchain.llms import Databricks

llm = Databricks(endpoint_name="databricks-llama-servingmodel", model_kwargs={"temperature": 0.1, "max_tokens": 100})
llm("who are you")
```

Error:

```text
ValidationError: 1 validation error for Generation
text
  str type expected (type=type_error.str)
```
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Take any LLM model and use the notebook below (for model serving):
https://github.com/databricks/databricks-ml-examples/blob/master/llm-models/llamav2/llamav2-7b/02_mlflow_logging_inference.py

Try using model serving in LangChain as shown in the code above and in the official LangChain documents:

```python
from langchain.llms import Databricks

llm = Databricks(endpoint_name="databricks-llama-servingmodel", model_kwargs={"temperature": 0.1, "max_tokens": 100})
llm("who are you")
```
### Expected behavior
llm("who are you")
expected output:- I am language model
| Langchain doesnt work with Databricks Model serving asking generate (str type expected (type=type_error.str) | https://api.github.com/repos/langchain-ai/langchain/issues/10318/comments | 12 | 2023-09-07T08:28:28Z | 2024-06-25T03:05:42Z | https://github.com/langchain-ai/langchain/issues/10318 | 1,885,373,063 | 10,318 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am writing a ConversationalRetrievalChain with a question_generator chain. When writing a prompt for condensing the chat_history, I found it doesn't work well with the whole {chat_history}. I suspect that if I give it only the questions asked by the human, it will perform better, but I don't know how to do that.
The original prompt is:

```text
Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question. If the follow up question is already a standalone question, just return the follow up question.

Chat History:
{chat_history}
Follow Up Question: {question}
Standalone question:
```
What I want may look like:

```text
.........
Questions asked:
{chat_history.human}
Follow Up Question: {question}
Standalone question:
```
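One direction that seems possible (a sketch; `condense_question_prompt` and `get_chat_history` are real parameters of the chain, while the prompt text and the `llm`/`retriever` objects are illustrative):

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts import PromptTemplate

CONDENSE_PROMPT = PromptTemplate.from_template(
    """Given the questions asked so far and a follow up question, rephrase the
follow up question to be a standalone question. If it is already standalone,
return it unchanged.

Questions asked:
{chat_history}
Follow Up Question: {question}
Standalone question:"""
)

chain = ConversationalRetrievalChain.from_llm(
    llm,
    retriever,
    condense_question_prompt=CONDENSE_PROMPT,
    # Render only the human turns into {chat_history}.
    get_chat_history=lambda messages: "\n".join(
        m.content for m in messages if m.type == "human"
    ),
)
```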
Thanks for your attention.
### Suggestion:
_No response_ | Issue: How to customize the question_generator_chain in ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/10317/comments | 2 | 2023-09-07T08:25:40Z | 2023-12-08T04:55:32Z | https://github.com/langchain-ai/langchain/issues/10317 | 1,885,368,696 | 10,317 |
[
"hwchase17",
"langchain"
]
| ### System Info
hi,
I am unable to stream the final answer from the LLM chain to the Chainlit UI.
langchain==0.0.218
Python 3.9.16
Here are the details:
https://github.com/Chainlit/chainlit/issues/313
Is this implemented? https://github.com/langchain-ai/langchain/pull/1222/
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
The code given in the linked issue reproduces the behaviour.
### Expected behavior
Expected behaviour: the final answer from the LLM chain is streamed properly to the Chainlit UI. | Final answer streaming problem | https://api.github.com/repos/langchain-ai/langchain/issues/10316/comments | 21 | 2023-09-07T07:14:40Z | 2024-08-09T16:07:47Z | https://github.com/langchain-ai/langchain/issues/10316 | 1,885,260,192 | 10,316 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I want to create a callback handler that monitors all tokens across all intermediate steps and starts collecting the output (to return later) once the response contains "AI:". How can I achieve that using the following custom callback handler?
```python
from typing import Any, Dict, List, Optional

from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

DEFAULT_ANSWER_PREFIX_TOKENS = ["AI", ":"]


class CustomCallBackHandler(StreamingStdOutCallbackHandler):
    """Callback handler for streaming in agents.

    Only works with agents using LLMs that support streaming.
    Only the output after the answer prefix is collected.
    """

    def __init__(
        self,
        *,
        answer_prefix_tokens: Optional[List[str]] = None,
        strip_tokens: bool = True,
        stream_prefix: bool = False,
    ) -> None:
        """Instantiate CustomCallBackHandler.

        Args:
            answer_prefix_tokens: Token sequence that prefixes the answer.
                Default is ["AI", ":"].
            strip_tokens: Ignore white spaces and new lines when comparing
                answer_prefix_tokens to the last tokens (to determine if the
                answer has been reached)?
            stream_prefix: Should the answer prefix itself also be collected?
        """
        super().__init__()
        self.collected_tokens: List[str] = []
        if answer_prefix_tokens is None:
            self.answer_prefix_tokens = DEFAULT_ANSWER_PREFIX_TOKENS
        else:
            self.answer_prefix_tokens = answer_prefix_tokens
        if strip_tokens:
            self.answer_prefix_tokens_stripped = [
                token.strip() for token in self.answer_prefix_tokens
            ]
        else:
            self.answer_prefix_tokens_stripped = self.answer_prefix_tokens
        self.last_tokens = [""] * len(self.answer_prefix_tokens)
        self.last_tokens_stripped = [""] * len(self.answer_prefix_tokens)
        self.strip_tokens = strip_tokens
        self.stream_prefix = stream_prefix
        self.answer_reached = False

    def append_to_last_tokens(self, token: str) -> None:
        # Remember the last n tokens, where n = len(answer_prefix_tokens).
        self.last_tokens.append(token)
        self.last_tokens_stripped.append(token.strip())
        if len(self.last_tokens) > len(self.answer_prefix_tokens):
            self.last_tokens.pop(0)
            self.last_tokens_stripped.pop(0)

    def check_if_answer_reached(self) -> bool:
        if self.strip_tokens:
            return self.last_tokens_stripped == self.answer_prefix_tokens_stripped
        return self.last_tokens == self.answer_prefix_tokens

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        self.answer_reached = False

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        """Run on new LLM token. Only available when streaming is enabled."""
        self.append_to_last_tokens(token)
        # Check if the last n tokens match the answer_prefix_tokens list ...
        if self.check_if_answer_reached():
            self.answer_reached = True
            if self.stream_prefix:
                for t in self.last_tokens:
                    self.collected_tokens.append(t)
            return
        # ... if yes, then collect tokens from now on.
        if self.answer_reached:
            self.collected_tokens.append(token)

    def get_collected_tokens(self) -> str:
        """Return the collected tokens as a single string."""
        return "".join(self.collected_tokens)
```
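A usage sketch (`agent` stands for whatever chain/agent is already set up; streaming must be enabled on the LLM for `on_llm_new_token` to fire):

```python
from langchain.chat_models import ChatOpenAI

handler = CustomCallBackHandler()
llm = ChatOpenAI(streaming=True, callbacks=[handler], temperature=0)

# ... build the agent/chain with this llm, then run it:
# agent.run("your question")

print(handler.get_collected_tokens())
```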
### Suggestion:
_No response_ | Issue: CustomCallBackHandlers for catch all intermediate steps | https://api.github.com/repos/langchain-ai/langchain/issues/10315/comments | 2 | 2023-09-07T06:57:57Z | 2023-12-14T16:05:12Z | https://github.com/langchain-ai/langchain/issues/10315 | 1,885,236,282 | 10,315 |
[
"hwchase17",
"langchain"
]
| ### System Info
Apple Macbook M1 Pro
python: 3.11.2
langchain: 0.0.283
pydantic: 2.3.0
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. pip3.11 install langchain
2. run python3.11 in terminal
3. execute `from langchain.llms import OpenAI`, which produces the following error for me
```
Python 3.11.2 (main, Feb 16 2023, 02:55:59) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from langchain.llms import OpenAI
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/homebrew/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/opt/homebrew/lib/python3.11/site-packages/langchain/agents/__init__.py", line 31, in <module>
from langchain.agents.agent import (
File "/opt/homebrew/lib/python3.11/site-packages/langchain/agents/agent.py", line 14, in <module>
from langchain.agents.agent_iterator import AgentExecutorIterator
File "/opt/homebrew/lib/python3.11/site-packages/langchain/agents/agent_iterator.py", line 30, in <module>
from langchain.tools import BaseTool
File "/opt/homebrew/lib/python3.11/site-packages/langchain/tools/__init__.py", line 25, in <module>
from langchain.tools.arxiv.tool import ArxivQueryRun
File "/opt/homebrew/lib/python3.11/site-packages/langchain/tools/arxiv/tool.py", line 8, in <module>
from langchain.utilities.arxiv import ArxivAPIWrapper
File "/opt/homebrew/lib/python3.11/site-packages/langchain/utilities/__init__.py", line 7, in <module>
from langchain.utilities.apify import ApifyWrapper
File "/opt/homebrew/lib/python3.11/site-packages/langchain/utilities/apify.py", line 3, in <module>
from langchain.document_loaders import ApifyDatasetLoader
File "/opt/homebrew/lib/python3.11/site-packages/langchain/document_loaders/__init__.py", line 76, in <module>
from langchain.document_loaders.embaas import EmbaasBlobLoader, EmbaasLoader
File "/opt/homebrew/lib/python3.11/site-packages/langchain/document_loaders/embaas.py", line 54, in <module>
class BaseEmbaasLoader(BaseModel):
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/main.py", line 204, in __new__
fields[ann_name] = ModelField.infer(
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/fields.py", line 488, in infer
return cls(
^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/fields.py", line 419, in __init__
self.prepare()
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/fields.py", line 539, in prepare
self.populate_validators()
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/fields.py", line 801, in populate_validators
*(get_validators() if get_validators else list(find_validators(self.type_, self.model_config))),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/validators.py", line 696, in find_validators
yield make_typeddict_validator(type_, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/validators.py", line 585, in make_typeddict_validator
TypedDictModel = create_model_from_typeddict(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/annotated_types.py", line 35, in create_model_from_typeddict
return create_model(typeddict_cls.__name__, **kwargs, **field_definitions)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/main.py", line 972, in create_model
return type(__model_name, __base__, namespace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/main.py", line 204, in __new__
fields[ann_name] = ModelField.infer(
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/fields.py", line 488, in infer
return cls(
^^^^
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/fields.py", line 419, in __init__
self.prepare()
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/fields.py", line 534, in prepare
self._type_analysis()
File "/opt/homebrew/lib/python3.11/site-packages/pydantic/fields.py", line 638, in _type_analysis
elif issubclass(origin, Tuple): # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/[email protected]/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/typing.py", line 1551, in __subclasscheck__
return issubclass(cls, self.__origin__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: issubclass() arg 1 must be a class
```
### Expected behavior
To be able to use langchain | Langchain Quickstart is not working for me | https://api.github.com/repos/langchain-ai/langchain/issues/10314/comments | 2 | 2023-09-07T06:38:07Z | 2023-09-09T21:29:36Z | https://github.com/langchain-ai/langchain/issues/10314 | 1,885,210,992 | 10,314 |
[
"hwchase17",
"langchain"
]
| ### System Info
jupyter notebook, RTX 3090
### Who can help?
@agola11 @hwchase17 @ey
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.embeddings import SentenceTransformerEmbeddings
embedding=lambda x: x['combined_info'].apply(lambda text: embeddings.embed_documents(text))
```
This does not work.
Are there any workarounds?
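A working sketch (the model name is illustrative, and `embed_query` is used because each row holds a single string rather than a list of documents):

```python
import pandas as pd
from langchain.embeddings import SentenceTransformerEmbeddings

embeddings = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")

df = pd.DataFrame({"combined_info": ["first text", "second text"]})
df["embedding"] = df["combined_info"].apply(embeddings.embed_query)
```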
### Expected behavior
outputs embeddings | apply embeddings to pandas dataframe | https://api.github.com/repos/langchain-ai/langchain/issues/10313/comments | 7 | 2023-09-07T06:02:33Z | 2023-12-14T16:05:18Z | https://github.com/langchain-ai/langchain/issues/10313 | 1,885,171,818 | 10,313 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
When I am using a tool with `return_direct=True`, the observation step provides the final answer. Is there a way to stream that observation from the intermediate steps using callbacks? I found no information about this, and it would be much appreciated if the documentation covered it.
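For context, the direction I was hoping to find documented (a sketch; whether this is the intended pattern is exactly my question):

```python
from typing import Any

from langchain.callbacks.base import BaseCallbackHandler


class ObservationHandler(BaseCallbackHandler):
    def on_tool_end(self, output: str, **kwargs: Any) -> None:
        # With return_direct=True, the tool's output becomes the final
        # answer, so it could be streamed/forwarded from here.
        print(output)
```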
### Idea or request for content:
_No response_ | DOC: <Observation Streaming Using Call Back> | https://api.github.com/repos/langchain-ai/langchain/issues/10312/comments | 4 | 2023-09-07T04:57:35Z | 2023-12-30T18:21:43Z | https://github.com/langchain-ai/langchain/issues/10312 | 1,885,116,454 | 10,312 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
When working with `CONVERSATIONAL_REACT_DESCRIPTION`, I found that the Observation gets capped (truncated) when passed to the Action Input. For example, I retrieved a JSON array using a structured tool, but when the agent passes it to the final tool the JSON array is truncated. Can I handle this?
### Suggestion:
_No response_ | Issue: Observation is capping out when passed to Action Input [CONVERSATIONAL_REACT_DESCRIPTION] | https://api.github.com/repos/langchain-ai/langchain/issues/10311/comments | 2 | 2023-09-07T04:34:10Z | 2023-12-14T16:05:27Z | https://github.com/langchain-ai/langchain/issues/10311 | 1,885,097,882 | 10,311 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I wasn't able to find any information about customizing the Action Input and the Final Answer. It would be much appreciated if the documentation could provide some info on that as well.
### Idea or request for content:
_No response_ | DOC: Action Input and Final Answer | https://api.github.com/repos/langchain-ai/langchain/issues/10310/comments | 2 | 2023-09-07T04:03:13Z | 2023-09-15T03:15:07Z | https://github.com/langchain-ai/langchain/issues/10310 | 1,885,075,993 | 10,310 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
My test code:

```python
import unittest

from langchain.chat_models import ErnieBotChat
from langchain.schema import HumanMessage


class TestErnieBotCase(unittest.TestCase):
    def test_ernie_bot(self):
        chat_llm = ErnieBotChat(
            ernie_client_id="xxx",
            ernie_client_secret="xxx",
        )
        result = chat_llm.generate(messages=[HumanMessage(content="请列出清朝的所有皇帝的姓名和年号")])
        print(result)


if __name__ == '__main__':
    unittest.main()
```
It returns this error:
```
Error
Traceback (most recent call last):
File "/Users/keigo/Workspace/study/langchain/tests/test_erniebot.py", line 14, in test_ernie_bot
result = chat_llm.generate(messages=[HumanMessage(content="请列出清朝的所有皇帝的姓名和年号")])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/keigo/Workspace/study/langchain/venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 309, in generate
raise e
File "/Users/keigo/Workspace/study/langchain/venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 299, in generate
self._generate_with_cache(
File "/Users/keigo/Workspace/study/langchain/venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 446, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "/Users/keigo/Workspace/study/langchain/venv/lib/python3.11/site-packages/langchain/chat_models/ernie.py", line 157, in _generate
"messages": [_convert_message_to_dict(m) for m in messages],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/keigo/Workspace/study/langchain/venv/lib/python3.11/site-packages/langchain/chat_models/ernie.py", line 157, in <listcomp>
"messages": [_convert_message_to_dict(m) for m in messages],
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/keigo/Workspace/study/langchain/venv/lib/python3.11/site-packages/langchain/chat_models/ernie.py", line 31, in _convert_message_to_dict
raise ValueError(f"Got unknown type {message}")
ValueError: Got unknown type ('content', '请列出清朝的所有皇帝的姓名和年号')
```
In the ernie.py function `_convert_message_to_dict`, the variable `message` is of type tuple.
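For what it's worth, wrapping the messages one level deeper should avoid the error, since `generate` expects a list of message lists (`List[List[BaseMessage]]`); iterating a single `HumanMessage` yields `(field, value)` tuples, which is exactly what the error shows:

```python
result = chat_llm.generate(messages=[[HumanMessage(content="请列出清朝的所有皇帝的姓名和年号")]])
```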
### Suggestion:
_No response_ | Errors about ErnieBotChat | https://api.github.com/repos/langchain-ai/langchain/issues/10309/comments | 2 | 2023-09-07T03:46:18Z | 2023-09-07T04:09:47Z | https://github.com/langchain-ai/langchain/issues/10309 | 1,885,064,287 | 10,309 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
LangChain pandas agents (create_pandas_dataframe_agent) are hard to get working with LLaMA models (the same scripts work well with GPT-3.5).
I am trying to use the local model Vicuna 13B v1.5 (LLaMA 2 based) to create a local question-and-answer system.
It works well for doc QA,
but it doesn't work well with pandas data (via create_pandas_dataframe_agent). Any suggestions for me?
My calling code:

```python
df = pd.read_excel(CSV_NAME)
pd_agent = create_pandas_dataframe_agent(
    ChatOpenAI(temperature=0, model_name=set_model),
    df,
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    agent_executor_kwargs={"handle_parsing_errors": True},
)
pd_agent.run(query)
```
Example: when I ask "How many events occurred after May 20, 2023?",
the final answer is:
````From the data provided, we can find the GUID of the event, and then filter out the data after May 20, 2023 based on the date. The following is the Python code implementation:
```python
import pandas as pd
#Read data
data = pd.read_csv('your_data_file.csv')
# Filter the data in the date range
filtered_data = data[data['date'] >= '2023-05-20']
# Extract the GUID of the event
guids = filtered_data[filtered_data['element'] == 'events']['id'].unique()
# Output the GUID of the event
print(guids)
```
Please replace `your_data_file.csv` with your data file name. This code will output the GUID of the event after May 20, 2023.
````
It seems the chain did not proceed to the last step. Any suggestions for me?
1. Other than create_pandas_dataframe_agent, is there another chain or agent I can try?
2. If I need to override some methods, which method should I edit?
The only similar example I found was written by kvnsng: https://github.com/langchain-ai/langchain/issues/7709#issuecomment-1653833036
_No response_ | Issue: create_pandas_dataframe_agent is hard to work with llama models | https://api.github.com/repos/langchain-ai/langchain/issues/10308/comments | 16 | 2023-09-07T03:36:15Z | 2024-06-28T16:05:13Z | https://github.com/langchain-ai/langchain/issues/10308 | 1,885,057,548 | 10,308 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.281
boto3==1.28.41
```python
s3 = boto3.resource(
"s3",
region_name=self.region_name,
api_version=self.api_version,
use_ssl=self.use_ssl,
verify=self.verify,
endpoint_url=self.endpoint_url,
aws_access_key_id=self.aws_access_key_id,
aws_secret_access_key=self.aws_secret_access_key,
aws_session_token=self.aws_session_token,
config=self.boto_config,
)
```
line 117 should be `config=self.boto_config` not `boto_config=self.boto_config`.
```python
for obj in bucket.objects.filter(Prefix=self.prefix):
if obj.get()["ContentLength"] == 0:
continue
loader = S3FileLoader(
self.bucket,
obj.key,
region_name=self.region_name,
api_version=self.api_version,
use_ssl=self.use_ssl,
verify=self.verify,
endpoint_url=self.endpoint_url,
aws_access_key_id=self.aws_access_key_id,
aws_secret_access_key=self.aws_secret_access_key,
aws_session_token=self.aws_session_token,
config=self.boto_config,
)
docs.extend(loader.load())
```
Before line 122, a condition should be added for when the object is a directory, and line 135 has the `boto_config` issue again.
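A sketch of the directory guard I mean (assumption: S3 "folders" appear as zero-byte keys ending in "/"):

```python
for obj in bucket.objects.filter(Prefix=self.prefix):
    # Skip "directory" placeholder objects before trying to load them.
    if obj.key.endswith("/"):
        continue
    if obj.get()["ContentLength"] == 0:
        continue
    # ... construct S3FileLoader for obj.key as before ...
```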
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
unnecessary
### Expected behavior
unnecessary | `langchain/document_loaders/s3_directory.py` S3DirectoryLoader has 3 bugs | https://api.github.com/repos/langchain-ai/langchain/issues/10294/comments | 2 | 2023-09-06T16:20:11Z | 2023-12-14T16:05:33Z | https://github.com/langchain-ai/langchain/issues/10294 | 1,884,353,219 | 10,294 |
[
"hwchase17",
"langchain"
]
| ### Discussed in https://github.com/langchain-ai/langchain/discussions/10068
Originally posted by **aMahanna**, August 31, 2023:
Hi!
Python version used: `3.10`
Virtual Environment Tool used: `venv`
I am trying to understand why `pip install langchain[all]` is installing LangChain `0.0.74`, as opposed to the latest version of LangChain.
Based on the installation output, I can see the installation of external modules, and a series of `Using cached langchain-X-py3-none-any.whl` logs, where `X` descends from `0.0.278` all the way to `0.0.74`
Perhaps I missed the documentation that speaks to this behaviour? I have been relying on the [installation.mdx](https://github.com/langchain-ai/langchain/blob/master/docs/snippets/get_started/installation.mdx) guide for this.
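For comparison (an assumption about the resolver's behavior, not a confirmed diagnosis: pip appears to backtrack through releases until it finds one whose `[all]` extras resolve), pinning the version, e.g. `pip install "langchain[all]==0.0.278"`, makes pip fail fast and surface the actual conflict instead of silently falling back.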
<details>
<summary>Output: `pip install "langchain[all]"`</summary>
```
❯ pip install "langchain[all]"
Collecting langchain[all]
Using cached langchain-0.0.278-py3-none-any.whl (1.6 MB)
Collecting async-timeout<5.0.0,>=4.0.0
Using cached async_timeout-4.0.3-py3-none-any.whl (5.7 kB)
Collecting langsmith<0.1.0,>=0.0.21
Using cached langsmith-0.0.30-py3-none-any.whl (35 kB)
Collecting numpy<2,>=1
Using cached numpy-1.25.2-cp310-cp310-macosx_11_0_arm64.whl (14.0 MB)
Collecting tenacity<9.0.0,>=8.1.0
Using cached tenacity-8.2.3-py3-none-any.whl (24 kB)
Collecting numexpr<3.0.0,>=2.8.4
Using cached numexpr-2.8.5-cp310-cp310-macosx_11_0_arm64.whl (90 kB)
Collecting SQLAlchemy<3,>=1.4
Using cached SQLAlchemy-2.0.20-cp310-cp310-macosx_11_0_arm64.whl (2.0 MB)
Collecting dataclasses-json<0.6.0,>=0.5.7
Using cached dataclasses_json-0.5.14-py3-none-any.whl (26 kB)
Collecting aiohttp<4.0.0,>=3.8.3
Using cached aiohttp-3.8.5-cp310-cp310-macosx_11_0_arm64.whl (343 kB)
Collecting pydantic<3,>=1
Using cached pydantic-2.3.0-py3-none-any.whl (374 kB)
Collecting requests<3,>=2
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Collecting PyYAML>=5.3
Using cached PyYAML-6.0.1-cp310-cp310-macosx_11_0_arm64.whl (169 kB)
Collecting lark<2.0.0,>=1.1.5
Using cached lark-1.1.7-py3-none-any.whl (108 kB)
Collecting arxiv<2.0,>=1.4
Using cached arxiv-1.4.8-py3-none-any.whl (12 kB)
Collecting html2text<2021.0.0,>=2020.1.16
Using cached html2text-2020.1.16-py3-none-any.whl (32 kB)
Collecting jq<2.0.0,>=1.4.1
Using cached jq-1.5.0-cp310-cp310-macosx_11_0_arm64.whl (370 kB)
Collecting azure-identity<2.0.0,>=1.12.0
Using cached azure_identity-1.14.0-py3-none-any.whl (160 kB)
Collecting deeplake<4.0.0,>=3.6.8
Using cached deeplake-3.6.22.tar.gz (538 kB)
Preparing metadata (setup.py) ... done
Collecting torch<3,>=1
Using cached torch-2.0.1-cp310-none-macosx_11_0_arm64.whl (55.8 MB)
Collecting huggingface_hub<1,>=0
Using cached huggingface_hub-0.16.4-py3-none-any.whl (268 kB)
Collecting tiktoken<0.4.0,>=0.3.2
Using cached tiktoken-0.3.3-cp310-cp310-macosx_11_0_arm64.whl (706 kB)
Collecting transformers<5,>=4
Using cached transformers-4.32.1-py3-none-any.whl (7.5 MB)
Collecting openai<1,>=0
Using cached openai-0.27.10-py3-none-any.whl (76 kB)
Collecting pinecone-client<3,>=2
Using cached pinecone_client-2.2.2-py3-none-any.whl (179 kB)
Collecting aleph-alpha-client<3.0.0,>=2.15.0
Using cached aleph_alpha_client-2.17.0-py3-none-any.whl (41 kB)
Collecting azure-ai-formrecognizer<4.0.0,>=3.2.1
Using cached azure_ai_formrecognizer-3.3.0-py3-none-any.whl (297 kB)
Collecting wikipedia<2,>=1
Using cached wikipedia-1.4.0.tar.gz (27 kB)
Preparing metadata (setup.py) ... done
Collecting manifest-ml<0.0.2,>=0.0.1
Using cached manifest_ml-0.0.1-py2.py3-none-any.whl (42 kB)
Collecting momento<2.0.0,>=1.5.0
Using cached momento-1.9.1-py3-none-any.whl (134 kB)
Collecting pexpect<5.0.0,>=4.8.0
Using cached pexpect-4.8.0-py2.py3-none-any.whl (59 kB)
Collecting azure-cognitiveservices-speech<2.0.0,>=1.28.0
Using cached azure_cognitiveservices_speech-1.31.0-py3-none-macosx_11_0_arm64.whl (6.4 MB)
Collecting gptcache>=0.1.7
Using cached gptcache-0.1.40-py3-none-any.whl (124 kB)
Collecting azure-cosmos<5.0.0,>=4.4.0b1
Using cached azure_cosmos-4.5.0-py3-none-any.whl (226 kB)
Collecting requests-toolbelt<2.0.0,>=1.0.0
Using cached requests_toolbelt-1.0.0-py2.py3-none-any.whl (54 kB)
Collecting pyowm<4.0.0,>=3.3.0
Using cached pyowm-3.3.0-py3-none-any.whl (4.5 MB)
Collecting amadeus>=8.1.0
Using cached amadeus-8.1.0.tar.gz (39 kB)
Preparing metadata (setup.py) ... done
Collecting networkx<3.0.0,>=2.6.3
Using cached networkx-2.8.8-py3-none-any.whl (2.0 MB)
Collecting qdrant-client<2.0.0,>=1.3.1
Using cached qdrant_client-1.4.0-py3-none-any.whl (132 kB)
Collecting langkit<0.1.0,>=0.0.6
Using cached langkit-0.0.17-py3-none-any.whl (754 kB)
Collecting jinja2<4,>=3
Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting openlm<0.0.6,>=0.0.5
Using cached openlm-0.0.5-py3-none-any.whl (10 kB)
Collecting faiss-cpu<2,>=1
Using cached faiss_cpu-1.7.4-cp310-cp310-macosx_11_0_arm64.whl (2.7 MB)
Collecting psycopg2-binary<3.0.0,>=2.9.5
Using cached psycopg2_binary-2.9.7-cp310-cp310-macosx_11_0_arm64.whl (2.5 MB)
Collecting pinecone-text<0.5.0,>=0.4.2
Using cached pinecone_text-0.4.2-py3-none-any.whl (17 kB)
Collecting opensearch-py<3.0.0,>=2.0.0
Using cached opensearch_py-2.3.1-py2.py3-none-any.whl (327 kB)
Collecting weaviate-client<4,>=3
Using cached weaviate_client-3.23.2-py3-none-any.whl (108 kB)
Collecting sentence-transformers<3,>=2
Using cached sentence_transformers-2.2.2-py3-none-any.whl
Collecting google-search-results<3,>=2
Using cached google_search_results-2.4.2.tar.gz (18 kB)
Preparing metadata (setup.py) ... done
Collecting pytesseract<0.4.0,>=0.3.10
Using cached pytesseract-0.3.10-py3-none-any.whl (14 kB)
Collecting singlestoredb<0.8.0,>=0.7.1
Using cached singlestoredb-0.7.1-cp36-abi3-macosx_10_9_universal2.whl (196 kB)
Collecting marqo<2.0.0,>=1.2.4
Using cached marqo-1.2.4-py3-none-any.whl (32 kB)
Collecting nomic<2.0.0,>=1.0.43
Using cached nomic-1.1.14.tar.gz (31 kB)
Preparing metadata (setup.py) ... done
Collecting pypdf<4.0.0,>=3.4.0
Using cached pypdf-3.15.4-py3-none-any.whl (272 kB)
Collecting langchain[all]
Using cached langchain-0.0.277-py3-none-any.whl (1.6 MB)
Using cached langchain-0.0.276-py3-none-any.whl (1.6 MB)
Using cached langchain-0.0.275-py3-none-any.whl (1.6 MB)
Using cached langchain-0.0.274-py3-none-any.whl (1.6 MB)
Using cached langchain-0.0.273-py3-none-any.whl (1.6 MB)
Using cached langchain-0.0.272-py3-none-any.whl (1.6 MB)
Collecting google-api-core<3.0.0,>=2.11.1
Using cached google_api_core-2.11.1-py3-none-any.whl (120 kB)
Collecting langchain[all]
Using cached langchain-0.0.271-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.270-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.269-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.268-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.267-py3-none-any.whl (1.5 MB)
Collecting openapi-schema-pydantic<2.0,>=1.2
Using cached openapi_schema_pydantic-1.2.4-py3-none-any.whl (90 kB)
Collecting langchain[all]
Using cached langchain-0.0.266-py3-none-any.whl (1.5 MB)
Collecting pydantic<2,>=1
Using cached pydantic-1.10.12-cp310-cp310-macosx_11_0_arm64.whl (2.5 MB)
Collecting langchain[all]
Using cached langchain-0.0.265-py3-none-any.whl (1.5 MB)
Collecting anthropic<0.4,>=0.3
Using cached anthropic-0.3.11-py3-none-any.whl (796 kB)
Collecting xinference<0.0.7,>=0.0.6
Using cached xinference-0.0.6-py3-none-any.whl (65 kB)
Collecting spacy<4,>=3
Using cached spacy-3.6.1-cp310-cp310-macosx_11_0_arm64.whl (6.6 MB)
Collecting langchain[all]
Using cached langchain-0.0.264-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.263-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.262-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.261-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.260-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.259-py3-none-any.whl (1.5 MB)
Using cached langchain-0.0.258-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.257-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.256-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.255-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.254-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.253-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.252-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.251-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.250-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.249-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.248-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.247-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.246-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.245-py3-none-any.whl (1.4 MB)
Collecting awadb<0.4.0,>=0.3.3
Using cached awadb-0.3.10-cp310-cp310-macosx_13_0_arm64.whl (1.6 MB)
Collecting langchain[all]
Using cached langchain-0.0.244-py3-none-any.whl (1.4 MB)
Collecting cohere<4,>=3
Using cached cohere-3.10.0.tar.gz (15 kB)
Preparing metadata (setup.py) ... done
Collecting langchain[all]
Using cached langchain-0.0.243-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.242-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.240-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.239-py3-none-any.whl (1.4 MB)
Using cached langchain-0.0.238-py3-none-any.whl (1.3 MB)
Using cached langchain-0.0.237-py3-none-any.whl (1.3 MB)
Collecting langsmith<0.0.11,>=0.0.10
Using cached langsmith-0.0.10-py3-none-any.whl (27 kB)
Collecting langchain[all]
Using cached langchain-0.0.236-py3-none-any.whl (1.3 MB)
Using cached langchain-0.0.235-py3-none-any.whl (1.3 MB)
Collecting langsmith<0.0.8,>=0.0.7
Using cached langsmith-0.0.7-py3-none-any.whl (26 kB)
Collecting langchain[all]
Using cached langchain-0.0.234-py3-none-any.whl (1.3 MB)
Collecting langsmith<0.0.6,>=0.0.5
Using cached langsmith-0.0.5-py3-none-any.whl (25 kB)
Collecting langchain[all]
Using cached langchain-0.0.233-py3-none-any.whl (1.3 MB)
Collecting beautifulsoup4<5,>=4
Using cached beautifulsoup4-4.12.2-py3-none-any.whl (142 kB)
Collecting langchain[all]
Using cached langchain-0.0.232-py3-none-any.whl (1.3 MB)
Using cached langchain-0.0.231-py3-none-any.whl (1.3 MB)
Collecting langchainplus-sdk<0.0.21,>=0.0.20
Using cached langchainplus_sdk-0.0.20-py3-none-any.whl (25 kB)
Collecting langchain[all]
Using cached langchain-0.0.230-py3-none-any.whl (1.3 MB)
Using cached langchain-0.0.229-py3-none-any.whl (1.3 MB)
Using cached langchain-0.0.228-py3-none-any.whl (1.3 MB)
Using cached langchain-0.0.227-py3-none-any.whl (1.2 MB)
Using cached langchain-0.0.226-py3-none-any.whl (1.2 MB)
Using cached langchain-0.0.225-py3-none-any.whl (1.2 MB)
Collecting marqo<0.10.0,>=0.9.1
Using cached marqo-0.9.6-py3-none-any.whl (26 kB)
Collecting clarifai==9.1.0
Using cached clarifai-9.1.0-py3-none-any.whl (57 kB)
Collecting langchain[all]
Using cached langchain-0.0.224-py3-none-any.whl (1.2 MB)
Collecting anthropic<0.3.0,>=0.2.6
Using cached anthropic-0.2.10-py3-none-any.whl (6.3 kB)
Collecting langchain[all]
Using cached langchain-0.0.223-py3-none-any.whl (1.2 MB)
Using cached langchain-0.0.222-py3-none-any.whl (1.2 MB)
Using cached langchain-0.0.221-py3-none-any.whl (1.2 MB)
Using cached langchain-0.0.220-py3-none-any.whl (1.2 MB)
Using cached langchain-0.0.219-py3-none-any.whl (1.2 MB)
Using cached langchain-0.0.218-py3-none-any.whl (1.2 MB)
Using cached langchain-0.0.217-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.216-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.215-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.214-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.213-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.212-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.211-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.210-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.209-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.208-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.207-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.206-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.205-py3-none-any.whl (1.1 MB)
Collecting singlestoredb<0.7.0,>=0.6.1
Using cached singlestoredb-0.6.1-cp36-abi3-macosx_10_9_universal2.whl (193 kB)
Collecting langchain[all]
Using cached langchain-0.0.204-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.203-py3-none-any.whl (1.1 MB)
Using cached langchain-0.0.202-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.201-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.200-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.199-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.198-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.197-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.196-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.195-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.194-py3-none-any.whl (1.0 MB)
Using cached langchain-0.0.193-py3-none-any.whl (989 kB)
Collecting langchainplus-sdk<0.0.5,>=0.0.4
Using cached langchainplus_sdk-0.0.4-py3-none-any.whl (21 kB)
Collecting langchain[all]
Using cached langchain-0.0.192-py3-none-any.whl (989 kB)
Using cached langchain-0.0.191-py3-none-any.whl (993 kB)
Using cached langchain-0.0.190-py3-none-any.whl (983 kB)
Using cached langchain-0.0.189-py3-none-any.whl (975 kB)
Using cached langchain-0.0.188-py3-none-any.whl (969 kB)
Using cached langchain-0.0.187-py3-none-any.whl (960 kB)
Using cached langchain-0.0.186-py3-none-any.whl (949 kB)
Using cached langchain-0.0.185-py3-none-any.whl (949 kB)
Using cached langchain-0.0.184-py3-none-any.whl (939 kB)
Using cached langchain-0.0.183-py3-none-any.whl (938 kB)
Using cached langchain-0.0.182-py3-none-any.whl (938 kB)
Using cached langchain-0.0.181-py3-none-any.whl (934 kB)
Using cached langchain-0.0.180-py3-none-any.whl (922 kB)
Using cached langchain-0.0.179-py3-none-any.whl (907 kB)
Using cached langchain-0.0.178-py3-none-any.whl (892 kB)
Using cached langchain-0.0.177-py3-none-any.whl (877 kB)
Collecting docarray<0.32.0,>=0.31.0
Using cached docarray-0.31.1-py3-none-any.whl (210 kB)
Collecting hnswlib<0.8.0,>=0.7.0
Using cached hnswlib-0.7.0.tar.gz (33 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting protobuf==3.19.6
Using cached protobuf-3.19.6-py2.py3-none-any.whl (162 kB)
Collecting langchain[all]
Using cached langchain-0.0.176-py3-none-any.whl (873 kB)
Using cached langchain-0.0.175-py3-none-any.whl (872 kB)
Using cached langchain-0.0.174-py3-none-any.whl (869 kB)
Collecting gql<4.0.0,>=3.4.1
Using cached gql-3.4.1-py2.py3-none-any.whl (65 kB)
Collecting langchain[all]
Using cached langchain-0.0.173-py3-none-any.whl (858 kB)
Using cached langchain-0.0.172-py3-none-any.whl (849 kB)
Using cached langchain-0.0.171-py3-none-any.whl (846 kB)
Using cached langchain-0.0.170-py3-none-any.whl (834 kB)
Using cached langchain-0.0.169-py3-none-any.whl (823 kB)
Using cached langchain-0.0.168-py3-none-any.whl (817 kB)
Using cached langchain-0.0.167-py3-none-any.whl (809 kB)
Collecting tqdm>=4.48.0
Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Collecting langchain[all]
Using cached langchain-0.0.166-py3-none-any.whl (803 kB)
Using cached langchain-0.0.165-py3-none-any.whl (789 kB)
Using cached langchain-0.0.164-py3-none-any.whl (788 kB)
Using cached langchain-0.0.163-py3-none-any.whl (781 kB)
Using cached langchain-0.0.162-py3-none-any.whl (770 kB)
Using cached langchain-0.0.161-py3-none-any.whl (758 kB)
Using cached langchain-0.0.160-py3-none-any.whl (756 kB)
Using cached langchain-0.0.159-py3-none-any.whl (747 kB)
Using cached langchain-0.0.158-py3-none-any.whl (745 kB)
Using cached langchain-0.0.157-py3-none-any.whl (727 kB)
Using cached langchain-0.0.156-py3-none-any.whl (727 kB)
Using cached langchain-0.0.155-py3-none-any.whl (727 kB)
Using cached langchain-0.0.154-py3-none-any.whl (709 kB)
Using cached langchain-0.0.153-py3-none-any.whl (696 kB)
Using cached langchain-0.0.152-py3-none-any.whl (666 kB)
Using cached langchain-0.0.151-py3-none-any.whl (665 kB)
Using cached langchain-0.0.150-py3-none-any.whl (648 kB)
Using cached langchain-0.0.149-py3-none-any.whl (645 kB)
Using cached langchain-0.0.148-py3-none-any.whl (636 kB)
Collecting SQLAlchemy<2,>=1
Using cached SQLAlchemy-1.4.49.tar.gz (8.5 MB)
Preparing metadata (setup.py) ... done
Collecting langchain[all]
Using cached langchain-0.0.147-py3-none-any.whl (626 kB)
Using cached langchain-0.0.146-py3-none-any.whl (600 kB)
Using cached langchain-0.0.145-py3-none-any.whl (590 kB)
Using cached langchain-0.0.144-py3-none-any.whl (578 kB)
Using cached langchain-0.0.143-py3-none-any.whl (566 kB)
Using cached langchain-0.0.142-py3-none-any.whl (548 kB)
Using cached langchain-0.0.141-py3-none-any.whl (540 kB)
Using cached langchain-0.0.140-py3-none-any.whl (539 kB)
Using cached langchain-0.0.139-py3-none-any.whl (530 kB)
Using cached langchain-0.0.138-py3-none-any.whl (520 kB)
Using cached langchain-0.0.137-py3-none-any.whl (518 kB)
Using cached langchain-0.0.136-py3-none-any.whl (515 kB)
Using cached langchain-0.0.135-py3-none-any.whl (511 kB)
Using cached langchain-0.0.134-py3-none-any.whl (510 kB)
Using cached langchain-0.0.133-py3-none-any.whl (500 kB)
Using cached langchain-0.0.132-py3-none-any.whl (489 kB)
Using cached langchain-0.0.131-py3-none-any.whl (477 kB)
Using cached langchain-0.0.130-py3-none-any.whl (472 kB)
Using cached langchain-0.0.129-py3-none-any.whl (467 kB)
Using cached langchain-0.0.128-py3-none-any.whl (465 kB)
Using cached langchain-0.0.127-py3-none-any.whl (462 kB)
Using cached langchain-0.0.126-py3-none-any.whl (450 kB)
Collecting boto3<2.0.0,>=1.26.96
Using cached boto3-1.28.38-py3-none-any.whl (135 kB)
Collecting langchain[all]
Using cached langchain-0.0.125-py3-none-any.whl (443 kB)
Using cached langchain-0.0.124-py3-none-any.whl (439 kB)
Using cached langchain-0.0.123-py3-none-any.whl (426 kB)
Using cached langchain-0.0.122-py3-none-any.whl (425 kB)
Using cached langchain-0.0.121-py3-none-any.whl (424 kB)
Using cached langchain-0.0.120-py3-none-any.whl (424 kB)
Using cached langchain-0.0.119-py3-none-any.whl (420 kB)
Using cached langchain-0.0.118-py3-none-any.whl (415 kB)
Using cached langchain-0.0.117-py3-none-any.whl (414 kB)
Using cached langchain-0.0.116-py3-none-any.whl (408 kB)
Using cached langchain-0.0.115-py3-none-any.whl (404 kB)
Using cached langchain-0.0.114-py3-none-any.whl (404 kB)
Using cached langchain-0.0.113-py3-none-any.whl (396 kB)
Using cached langchain-0.0.112-py3-none-any.whl (381 kB)
Using cached langchain-0.0.111-py3-none-any.whl (379 kB)
Using cached langchain-0.0.110-py3-none-any.whl (379 kB)
Using cached langchain-0.0.109-py3-none-any.whl (376 kB)
Using cached langchain-0.0.108-py3-none-any.whl (374 kB)
Using cached langchain-0.0.107-py3-none-any.whl (371 kB)
Using cached langchain-0.0.106-py3-none-any.whl (367 kB)
Using cached langchain-0.0.105-py3-none-any.whl (360 kB)
Using cached langchain-0.0.104-py3-none-any.whl (360 kB)
Using cached langchain-0.0.103-py3-none-any.whl (358 kB)
Using cached langchain-0.0.102-py3-none-any.whl (350 kB)
Using cached langchain-0.0.101-py3-none-any.whl (344 kB)
Using cached langchain-0.0.100-py3-none-any.whl (343 kB)
Using cached langchain-0.0.99-py3-none-any.whl (342 kB)
Using cached langchain-0.0.98-py3-none-any.whl (337 kB)
Using cached langchain-0.0.97-py3-none-any.whl (337 kB)
Using cached langchain-0.0.96-py3-none-any.whl (315 kB)
Using cached langchain-0.0.95-py3-none-any.whl (312 kB)
Using cached langchain-0.0.94-py3-none-any.whl (304 kB)
Using cached langchain-0.0.93-py3-none-any.whl (294 kB)
Using cached langchain-0.0.92-py3-none-any.whl (288 kB)
Using cached langchain-0.0.91-py3-none-any.whl (282 kB)
Using cached langchain-0.0.90-py3-none-any.whl (281 kB)
Using cached langchain-0.0.89-py3-none-any.whl (268 kB)
Using cached langchain-0.0.88-py3-none-any.whl (260 kB)
Using cached langchain-0.0.87-py3-none-any.whl (253 kB)
Using cached langchain-0.0.86-py3-none-any.whl (250 kB)
Using cached langchain-0.0.85-py3-none-any.whl (241 kB)
Using cached langchain-0.0.84-py3-none-any.whl (230 kB)
Using cached langchain-0.0.83-py3-none-any.whl (230 kB)
Using cached langchain-0.0.82-py3-none-any.whl (228 kB)
Using cached langchain-0.0.81-py3-none-any.whl (225 kB)
Collecting qdrant-client<0.12.0,>=0.11.7
Using cached qdrant_client-0.11.10-py3-none-any.whl (91 kB)
Collecting google-api-python-client==2.70.0
Using cached google_api_python_client-2.70.0-py2.py3-none-any.whl (10.7 MB)
Collecting wolframalpha==5.0.0
Using cached wolframalpha-5.0.0-py3-none-any.whl (7.5 kB)
Collecting nltk<4,>=3
Using cached nltk-3.8.1-py3-none-any.whl (1.5 MB)
Collecting elasticsearch<9,>=8
Using cached elasticsearch-8.9.0-py3-none-any.whl (395 kB)
Collecting langchain[all]
Using cached langchain-0.0.80-py3-none-any.whl (222 kB)
Using cached langchain-0.0.79-py3-none-any.whl (216 kB)
Using cached langchain-0.0.78-py3-none-any.whl (203 kB)
Using cached langchain-0.0.77-py3-none-any.whl (198 kB)
Using cached langchain-0.0.76-py3-none-any.whl (193 kB)
Using cached langchain-0.0.75-py3-none-any.whl (191 kB)
Using cached langchain-0.0.74-py3-none-any.whl (189 kB)
Collecting redis<5,>=4
Using cached redis-4.6.0-py3-none-any.whl (241 kB)
Collecting tiktoken<1,>=0
Using cached tiktoken-0.4.0-cp310-cp310-macosx_11_0_arm64.whl (761 kB)
Collecting torch<2,>=1
Using cached torch-1.13.1-cp310-none-macosx_11_0_arm64.whl (53.2 MB)
Collecting httplib2<1dev,>=0.15.0
Using cached httplib2-0.22.0-py3-none-any.whl (96 kB)
Collecting google-auth<3.0.0dev,>=1.19.0
Using cached google_auth-2.22.0-py2.py3-none-any.whl (181 kB)
Collecting google-auth-httplib2>=0.1.0
Using cached google_auth_httplib2-0.1.0-py2.py3-none-any.whl (9.3 kB)
Collecting uritemplate<5,>=3.0.1
Using cached uritemplate-4.1.1-py2.py3-none-any.whl (10 kB)
Collecting xmltodict
Using cached xmltodict-0.13.0-py2.py3-none-any.whl (10.0 kB)
Collecting more-itertools
Using cached more_itertools-10.1.0-py3-none-any.whl (55 kB)
Collecting jaraco.context
Using cached jaraco.context-4.3.0-py3-none-any.whl (5.3 kB)
Collecting soupsieve>1.2
Using cached soupsieve-2.4.1-py3-none-any.whl (36 kB)
Collecting marshmallow<4.0.0,>=3.18.0
Using cached marshmallow-3.20.1-py3-none-any.whl (49 kB)
Collecting typing-inspect<1,>=0.4.0
Using cached typing_inspect-0.9.0-py3-none-any.whl (8.8 kB)
Collecting elastic-transport<9,>=8
Using cached elastic_transport-8.4.0-py3-none-any.whl (59 kB)
Collecting MarkupSafe>=2.0
Using cached MarkupSafe-2.1.3-cp310-cp310-macosx_10_9_universal2.whl (17 kB)
Collecting dill>=0.3.5
Using cached dill-0.3.7-py3-none-any.whl (115 kB)
Collecting sqlitedict>=2.0.0
Using cached sqlitedict-2.1.0.tar.gz (21 kB)
Preparing metadata (setup.py) ... done
Collecting joblib
Using cached joblib-1.3.2-py3-none-any.whl (302 kB)
Collecting click
Using cached click-8.1.7-py3-none-any.whl (97 kB)
Collecting regex>=2021.8.3
Using cached regex-2023.8.8-cp310-cp310-macosx_11_0_arm64.whl (289 kB)
Collecting python-dateutil>=2.5.3
Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting typing-extensions>=3.7.4
Using cached typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Collecting urllib3>=1.21.1
Using cached urllib3-2.0.4-py3-none-any.whl (123 kB)
Collecting dnspython>=2.0.0
Using cached dnspython-2.4.2-py3-none-any.whl (300 kB)
Collecting loguru>=0.5.0
Using cached loguru-0.7.0-py3-none-any.whl (59 kB)
Collecting grpcio-tools>=1.41.0
Using cached grpcio_tools-1.57.0-cp310-cp310-macosx_12_0_universal2.whl (4.6 MB)
Collecting httpx[http2]>=0.14.0
Using cached httpx-0.24.1-py3-none-any.whl (75 kB)
Collecting grpcio>=1.41.0
Using cached grpcio-1.57.0-cp310-cp310-macosx_12_0_universal2.whl (9.0 MB)
Collecting charset-normalizer<4,>=2
Using cached charset_normalizer-3.2.0-cp310-cp310-macosx_11_0_arm64.whl (124 kB)
Collecting certifi>=2017.4.17
Using cached certifi-2023.7.22-py3-none-any.whl (158 kB)
Collecting idna<4,>=2.5
Using cached idna-3.4-py3-none-any.whl (61 kB)
Collecting wasabi<1.2.0,>=0.9.1
Using cached wasabi-1.1.2-py3-none-any.whl (27 kB)
Collecting preshed<3.1.0,>=3.0.2
Using cached preshed-3.0.8-cp310-cp310-macosx_11_0_arm64.whl (101 kB)
Requirement already satisfied: setuptools in ./.venv/lib/python3.10/site-packages (from spacy<4,>=3->langchain[all]) (67.6.1)
Collecting srsly<3.0.0,>=2.4.3
Using cached srsly-2.4.7-cp310-cp310-macosx_11_0_arm64.whl (491 kB)
Collecting cymem<2.1.0,>=2.0.2
Using cached cymem-2.0.7-cp310-cp310-macosx_11_0_arm64.whl (30 kB)
Collecting typer<0.10.0,>=0.3.0
Using cached typer-0.9.0-py3-none-any.whl (45 kB)
Collecting spacy-loggers<2.0.0,>=1.0.0
Using cached spacy_loggers-1.0.4-py3-none-any.whl (11 kB)
Collecting murmurhash<1.1.0,>=0.28.0
Using cached murmurhash-1.0.9-cp310-cp310-macosx_11_0_arm64.whl (19 kB)
Collecting thinc<8.2.0,>=8.1.8
Using cached thinc-8.1.12-cp310-cp310-macosx_11_0_arm64.whl (784 kB)
Collecting packaging>=20.0
Using cached packaging-23.1-py3-none-any.whl (48 kB)
Collecting catalogue<2.1.0,>=2.0.6
Using cached catalogue-2.0.9-py3-none-any.whl (17 kB)
Collecting pathy>=0.10.0
Using cached pathy-0.10.2-py3-none-any.whl (48 kB)
Collecting spacy-legacy<3.1.0,>=3.0.11
Using cached spacy_legacy-3.0.12-py2.py3-none-any.whl (29 kB)
Collecting smart-open<7.0.0,>=5.2.1
Using cached smart_open-6.3.0-py3-none-any.whl (56 kB)
Collecting langcodes<4.0.0,>=3.2.0
Using cached langcodes-3.3.0-py3-none-any.whl (181 kB)
Collecting filelock
Using cached filelock-3.12.3-py3-none-any.whl (11 kB)
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1
Using cached tokenizers-0.13.3-cp310-cp310-macosx_12_0_arm64.whl (3.9 MB)
Collecting safetensors>=0.3.1
Using cached safetensors-0.3.3-cp310-cp310-macosx_13_0_arm64.whl (406 kB)
Collecting validators<=0.21.0,>=0.18.2
Using cached validators-0.21.0-py3-none-any.whl (27 kB)
Collecting authlib>=1.1.0
Using cached Authlib-1.2.1-py2.py3-none-any.whl (215 kB)
Collecting cryptography>=3.2
Using cached cryptography-41.0.3-cp37-abi3-macosx_10_12_universal2.whl (5.3 MB)
Collecting urllib3>=1.21.1
Using cached urllib3-1.26.16-py2.py3-none-any.whl (143 kB)
Collecting googleapis-common-protos<2.0.dev0,>=1.56.2
Using cached googleapis_common_protos-1.60.0-py2.py3-none-any.whl (227 kB)
Collecting protobuf!=3.20.0,!=3.20.1,!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<5.0.0.dev0,>=3.19.5
Using cached protobuf-4.24.2-cp37-abi3-macosx_10_9_universal2.whl (409 kB)
Collecting pyasn1-modules>=0.2.1
Using cached pyasn1_modules-0.3.0-py2.py3-none-any.whl (181 kB)
Collecting six>=1.9.0
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting rsa<5,>=3.1.4
Using cached rsa-4.9-py3-none-any.whl (34 kB)
Collecting cachetools<6.0,>=2.0.0
Using cached cachetools-5.3.1-py3-none-any.whl (9.3 kB)
Collecting pyparsing!=3.0.0,!=3.0.1,!=3.0.2,!=3.0.3,<4,>=2.4.2
Using cached pyparsing-3.1.1-py3-none-any.whl (103 kB)
Collecting httpcore<0.18.0,>=0.15.0
Using cached httpcore-0.17.3-py3-none-any.whl (74 kB)
Collecting sniffio
Using cached sniffio-1.3.0-py3-none-any.whl (10 kB)
Collecting h2<5,>=3
Using cached h2-4.1.0-py3-none-any.whl (57 kB)
Collecting fsspec
Using cached fsspec-2023.6.0-py3-none-any.whl (163 kB)
Collecting blis<0.8.0,>=0.7.8
Using cached blis-0.7.10-cp310-cp310-macosx_11_0_arm64.whl (1.1 MB)
Collecting confection<1.0.0,>=0.0.1
Using cached confection-0.1.1-py3-none-any.whl (34 kB)
Collecting mypy-extensions>=0.3.0
Using cached mypy_extensions-1.0.0-py3-none-any.whl (4.7 kB)
Collecting cffi>=1.12
Using cached cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl (174 kB)
Collecting hyperframe<7,>=6.0
Using cached hyperframe-6.0.1-py3-none-any.whl (12 kB)
Collecting hpack<5,>=4.0
Using cached hpack-4.0.0-py3-none-any.whl (32 kB)
Collecting anyio<5.0,>=3.0
Using cached anyio-4.0.0-py3-none-any.whl (83 kB)
Collecting h11<0.15,>=0.13
Using cached h11-0.14.0-py3-none-any.whl (58 kB)
Collecting pyasn1<0.6.0,>=0.4.6
Using cached pyasn1-0.5.0-py2.py3-none-any.whl (83 kB)
Collecting exceptiongroup>=1.0.2
Using cached exceptiongroup-1.1.3-py3-none-any.whl (14 kB)
Collecting pycparser
Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
Installing collected packages: tokenizers, sqlitedict, safetensors, faiss-cpu, cymem, xmltodict, wasabi, validators, urllib3, uritemplate, typing-extensions, tqdm, SQLAlchemy, spacy-loggers, spacy-legacy, soupsieve, sniffio, smart-open, six, regex, PyYAML, pyparsing, pycparser, pyasn1, protobuf, packaging, numpy, mypy-extensions, murmurhash, more-itertools, MarkupSafe, loguru, langcodes, joblib, jaraco.context, idna, hyperframe, hpack, h11, grpcio, fsspec, exceptiongroup, dnspython, dill, click, charset-normalizer, certifi, catalogue, cachetools, async-timeout, wolframalpha, typing-inspect, typer, torch, srsly, rsa, requests, redis, python-dateutil, pydantic, pyasn1-modules, preshed, nltk, marshmallow, jinja2, httplib2, h2, grpcio-tools, googleapis-common-protos, filelock, elastic-transport, cffi, blis, beautifulsoup4, anyio, wikipedia, tiktoken, pinecone-client, pathy, manifest-ml, huggingface_hub, httpcore, google-auth, elasticsearch, dataclasses-json, cryptography, confection, transformers, thinc, langchain, httpx, google-auth-httplib2, google-api-core, authlib, weaviate-client, spacy, google-api-python-client, qdrant-client
DEPRECATION: sqlitedict is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for sqlitedict ... done
DEPRECATION: SQLAlchemy is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for SQLAlchemy ... done
DEPRECATION: wikipedia is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for wikipedia ... done
Successfully installed MarkupSafe-2.1.3 PyYAML-6.0.1 SQLAlchemy-1.4.49 anyio-4.0.0 async-timeout-4.0.3 authlib-1.2.1 beautifulsoup4-4.12.2 blis-0.7.10 cachetools-5.3.1 catalogue-2.0.9 certifi-2023.7.22 cffi-1.15.1 charset-normalizer-3.2.0 click-8.1.7 confection-0.1.1 cryptography-41.0.3 cymem-2.0.7 dataclasses-json-0.5.14 dill-0.3.7 dnspython-2.4.2 elastic-transport-8.4.0 elasticsearch-8.9.0 exceptiongroup-1.1.3 faiss-cpu-1.7.4 filelock-3.12.3 fsspec-2023.6.0 google-api-core-2.11.1 google-api-python-client-2.70.0 google-auth-2.22.0 google-auth-httplib2-0.1.0 googleapis-common-protos-1.60.0 grpcio-1.57.0 grpcio-tools-1.57.0 h11-0.14.0 h2-4.1.0 hpack-4.0.0 httpcore-0.17.3 httplib2-0.22.0 httpx-0.24.1 huggingface_hub-0.16.4 hyperframe-6.0.1 idna-3.4 jaraco.context-4.3.0 jinja2-3.1.2 joblib-1.3.2 langchain-0.0.74 langcodes-3.3.0 loguru-0.7.0 manifest-ml-0.0.1 marshmallow-3.20.1 more-itertools-10.1.0 murmurhash-1.0.9 mypy-extensions-1.0.0 nltk-3.8.1 numpy-1.25.2 packaging-23.1 pathy-0.10.2 pinecone-client-2.2.2 preshed-3.0.8 protobuf-4.24.2 pyasn1-0.5.0 pyasn1-modules-0.3.0 pycparser-2.21 pydantic-1.10.12 pyparsing-3.1.1 python-dateutil-2.8.2 qdrant-client-0.11.10 redis-4.6.0 regex-2023.8.8 requests-2.31.0 rsa-4.9 safetensors-0.3.3 six-1.16.0 smart-open-6.3.0 sniffio-1.3.0 soupsieve-2.4.1 spacy-3.6.1 spacy-legacy-3.0.12 spacy-loggers-1.0.4 sqlitedict-2.1.0 srsly-2.4.7 thinc-8.1.12 tiktoken-0.4.0 tokenizers-0.13.3 torch-1.13.1 tqdm-4.66.1 transformers-4.32.1 typer-0.9.0 typing-extensions-4.7.1 typing-inspect-0.9.0 uritemplate-4.1.1 urllib3-1.26.16 validators-0.21.0 wasabi-1.1.2 weaviate-client-3.23.2 wikipedia-1.4.0 wolframalpha-5.0.0 xmltodict-0.13.0
[notice] A new release of pip is available: 23.0.1 -> 23.2.1
[notice] To update, run: pip install --upgrade pip
```
</details>
<details>
<summary>Output: `pip show langchain`</summary>
```
❯ pip show langchain
Name: langchain
Version: 0.0.74
Summary: Building applications with LLMs through composability
Home-page: https://www.github.com/hwchase17/langchain
Author:
Author-email:
License: MIT
Location: /Users/amahanna/Desktop/temp/.venv/lib/python3.10/site-packages
Requires: dataclasses-json, numpy, pydantic, PyYAML, requests, SQLAlchemy
Required-by:
```
</details>
This happens in a fresh `python -m venv` environment.
</div> | using `pip install langchain[all]` in a `venv` installs langchain 0.0.74 | https://api.github.com/repos/langchain-ai/langchain/issues/10285/comments | 1 | 2023-09-06T12:53:05Z | 2023-09-26T19:17:13Z | https://github.com/langchain-ai/langchain/issues/10285 | 1,883,953,869 | 10,285 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Currently there isn't a feature that lets us actually manipulate/change the dataframe and then save it.
**Examples:**
If I say "rename the columns to Column1, Column2, Column3", or
"make a new column named Column4 containing the data of Column1 and Column2 concatenated",
the agent should actually perform these operations and change the dataframe. And once the user is done prompting, there should be a function, or a specific prompt, that the user can invoke to save the manipulated dataframe as a CSV.
```
agent = create_pandas_dataframe_agent(
ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
df,
return_intermediate_steps=True,
verbose=True,
)
```
With the code above, the agent returns good analytics and even the actual code that would perform the operations, but it doesn't apply the manipulations exactly as instructed. Plus, we can't save the dataframe. (See the workaround sketch below.)
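A hedged workaround sketch (not an official API; it relies on the assumption that the agent's Python tool executes against the very same `df` object passed to `create_pandas_dataframe_agent`, so in-place mutations survive the run):
```python
# Hedged sketch: in-place edits made by the agent's Python tool persist on `df`;
# a reassignment like `df = ...` inside the tool would NOT rebind this outer variable.
agent.run("Rename the columns to Column1, Column2 and Column3, in place.")
df.to_csv("manipulated.csv", index=False)  # hypothetical output path
```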
### Motivation
One could use this feature to actually manipulate data based on prompts, which would be useful across the data science industry.
### Your contribution
I looked into the source code but couldn't locate the implementation of the agent itself. | [create_csv_agent] Change dataframe as according to the prompt, and also save when required | https://api.github.com/repos/langchain-ai/langchain/issues/10281/comments | 2 | 2023-09-06T10:51:41Z | 2024-02-08T16:25:01Z | https://github.com/langchain-ai/langchain/issues/10281 | 1,883,753,627 | 10,281
[
"hwchase17",
"langchain"
]
| ### System Info
I am running this code and getting the error below.
Code:
```python
from langchain.agents import load_tools, tool, Tool, AgentType, initialize_agent
from langchain.llms import AzureOpenAI
from langchain import LLMMathChain

# deployment_name and model_name are defined elsewhere
llm = AzureOpenAI(deployment_name=deployment_name, model_name=model_name, temperature=0)
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)


@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    try:
        return len(word)
    except Exception:
        return 20


math_tool = Tool(
    name="Calculator",
    func=llm_math_chain.run,
    description="useful for when you need to answer questions about math",
)
tools = [get_word_length, math_tool]
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Tell me length of word - hippotamus + 5")
```
Error:
```
File ~\rhino\venv\lib\site-packages\langchain\agents\mrkl\output_parser.py:42, in MRKLOutputParser.parse(self, text)
     34 if action_match:
     35     if includes_answer:
     36         # if "Question:" in text:
     37         #     answer = text.split('Question:')[0].strip()
        (...)
     40         # )
     41         # else:
---> 42         raise OutputParserException(
     43             f"{FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE}: {text}"
     44         )
     45 action = action_match.group(1).strip()
     46 action_input = action_match.group(2)

OutputParserException: Parsing LLM output produced both a final answer and a parse-able action:: I now know the final answer
Final Answer: 15
Question: What is 765 * 23?
Thought: I need to multiply 765 by 23
Action: Calculator
Action Input: 765 * 23
```
Reason: the LLM output contains "Final Answer:" plus an additional (hallucinated) question ("Question: What is 765 * 23?"), which triggers this exception.
Possible fix: at line 32 of `output_parser.py`, modify as below:
```python
if action_match:
    if includes_answer:
        if "Question:" in text:
            answer = text.split('Question:')[0].strip()
            return AgentFinish(
                {"output": answer.split(FINAL_ANSWER_ACTION)[-1].strip()}, text
            )
        else:
            raise OutputParserException(
                f"{FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE}: {text}"
            )
```
This fixes my reported issue. Can we add this solution to handle the LLM hallucination problem and still get the final answer?
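In the meantime, a hedged workaround (using the executor's existing `handle_parsing_errors` option rather than patching the parser) is to let the agent recover from the parse failure instead of raising:
```python
# Hedged sketch: feed parsing errors back to the LLM instead of raising,
# which often breaks the hallucination loop.
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,
)
```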
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the script from the "Code:" section above (same tools, same agent, same `agent.run("Tell me length of word - hippotamus + 5")` call).
### Expected behavior
The run instead fails with the same `OutputParserException` shown above.
Expected answer: 15
Reason for failure: the LLM output contains "Final Answer:" plus a hallucinated follow-up question ("Question: What is 765 * 23?"), which triggers the exception.
Applying the possible fix proposed above (line 32 of `output_parser.py`) resolves this.
This fixes my reported issue. Can we add this solution so the LLM hallucination problem no longer blocks the final answer or sends the agent into a loop? | Unable to Parse Final Answer through mrkl.output_parser | https://api.github.com/repos/langchain-ai/langchain/issues/10278/comments | 1 | 2023-09-06T08:37:38Z | 2023-12-13T16:05:33Z | https://github.com/langchain-ai/langchain/issues/10278 | 1,883,529,921 | 10,278
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Given a question in natural language, it has to be converted to a SQL query, run against the SQL database, and then the answer has to be returned.
Here I want to capture both the generated SQL query and the response to the question.
Earlier I did this using `SQLDatabaseChain`,
but now I can't find it in your documentation.
So what is supported for my use case now? (A sketch of what I was doing is below.)
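For reference, a minimal hedged sketch of the old approach (assuming `SQLDatabaseChain` now lives in `langchain_experimental`; the connection URI is a placeholder):
```python
# Hedged sketch: capture both the generated SQL and the answer.
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain

db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder URI
chain = SQLDatabaseChain.from_llm(
    OpenAI(temperature=0), db, return_intermediate_steps=True
)
result = chain("How many rows are in the users table?")
print(result["intermediate_steps"])  # includes the generated SQL
print(result["result"])              # the natural-language answer
```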
### Suggestion:
_No response_ | query database using natural language | https://api.github.com/repos/langchain-ai/langchain/issues/10277/comments | 21 | 2023-09-06T08:16:51Z | 2023-12-13T16:05:39Z | https://github.com/langchain-ai/langchain/issues/10277 | 1,883,494,749 | 10,277 |
[
"hwchase17",
"langchain"
]
| GPU usage doesn't change and my Local Llama model only runs under CPU
I'm running a local Llama 2 7B model (a Hugging Face "TheBloke" build) on my local machine. Everything works fine, except that only the CPU does any work and GPU usage stays at 0 throughout. I've set `n_gpu_layers` and other such options, but the GPU just isn't used for some reason.
Device RAM: 16 GB, VRAM: 6 GB.
I can see 100% usage on my CPU, but nothing changes with respect to GPU usage.
This is my code in Python:
```python
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from transformers import AutoModelForCausalLM, AutoTokenizer

# MODEL_PATH = r"D:\yarn-llama-2-7b-128k.Q2_K.gguf"
MODEL_PATH = r"D:\llama-2-7b-chat.Q3_K_M.gguf"  # raw string so the backslash is kept literally


def load_model():
    """Loads Llama model"""
    callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
    Llama_model = LlamaCpp(
        model_path=MODEL_PATH,
        temperature=0.5,
        max_tokens=2000,
        top_p=1,
        callback_manager=callback_manager,
        verbose=True,
        n_gpu_layers=100,
        n_batch=512,
    )
    return Llama_model


llm = load_model()
model_prompt = """
a discussion between hitler and buddha
"""
response = llm(model_prompt)
print(response)
```
Does anybody have any idea how to get the GPU involved?
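One hedged hunch (an assumption on my side, not something confirmed here): the prebuilt `llama-cpp-python` wheels are CPU-only, so `n_gpu_layers` is silently ignored unless the package was compiled with GPU support; reinstalling with something like `CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install --force-reinstall --no-cache-dir llama-cpp-python` produces a CUDA build. When offloading actually works, the model-load output mentions layers being offloaded to the GPU.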
### Suggestion:
_No response_ | GPU usage 0 even with n_gpu_layers. | https://api.github.com/repos/langchain-ai/langchain/issues/10276/comments | 4 | 2023-09-06T07:56:48Z | 2024-03-23T16:05:21Z | https://github.com/langchain-ai/langchain/issues/10276 | 1,883,463,052 | 10,276 |
[
"hwchase17",
"langchain"
]
| ### System Info
npm --version
8.19.4
The langchain version it's trying to install is 0.0.144.
I think it's because chromadb recently released 1.5.7 and 1.5.8 which added a (conflicting) dep on cohere-ai. Pinning chromadb 1.5.6 works, but that's hardly satisfying.
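(A hedged interim workaround, consistent with the note above: pin chromadb, e.g. `npm add langchain [email protected]`, or retry with `--legacy-peer-deps` as npm itself suggests.)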
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
mkdir testproject && cd testproject && npm init -y && npm add langchain
### Expected behavior
Expected to install properly. Instead get
```
npm add langchain
npm ERR! code ERESOLVE
npm ERR! ERESOLVE unable to resolve dependency tree
npm ERR!
npm ERR! While resolving: [email protected]
npm ERR! Found: [email protected]
npm ERR! node_modules/cohere-ai
npm ERR! peerOptional cohere-ai@"^6.0.0" from [email protected]
npm ERR! node_modules/chromadb
npm ERR! chromadb@"^1.5.6" from the root project
npm ERR! peerOptional chromadb@"^1.5.3" from [email protected]
npm ERR! node_modules/langchain
npm ERR! langchain@"*" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peerOptional cohere-ai@"^5.0.2" from [email protected]
npm ERR! node_modules/langchain
npm ERR! langchain@"*" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
``` | langchain cannot be installed in a new project | https://api.github.com/repos/langchain-ai/langchain/issues/10274/comments | 3 | 2023-09-06T06:53:40Z | 2023-09-25T14:34:05Z | https://github.com/langchain-ai/langchain/issues/10274 | 1,883,364,481 | 10,274 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
[This](https://python.langchain.com/docs/modules/chains/how_to/call_methods#:~:text=See%20an%20example%20here.) link is not working.
<img width="1228" alt="image" src="https://github.com/langchain-ai/langchain/assets/84584929/19a6f523-8584-4338-80e0-ffe78c51714a">
### Idea or request for content:
_No response_ | DOC: One link in Chain->Diff cll methods is not working | https://api.github.com/repos/langchain-ai/langchain/issues/10272/comments | 2 | 2023-09-06T04:25:53Z | 2023-12-13T16:05:43Z | https://github.com/langchain-ai/langchain/issues/10272 | 1,883,176,962 | 10,272 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Given a question in natural language, it has to be converted to a SQL query, run against the SQL database, and then the answer has to be returned.
Earlier I did by using **SQLDatabaseChain**
But now I can't find this in your documentation.
So what is supported for my use case now?
### Suggestion:
_No response_ | Query data base using natural language | https://api.github.com/repos/langchain-ai/langchain/issues/10270/comments | 1 | 2023-09-06T03:55:29Z | 2023-09-06T08:44:16Z | https://github.com/langchain-ai/langchain/issues/10270 | 1,883,141,864 | 10,270 |
[
"hwchase17",
"langchain"
]
| ### System Info
Our startup uses this line all across our scripts.
`HuggingFaceBgeEmbeddings(model_name="BAAI/bge-large-en")`
Today someone updated langchain and everything stopped working. Please help!!
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
We checked whether it is in the docs:
https://api.python.langchain.com/en/latest/embeddings/langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings.html
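(A hedged stopgap while we debug, with the version number purely illustrative: pin the last known-good release, e.g. `pip install "langchain==0.0.279"`, whichever version we were on before the update.)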
### Expected behavior
Should import and work. | Our whole business is down. please help!! HuggingFaceBgeEmbeddings(model_name="BAAI/bge-large-en") | https://api.github.com/repos/langchain-ai/langchain/issues/10268/comments | 5 | 2023-09-06T03:31:13Z | 2024-01-30T00:48:34Z | https://github.com/langchain-ai/langchain/issues/10268 | 1,883,115,087 | 10,268 |
[
"hwchase17",
"langchain"
]
| ### System Info
Since we updated to the latest langchain, we are also getting this issue. We are a startup that just launched; we updated yesterday and now all our apps are down, and our first customer is annoyed.
**Please, please help!**
Name: langchain
Version: 0.0.281
```
data_state_nsw_legisation_index_instance = FAISS.load_local("data_indexes/federal/federal_legislativeinstruments_inforce_index", embeddings)
  File "/opt/homebrew/lib/python3.10/site-packages/langchain/vectorstores/faiss.py", line 472, in load_local
    docstore, index_to_docstore_id = pickle.load(f)
ModuleNotFoundError: No module named 'langchain.schema.document'; 'langchain.schema' is not a package
```
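A hedged reading of the traceback (our guess, not confirmed): `FAISS.load_local` unpickles the saved docstore, and that pickle stores the module paths of the langchain version that wrote it; if a newer release moves `langchain.schema.document`, old pickles stop importing. Pinning back to the version that built the index, or loading there and re-saving, should work around it.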
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
data_state_nsw_legisation_runner = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, openai_api_key=openai_api_key_value),
    chain_type="stuff",
    retriever=data_state_nsw_legisation_index_instance.as_retriever(),
)
```
### Expected behavior
Yesterday, before we updated, it was working perfectly; we updated to the latest version and it all stopped :(
**Please help!** the team is scrambling | ModuleNotFoundError: No module named 'langchain.schema.document'; 'langchain.schema' is not a package | https://api.github.com/repos/langchain-ai/langchain/issues/10266/comments | 4 | 2023-09-06T02:53:08Z | 2023-12-14T16:05:38Z | https://github.com/langchain-ai/langchain/issues/10266 | 1,883,078,790 | 10,266 |
[
"hwchase17",
"langchain"
]
| ### System Info
Yesterday it worked; someone accidentally updated langchain and now the whole platform is down.
We built the whole platform using this code all over the place. Now nothing works.
We have around 50 models. All our models are built like this, and we just went live as a startup.
**We are scrambling here, guys. Please help us.**
```
from langchain.embeddings import HuggingFaceBgeEmbeddings

embeddings = HuggingFaceBgeEmbeddings(model_name="BAAI/bge-large-en")
news_instance = FAISS.load_local("federal_legislativeinstruments_inforce_index", embeddings)
data_state_nsw_legisation_index_instance = FAISS.load_local("data_indexes/federal/federal_legislativeinstruments_inforce_index", embeddings)
data_state_nsw_legisation_runner = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, openai_api_key=openai_api_key_value),
    chain_type="stuff",
    retriever=data_state_nsw_legisation_index_instance.as_retriever(),
)
```
Please, please help. How do we refactor this so it works? The team is going crazy trying to get it live again, and our very first customers are ringing us to complain.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the snippet above (same imports and `FAISS.load_local` calls) after upgrading langchain to the latest version.
### Expected behavior
The embeddings should load like they did yesterday and every time before. | HuggingFaceBgeEmbeddings Error, Please help! | https://api.github.com/repos/langchain-ai/langchain/issues/10263/comments | 6 | 2023-09-06T02:42:31Z | 2023-12-18T23:47:49Z | https://github.com/langchain-ai/langchain/issues/10263 | 1,883,067,486 | 10,263
[
"hwchase17",
"langchain"
]
| ### System Info
For some reason SystemMessage does not work for me (agent ignores it). Here is my code:
```
system_message = SystemMessage(content="write response in uppercase")
agent_kwargs = {
    "system_message": system_message,
}
agent_func = create_pandas_dataframe_agent(
    llm,
    df,
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    agent_kwargs=agent_kwargs,
)
```
I also tried passing system_message directly, but the agent still ignores it:
```
system_message = SystemMessage(content="write response in uppercase")
agent_func = create_pandas_dataframe_agent(
    llm,
    df,
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    system_message=system_message,
)
```
Also, I tried to use `system_message.content` instead of `system_message`, but still no luck.
Langchain version is 0.0.281
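A hedged alternative worth trying (based on the documented `prefix` parameter of `create_pandas_dataframe_agent`; as far as I can tell from the source, for OPENAI_FUNCTIONS the prefix is used to build the system message):
```python
# Hedged sketch: pass the instruction via `prefix` instead of SystemMessage.
agent_func = create_pandas_dataframe_agent(
    llm,
    df,
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    prefix="Write every response in uppercase.",
)
```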
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
let me know if additional info is needed
### Expected behavior
create_pandas_dataframe_agent should work with SystemMessage | SystemMessage in create_pandas_dataframe_agent | https://api.github.com/repos/langchain-ai/langchain/issues/10256/comments | 5 | 2023-09-05T22:30:56Z | 2024-02-13T16:12:47Z | https://github.com/langchain-ai/langchain/issues/10256 | 1,882,809,906 | 10,256 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Azure Cognitive Search already has [hybrid search functionality](https://python.langchain.com/docs/integrations/vectorstores/azuresearch#perform-a-hybrid-search), so it makes sense to add SelfQueryRetriever support as well.
### Motivation
Azure Cognitive Search is a production-ready solution.
### Your contribution
I can help with testing of this feature | Add Support of Azure Cognitive Search for SelfQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/10254/comments | 4 | 2023-09-05T21:30:47Z | 2024-06-25T06:25:08Z | https://github.com/langchain-ai/langchain/issues/10254 | 1,882,750,331 | 10,254 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.281, pydantic 1.10.9, gpt4all 1.0.9, Linux Gardua(Arch), Python 3.11.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is my code:
```python
from langchain.embeddings import GPT4AllEmbeddings
gpt4all_embd = GPT4AllEmbeddings()
```
I get this error:
```
Found model file at /home/chigoma333/.cache/gpt4all/ggml-all-MiniLM-L6-v2-f16.bin
Invalid model file
Traceback (most recent call last):
  File "/home/chigoma333/Desktop/Program/test.py", line 3, in <module>
    gpt4all_embd = GPT4AllEmbeddings()
                   ^^^^^^^^^^^^^^^^^^^
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for GPT4AllEmbeddings
__root__
  Unable to instantiate model (type=value_error)
```
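A hedged guess at the cause (not verified): newer `gpt4all` releases changed the supported model format, so a cached `ggml-all-MiniLM-L6-v2-f16.bin` downloaded by an older install can be rejected as an invalid model; deleting the file under `~/.cache/gpt4all/` so it re-downloads, or matching the `gpt4all` package version to the model file, may resolve it.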
### Expected behavior
I expected the GPT4AllEmbeddings instance to be created successfully without errors. | Error when Instantiating GPT4AllEmbeddings | https://api.github.com/repos/langchain-ai/langchain/issues/10251/comments | 2 | 2023-09-05T19:44:51Z | 2023-11-19T20:42:12Z | https://github.com/langchain-ai/langchain/issues/10251 | 1,882,619,158 | 10,251 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
PyYAML has [known issues](https://github.com/yaml/pyyaml/issues/724) with version 5.4.1, and several other libraries are rolling their requirements back to allow compatibility.
### Suggestion:
Lower requirement to allow PyYAML 5.3.1 | Issue: Lower requirements for PyYAML to 5.3.1 due to Cython bug | https://api.github.com/repos/langchain-ai/langchain/issues/10243/comments | 0 | 2023-09-05T17:33:43Z | 2023-09-18T15:13:05Z | https://github.com/langchain-ai/langchain/issues/10243 | 1,882,449,770 | 10,243 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
langchain[qdrant]==0.0.281
qdrant-client==1.4.0
```
Qdrant server v1.4.1
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I started seeing this issue after using `force_recreate=True`, so for example this is the method I am using to populate my collection:
```python
qdrant.from_documents(
    docs,
    url=self.settings.QDRANT_API_URL,
    collection_name=collection,
    embedding=self.embeddings,
    force_recreate=True,
    replication_factor=3,
    timeout=60,
)
```
And I keep getting the following error messages in my logs:
```
[2023-09-05T16:33:45.308Z WARN storage::content_manager::consensus_manager] Failed to apply collection meta operation entry with user error: Wrong input: Replica 6838680705292431 of shard 4 has state Some(Active), but expected Some(Initializing)
```
I found where this error is being raised on Qdrant: https://github.com/qdrant/qdrant/blob/383fecf64b6d97e4718deb2bf0f46422060e7e52/lib/collection/src/collection.rs#L339
I understand the issue might be in the Qdrant client or the Qdrant server itself.
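One hedged observation (speculation, not verified against the server code): since the data lands correctly, the WARN may just be noise from the recreate path, where `force_recreate=True` combined with `replication_factor=3` makes the server re-register replicas that are already `Active`; that would make it a server-side logging quirk rather than a client bug.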
### Expected behavior
This message doesn't make sense, and the data is being stored without any issues. | Error message when using force_recreate on Qdrant | https://api.github.com/repos/langchain-ai/langchain/issues/10241/comments | 2 | 2023-09-05T17:12:31Z | 2023-12-13T16:05:58Z | https://github.com/langchain-ai/langchain/issues/10241 | 1,882,418,290 | 10,241 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using OpenSearch as the vector database in langchain to store documents, and I'm integrating it with an agent via a tool so that, for any query, it looks up the relevant information and uses a GPT model to produce an enhanced response.
I am hitting a "not a valid tool" error while running the code. Any suggestions to fix it?
Error: `Observation: [faq] is not a valid tool, try one of [faq].`
Code:
```python
import re
from langchain import OpenAI, PromptTemplate, VectorDBQA, LLMChain
from langchain.agents import Tool, initialize_agent, AgentExecutor, AgentOutputParser, LLMSingleActionAgent
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import ConversationChain, RetrievalQA
from langchain.chains.conversation.memory import ConversationBufferMemory, ConversationSummaryBufferMemory,
CombinedMemory
import pypdf
from langchain.prompts import StringPromptTemplate
from langchain.schema import HumanMessage, AgentAction, AgentFinish
from langchain.text_splitter import CharacterTextSplitter
from langchain.tools import BaseTool
from langchain.memory import ConversationBufferWindowMemory
import os
from typing import List, Union, Optional
from langchain.memory import ConversationSummaryBufferMemory
from langchain.vectorstores import Chroma, OpenSearchVectorSearch
os.environ['OPENAI_API_KEY'] = "api_key"
embeddings = OpenAIEmbeddings()
llm = OpenAI(temperature=0.7)
docsearch = OpenSearchVectorSearch(
index_name="customer_data",
embedding_function=embeddings,
opensearch_url="host_url",
)
qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=docsearch.as_retriever(),
)
tools = [
Tool(
name="faq",
func=qa.run(),
description="Useful when you have to answer FAQ's"
)]
template = """Your are a support representative, Customers reaches you for queries. refer the tools and answer.
You have access to the following tools to answer the question.
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, can be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question.
Begin!
Previous conversation history:
{history}
New Question: {input}
{agent_scratchpad}"""
# Set up a prompt template
class CustomPromptTemplate(StringPromptTemplate):
# The template to use
template: str
# The list of tools available
tools: List[Tool]
def format(self, **kwargs) -> str:
# Get the intermediate steps (AgentAction, Observation tuples)
# Format them in a particular way
intermediate_steps = kwargs.pop("intermediate_steps")
thoughts = ""
for action, observation in intermediate_steps:
thoughts += action.log
thoughts += f"\nObservation: {observation}\nThought: "
# Set the agent_scratchpad variable to that value
kwargs["agent_scratchpad"] = thoughts
# Create a tools variable from the list of tools provided
kwargs["tools"] = "\n".join([f"{tool.name}: {tool.description}" for tool in self.tools])
# Create a list of tool names for the tools provided
kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
return self.template.format(**kwargs)
prompt = CustomPromptTemplate(
template=template,
tools=tools,
# This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
# This includes the `intermediate_steps` variable because that is needed
input_variables=["input", "intermediate_steps", "history"]
)
# Custom Output Parser. The output parser is responsible for parsing the LLM output into AgentAction and AgentFinish. This usually depends heavily on the prompt used. This is where you can change the parsing to do retries, handle whitespace, etc
class CustomOutputParser(AgentOutputParser):
def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
# Check if agent should finish
if "Final Answer:" in llm_output:
return AgentFinish(
# Return values is generally always a dictionary with a single `output` key
# It is not recommended to try anything else at the moment :)
return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
log=llm_output,
)
# Parse out the action and action input
regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
match = re.search(regex, llm_output, re.DOTALL)
print(match)
if not match:
raise ValueError(f"Could not parse LLM output: `{llm_output}`")
action = match.group(1).strip()
action_input = match.group(2)
# Return the action and action input
return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output)
output_parser = CustomOutputParser()
# Set up LLM !
llm = OpenAI(temperature=0, model_name='gpt-3.5-turbo')
# Define the stop sequence. This is important because it tells the LLM when to stop generation. This depends heavily on the prompt and model you are using. Generally, you want this to be whatever token you use in the prompt to denote the start of an Observation (otherwise, the LLM may hallucinate an observation for you).
# Set up the Agent
# LLM chain consisting of the LLM and a prompt
llm_chain = LLMChain(llm=llm, prompt=prompt)
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
llm_chain=llm_chain,
output_parser=output_parser,
stop=["\nObservation:"],
allowed_tools=tool_names
)
# Agent Executors take an agent and tools and use the agent to decide which tools to call and in what order.
convo_memory = ConversationBufferMemory(
memory_key="chat_history_lines",
input_key="input"
)
summary_memory = ConversationSummaryBufferMemory(llm=llm, memory_key="chat_history", input_key="input")
memory = CombinedMemory(memories=[convo_memory, summary_memory], memory_key="story")
agent_executor = AgentExecutor.from_agent_and_tools(
agent=agent,
tools=tools,
verbose=True,
memory=memory
)
si = input("Human: ")
agent_executor.run({'history': memory, 'input': si})
```
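A hedged diagnosis (an assumption from reading the prompt template, not a confirmed bug): the template says the action "can be one of [{tool_names}]", which nudges the model into emitting `Action: [faq]` with brackets; `CustomOutputParser.parse` then returns that bracketed string verbatim, and `[faq]` doesn't match the registered tool name `faq`. One way to make the parser robust is to normalize the tool name before returning it:
```python
# Hedged sketch: inside CustomOutputParser.parse, strip stray brackets/quotes.
action = match.group(1).strip().strip("[]").strip()
action_input = match.group(2)
return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output)
```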
### Suggestion:
_No response_ | [tool_name] is not a valid tool, try one of [tool_name] | https://api.github.com/repos/langchain-ai/langchain/issues/10240/comments | 6 | 2023-09-05T16:58:01Z | 2024-07-10T16:05:16Z | https://github.com/langchain-ai/langchain/issues/10240 | 1,882,397,741 | 10,240 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hello team,
I'm currently facing an issue while trying to use VespaRetriever with langchain 0.0.281.
Vespa has been successfully deployed in my local environment, and queries are functioning correctly:
```
from langchain.retrievers.vespa_retriever import VespaRetriever
vespa_query_body = {
'yql': 'select * from sources * where userQuery()',
'query': 'what keeps planes in the air',
'ranking': 'native_rank',
'type': 'all',
'hits': 10
}
vespa_content_field = "body"
retriever = VespaRetriever(app=app, body=query, content_field=vespa_content_field)
```
```
---------------------------------------------------------------------------
ConfigError Traceback (most recent call last)
Cell In[34], line 11
3 vespa_query_body = {
4 'yql': 'select * from sources * where userQuery()',
5 'query': 'what keeps planes in the air',
(...)
8 'hits': 10
9 }
10 vespa_content_field = "body"
---> 11 retriever = VespaRetriever(app=app, body=query, content_field=vespa_content_field)
File ~/miniconda3/lib/python3.10/site-packages/langchain/load/serializable.py:75, in Serializable.__init__(self, **kwargs)
74 def __init__(self, **kwargs: Any) -> None:
---> 75 super().__init__(**kwargs)
76 self._lc_kwargs = kwargs
File ~/miniconda3/lib/python3.10/site-packages/pydantic/main.py:339, in pydantic.main.BaseModel.__init__()
File ~/miniconda3/lib/python3.10/site-packages/pydantic/main.py:1076, in pydantic.main.validate_model()
File ~/miniconda3/lib/python3.10/site-packages/pydantic/fields.py:860, in pydantic.fields.ModelField.validate()
ConfigError: field "app" not yet prepared so type is still a ForwardRef, you might need to call VespaRetriever.update_forward_refs().
```
```
from vespa.application import Vespa
VespaRetriever.update_forward_refs()
```
```
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
Cell In[37], line 3
1 from vespa.application import Vespa
----> 3 VespaRetriever.update_forward_refs()
File ~/miniconda3/lib/python3.10/site-packages/pydantic/main.py:815, in pydantic.main.BaseModel.update_forward_refs()
File ~/miniconda3/lib/python3.10/site-packages/pydantic/typing.py:554, in pydantic.typing.update_model_forward_refs()
File ~/miniconda3/lib/python3.10/site-packages/pydantic/typing.py:520, in pydantic.typing.update_field_forward_refs()
File ~/miniconda3/lib/python3.10/site-packages/pydantic/typing.py:66, in pydantic.typing.evaluate_forwardref()
File ~/miniconda3/lib/python3.10/typing.py:694, in ForwardRef._evaluate(self, globalns, localns, recursive_guard)
689 if self.__forward_module__ is not None:
690 globalns = getattr(
691 sys.modules.get(self.__forward_module__, None), '__dict__', globalns
692 )
693 type_ = _type_check(
--> 694 eval(self.__forward_code__, globalns, localns),
695 "Forward references must evaluate to types.",
696 is_argument=self.__forward_is_argument__,
697 allow_special_forms=self.__forward_is_class__,
698 )
699 self.__forward_value__ = _eval_type(
700 type_, globalns, localns, recursive_guard | {self.__forward_arg__}
701 )
702 self.__forward_evaluated__ = True
File <string>:1
NameError: name 'Vespa' is not defined
```
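A hedged workaround that sometimes satisfies pydantic v1 forward refs (the `Vespa` name has to be visible to the resolution, and `update_forward_refs` accepts a local namespace as keyword arguments):
```python
# Hedged sketch: pass the Vespa class into the forward-ref resolution explicitly.
from vespa.application import Vespa
from langchain.retrievers.vespa_retriever import VespaRetriever

VespaRetriever.update_forward_refs(Vespa=Vespa)
```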
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The same example as here: https://python.langchain.com/docs/integrations/retrievers/vespa, but with a local URL that works correctly.
### Expected behavior
Retrieve indexed documents from Vespa. | ConfigError: Field 'app' Not Yet Prepared with ForwardRef Error When Initializing VespaRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/10235/comments | 3 | 2023-09-05T15:07:29Z | 2023-09-14T07:18:23Z | https://github.com/langchain-ai/langchain/issues/10235 | 1,882,171,470 | 10,235
[
"hwchase17",
"langchain"
]
| ### Feature request
Add WatsonX (IBM) connector for LLM
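To make the request concrete, here's a hedged skeleton of what such a connector could look like, using langchain's custom-LLM pattern (`my_watsonx_client` is a placeholder for whatever SDK or REST call actually reaches watsonx.ai):
```python
# Hedged sketch: custom LLM wrapper skeleton; the watsonx call is a placeholder.
from typing import Any, List, Optional

from langchain.llms.base import LLM


class WatsonxLLM(LLM):
    model_id: str = "google/flan-ul2"  # hypothetical default model id

    @property
    def _llm_type(self) -> str:
        return "watsonx"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        # Placeholder: forward the prompt to watsonx.ai and return the generated text.
        return my_watsonx_client.generate(prompt, stop=stop)
```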
### Motivation
Working at IBM, I think it would be great to have it integrated, so langchain can easily be used with watsonx.
### Your contribution
I have implemented and tested a small connector for watsonX | WatsonX LLM support | https://api.github.com/repos/langchain-ai/langchain/issues/10232/comments | 1 | 2023-09-05T14:07:32Z | 2023-09-05T15:48:37Z | https://github.com/langchain-ai/langchain/issues/10232 | 1,882,060,479 | 10,232 |
[
"hwchase17",
"langchain"
]
| ### System Info
All latest versions
### Who can help?
@agola
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The parameter `language="es"` in `OpenAI()` is not working anymore. And I have to present it tomorrow at the university! I can't find any solution for this anywhere.
Pretty simple code that was working perfectly and now doesn't anymore:
`model = OpenAI(temperature=0, language="es")`
Now I'm getting this warning:
```
C:\Users\zaesa\anaconda3\Lib\site-packages\langchain\utils\utils.py:155: UserWarning: WARNING! language is not default parameter.
language was transferred to model_kwargs.
Please confirm that language is what you intended.
  warnings.warn(
```
How to solve it?
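A hedged explanation (based only on the warning text and the OpenAI completions API parameters I know of): `language` is not a parameter of the OpenAI completion endpoint, so langchain now warns and moves unknown kwargs into `model_kwargs`, where the API will ignore or reject them; the Spanish behavior must have come from somewhere else in the stack. Asking for Spanish in the prompt itself, e.g. `model("Responde en español: ...")`, is a safe substitute.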
### Expected behavior
It should just search Spanish pages and answer in Spanish. It was pretty simple, and now it just doesn't work anymore. Any help will be really appreciated. | Language parameter in OpoenAI() is not working anymore! | https://api.github.com/repos/langchain-ai/langchain/issues/10230/comments | 6 | 2023-09-05T13:56:32Z | 2023-12-18T23:47:53Z | https://github.com/langchain-ai/langchain/issues/10230 | 1,882,039,161 | 10,230
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Trying to load the Llama 2 7B model stored on my D drive, but I'm constantly getting errors.
This is my code:
```python
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

MODEL_PATH = r"D:\model.safetensors"  # raw string so the backslash is kept literally


def load_model():
    """Loads Llama model"""
    callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
    Llama_model = LlamaCpp(
        model_path=MODEL_PATH,
        temperature=0.5,
        max_tokens=2000,
        top_p=1,
        callback_manager=callback_manager,
        verbose=True,
    )
    return Llama_model


llm = load_model()
model_prompt = """
Question: What is the largest planet discovered so far?
"""
response = llm(model_prompt)
print(response)
```
This is the error:
```
PS D:\Python Projects\python> python learn.py
gguf_init_from_file: invalid magic number 00029880
error loading model: llama_model_loader: failed to load model from D:\model.safetensors
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
File "D:\Python Projects\python\learn.py", line 24, in <module>
llm = load_model()
File "D:\Python Projects\python\learn.py", line 12, in load_model
Llama_model = LlamaCpp(
File "C:\Users\krish\anaconda3\envs\newlang\lib\site-packages\langchain\load\serializable.py", line 75, in __init__
super().__init__(**kwargs)
File "C:\Users\krish\anaconda3\envs\newlang\lib\site-packages\pydantic\v1\main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for LlamaCpp
__root__
Could not load Llama model from path: D:\model.safetensors. Received error (type=value_error)
```
Please help.
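A hedged reading of the first error line: `gguf_init_from_file: invalid magic number` means llama.cpp was handed a file that isn't GGUF at all. `LlamaCpp` wraps llama.cpp, which loads GGUF (or legacy GGML) files rather than Hugging Face `.safetensors` weights, so pointing `model_path` at a `.safetensors` file can't work; downloading a GGUF conversion of Llama 2 7B and using that path should get past this error.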
### Suggestion:
_No response_ | pydantic.v1.error_wrappers.ValidationError: 1 validation error for LlamaCpp __root__ Could not load Llama model from path: D:\model.safetensors. Received error (type=value_error) | https://api.github.com/repos/langchain-ai/langchain/issues/10226/comments | 7 | 2023-09-05T11:56:03Z | 2024-07-26T16:05:33Z | https://github.com/langchain-ai/langchain/issues/10226 | 1,881,825,625 | 10,226 |
[
"hwchase17",
"langchain"
]
| ### System Info
OS: Ubuntu 22.04.3 LTS
SQLAlchemy==1.4.49
langchain==0.0.281
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.llms import VertexAI
from langchain.sql_database import SQLDatabase
project = "AAAA"
dataset = "HHHH"
sqlalchemy_url = f'bigquery://{project}/{dataset}'
db = SQLDatabase.from_uri(sqlalchemy_url)
llm = VertexAI(
    model_name="text-bison@001",
    max_output_tokens=256,
    temperature=0.1,
    top_p=0.8,
    top_k=40,
    verbose=True,
)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    verbose=True,
    max_execution_time=60,
)
question = "What is the total <numbers> in 2023?"  # replace <numbers> with the target metric
response = agent_executor.run(question)
```
The agent will return "(Background on this error at: https://sqlalche.me/e/14/4xp6) Error: (google.cloud.bigquery.dbapi.exceptions.DatabaseError) 400 Syntax error: Expected end of input but got identifier "Thought" at [4:1]"
And when I check the project history on GCP, I found the agent included the thought and action inside the query, so the executed query looked like this:
```
SELECT SUM(AAA) AS total_AAA, SUM(BBB) AS total_BBB
FROM YOU_CANT_SEE_ME
WHERE Year = 2023
Thought: I should check the query before executing it.
Action: sql_db_query_checker
Action Input:
SELECT SUM(AAA) AS total_AAA, SUM(BBB) AS total_BBB
FROM YOU_CANT_SEE_ME
WHERE Year = 2023
```
It was baffling. Was there anything I did wrong?
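A hedged diagnosis (inferred from the leaked scratchpad, not verified against the source): the ReAct-style SQL agent relies on the LLM stopping at `\nObservation:`, and everything the model emits under "Action Input:" is forwarded verbatim as SQL. If the model (here text-bison) keeps generating its next Thought/Action past that stop sequence, the extra text rides along into BigQuery, producing exactly this `Expected end of input but got identifier "Thought"` error.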
### Expected behavior
It should produce a correct query, execute it, and return a legitimate result. It works fine when I do not specify the year. | create_sql_agent has incorrect behaviour when it queries agaisnt Google BigQuery | https://api.github.com/repos/langchain-ai/langchain/issues/10225/comments | 8 | 2023-09-05T11:35:05Z | 2023-12-19T00:48:43Z | https://github.com/langchain-ai/langchain/issues/10225 | 1,881,790,415 | 10,225
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Is there a feature in langchain that lets us load multiple CSVs with different headers?
Right now, CSVLoader can load only a single CSV (see the sketch below for a workaround).
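A hedged workaround sketch (plain composition; nothing is assumed of langchain beyond `CSVLoader` itself): since each loader returns a list of documents, one loader per file can be concatenated, and differing headers are fine because each row is rendered from its own file's header:
```python
# Hedged sketch: one CSVLoader per file; headers may differ per file.
from langchain.document_loaders import CSVLoader

paths = ["customers.csv", "orders.csv"]  # hypothetical files
docs = []
for path in paths:
    docs.extend(CSVLoader(file_path=path).load())
```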
### Suggestion:
_No response_ | Issue: How can we load multiple CSVs | https://api.github.com/repos/langchain-ai/langchain/issues/10224/comments | 2 | 2023-09-05T11:32:25Z | 2023-12-13T16:06:13Z | https://github.com/langchain-ai/langchain/issues/10224 | 1,881,785,169 | 10,224 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
What is the difference between the Pandas dataframe agent, the CSV agent, and the SQL agent?
Can you briefly describe each and when to use it?
### Suggestion:
_No response_ | What is the difference between Pandas Data frame agent, CSV agent and SQL Agent? | https://api.github.com/repos/langchain-ai/langchain/issues/10223/comments | 5 | 2023-09-05T11:30:53Z | 2024-03-14T08:10:36Z | https://github.com/langchain-ai/langchain/issues/10223 | 1,881,782,758 | 10,223 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm trying to run the local Llama 2 7B version. I've installed llama-cpp-python and all other requirements. When I run the code, I constantly get the error shown below.
Here is my code
```python
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from pydantic import *

MODEL_PATH = "/D:/llama2-7b.bin"


def load_model():
    """Loads Llama model"""
    callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
    Llama_model = LlamaCpp(
        model_path=MODEL_PATH,
        temperature=0.5,
        max_tokens=2000,
        top_p=1,
        callback_manager=callback_manager,
        verbose=True,
    )
    return Llama_model


llm = load_model()
model_prompt = """
Question: What is the largest planet discovered so far?
"""
response = llm(model_prompt)
print(response)
```
This is the error:
```
PS D:\Python Projects\python> python learn.py
Traceback (most recent call last):
File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\pydantic_v1\__init__.py", line 15, in <module>
from pydantic.v1 import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\pydantic\__init__.py", line 3, in <module>
import pydantic_core
File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\pydantic_core\__init__.py", line 6, in <module>
from ._pydantic_core import (
ModuleNotFoundError: No module named 'pydantic_core._pydantic_core'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Python Projects\python\learn.py", line 1, in <module>
from langchain.llms import LlamaCpp
File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\agents\__init__.py", line 31, in <module>
from langchain.agents.agent import (
File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\agents\agent.py", line 14, in <module>
from langchain.agents.agent_iterator import AgentExecutorIterator
File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\agents\agent_iterator.py", line 21, in <module>
from langchain.callbacks.manager import (
File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\callbacks\__init__.py", line 10, in <module>
from langchain.callbacks.aim_callback import AimCallbackHandler
File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\callbacks\aim_callback.py", line 5, in <module>
from langchain.schema import AgentAction, AgentFinish, LLMResult
File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\schema\__init__.py", line 3, in <module>
from langchain.schema.cache import BaseCache
File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\schema\cache.py", line 6, in <module>
from langchain.schema.output import Generation
File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\schema\output.py", line 7, in <module>
from langchain.load.serializable import Serializable
File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\load\serializable.py", line 4, in <module>
from langchain.pydantic_v1 import BaseModel, PrivateAttr
File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\langchain\pydantic_v1\__init__.py", line 17, in <module>
from pydantic import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\pydantic\__init__.py", line 3, in <module>
import pydantic_core
File "C:\Users\krish\anaconda3\envs\lang\Lib\site-packages\pydantic_core\__init__.py", line 6, in <module>
from ._pydantic_core import (
ModuleNotFoundError: No module named 'pydantic_core._pydantic_core'
PS D:\Python Projects\python>
```
I tried reinstalling pydantic, but to no avail. Please help.
Python version: 3.11.4
pip version: 23.2.1
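A hedged diagnosis (inferred from the traceback, not verified): `pydantic_core._pydantic_core` is the compiled extension that ships with pydantic v2, so this looks like a broken or mismatched pydantic install in the `lang` conda env rather than a langchain bug; force-reinstalling the package, e.g. `pip install --force-reinstall pydantic`, usually restores the missing binary module.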
### Suggestion:
_No response_ | from ._pydantic_core import ( ModuleNotFoundError: No module named 'pydantic_core._pydantic_core' | https://api.github.com/repos/langchain-ai/langchain/issues/10222/comments | 1 | 2023-09-05T11:26:34Z | 2023-09-05T11:54:29Z | https://github.com/langchain-ai/langchain/issues/10222 | 1,881,774,642 | 10,222 |
[
"hwchase17",
"langchain"
]
| ### System Info
windows
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import UnstructuredURLLoader
from langchain.chains import LLMRequestsChain
from langchain.chains.llm_requests import DEFAULT_HEADERS
from langchain.requests import TextRequestsWrapper
from bs4 import BeautifulSoup
# loader = UnstructuredURLLoader(
# urls=["https://baijiahao.baidu.com/s?id=1776165472932985664"],
# show_progress_bar=True,
# )
# data = loader.load()
# print(data)
url = "https://baijiahao.baidu.com/s?id=1776165472932985664"
a = TextRequestsWrapper(headers=DEFAULT_HEADERS)
res = a.get(url)
# extract the text from the html
soup = BeautifulSoup(res, "html.parser")
res = soup.get_text()
print(res)
```
I cannot get the content from this URL, while all other URLs are OK.
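A hedged guess at why (not verified): Baijiahao pages appear to render their article body client-side and/or gate on browser-like requests, so a plain GET returns a shell page without the text; a JavaScript-capable loader such as `SeleniumURLLoader` or `PlaywrightURLLoader` from `langchain.document_loaders` may retrieve it where `TextRequestsWrapper` cannot.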
### Expected behavior
Please take a look at this url | Please add support URL parse for BaijiaHao | https://api.github.com/repos/langchain-ai/langchain/issues/10219/comments | 3 | 2023-09-05T09:04:40Z | 2023-12-13T16:06:23Z | https://github.com/langchain-ai/langchain/issues/10219 | 1,881,536,484 | 10,219 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python : v3.10.10
Langchain : v0.0.281
Elasticsearch : v8.9.0
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I was following this documentation https://python.langchain.com/docs/integrations/vectorstores/elasticsearch
my script was
```
import os

from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import ElasticsearchStore

# GENERATE INDEXING
loader = TextLoader("models/state_of_union.txt")
documents = loader.load()

text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()

db = ElasticsearchStore.from_documents(
    docs,
    embeddings,
    es_url="http://localhost:9200",
    index_name="test-basic",
    es_user=os.environ.get("ELASTIC_USERNAME"),
    es_password=os.environ.get("ELASTIC_PASSWORD"),
)
```
but it raises an error when indexing the documents:
```
Created a chunk of size 132, which is longer than the specified 100
Created a chunk of size 107, which is longer than the specified 100
Created a chunk of size 103, which is longer than the specified 100
Created a chunk of size 104, which is longer than the specified 100
Error adding texts: 336 document(s) failed to index.
First error reason: failed to parse
Traceback (most recent call last):
File "D:\Project\elastic-langchain\main.py", line 31, in <module>
db = ElasticsearchStore.from_documents(
File "D:\Project\elastic-langchain\.venv\lib\site-packages\langchain\vectorstores\elasticsearch.py", line 1027, in from_documents
elasticsearchStore.add_documents(documents)
File "D:\Project\elastic-langchain\.venv\lib\site-packages\langchain\vectorstores\base.py", line 101, in add_documents
return self.add_texts(texts, metadatas, **kwargs)
File "D:\Project\elastic-langchain\.venv\lib\site-packages\langchain\vectorstores\elasticsearch.py", line 881, in add_texts
raise e
File "D:\Project\elastic-langchain\.venv\lib\site-packages\langchain\vectorstores\elasticsearch.py", line 868, in add_texts
success, failed = bulk(
File "D:\Project\elastic-langchain\.venv\lib\site-packages\elasticsearch\helpers\actions.py", line 521, in bulk
for ok, item in streaming_bulk(
File "D:\Project\elastic-langchain\.venv\lib\site-packages\elasticsearch\helpers\actions.py", line 436, in streaming_bulk
for data, (ok, info) in zip(
File "D:\Project\elastic-langchain\.venv\lib\site-packages\elasticsearch\helpers\actions.py", line 355, in _process_bulk_chunk
yield from gen
File "D:\Project\elastic-langchain\.venv\lib\site-packages\elasticsearch\helpers\actions.py", line 274, in _process_bulk_chunk_success
raise BulkIndexError(f"{len(errors)} document(s) failed to index.", errors)
elasticsearch.helpers.BulkIndexError: 336 document(s) failed to index.
```
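One possible cause (a guess based on the `failed to parse` reason): the `test-basic` index already exists with a mapping that conflicts with the documents being written. A quick inspect-and-reset sketch, assuming direct access to the cluster:
```python
import os
from elasticsearch import Elasticsearch

es = Elasticsearch(
    "http://localhost:9200",
    basic_auth=(os.environ["ELASTIC_USERNAME"], os.environ["ELASTIC_PASSWORD"]),
)
# A conflicting pre-existing mapping makes every bulk-indexed document
# fail with "failed to parse".
print(es.indices.get_mapping(index="test-basic"))
# Delete the index so ElasticsearchStore can recreate it with its own mapping.
es.indices.delete(index="test-basic")
```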
### Expected behavior
It should index the documents without raising any error. | Error failed to index when using ElasticsearchStore.from_documents | https://api.github.com/repos/langchain-ai/langchain/issues/10218/comments | 19 | 2023-09-05T07:59:38Z | 2024-06-21T03:08:10Z | https://github.com/langchain-ai/langchain/issues/10218 | 1,881,424,933 | 10,218 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```python
import urllib.request
from typing import Any


class AzureMLEndpointClient(object):
    """AzureML Managed Endpoint client."""

    def __init__(
        self, endpoint_url: str, endpoint_api_key: str, deployment_name: str = ""
    ) -> None:
        """Initialize the class."""
        if not endpoint_api_key or not endpoint_url:
            raise ValueError(
                """A key/token and REST endpoint should
                be provided to invoke the endpoint"""
            )
        self.endpoint_url = endpoint_url
        self.endpoint_api_key = endpoint_api_key
        self.deployment_name = deployment_name

    def call(self, body: bytes, **kwargs: Any) -> bytes:
        """call."""
        # The azureml-model-deployment header will force the request to go to a
        # specific deployment. Remove this header to have the request observe the
        # endpoint traffic rules.
        headers = {
            "Content-Type": "application/json",
            "Authorization": ("Bearer " + self.endpoint_api_key),
        }
        if self.deployment_name != "":
            headers["azureml-model-deployment"] = self.deployment_name
        req = urllib.request.Request(self.endpoint_url, body, headers)
        response = urllib.request.urlopen(req, timeout=kwargs.get("timeout", 50))
        result = response.read()
        return result
```
I am using this class to call an AzureML endpoint, and I am not able to pass the timeout as a parameter anywhere in the function call.
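A possible workaround until the wrapper forwards a timeout (a sketch, not an official API — it subclasses the client shown above so every request uses a configurable timeout):
```python
class TimeoutAzureMLEndpointClient(AzureMLEndpointClient):
    """Hypothetical wrapper that pins the request timeout."""

    def __init__(self, endpoint_url, endpoint_api_key, deployment_name="", timeout=120):
        super().__init__(endpoint_url, endpoint_api_key, deployment_name)
        self.timeout = timeout

    def call(self, body, **kwargs):
        kwargs.setdefault("timeout", self.timeout)  # consumed by urlopen above
        return super().call(body, **kwargs)
```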
### Suggestion:
_No response_ | Issue: Timeout parameter in the AzureMLEndpointClient cannot be modified | https://api.github.com/repos/langchain-ai/langchain/issues/10217/comments | 2 | 2023-09-05T07:49:27Z | 2023-12-13T16:06:28Z | https://github.com/langchain-ai/langchain/issues/10217 | 1,881,407,285 | 10,217 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version:0.0271
python version:3.9
transformers:4.30.2
linux
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import ErnieBotChat

model = ErnieBotChat(model="ERNIE-Bot")
tools = load_tools(["llm-math", "wikipedia"], llm=model)
agent = initialize_agent(
    tools,
    model,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
)
result = agent("300的25%是多少?")  # "What is 25% of 300?"
print(result)
```
### Expected behavior
Error:
ValueError: Got unknown type content='Answer the following questions as best you can. You have access to the following tools:\n\nCalculator: Useful for when you need to answer questions about math.\nWikipedia: A wrapper around Wikipedia. Useful for when you need to answer general questions about people, places, companies, facts, historical events, or other subjects. Input should be a search query.\n\nThe way you use the tools is by specifying a json blob.\nSpecifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\n\nThe only values that should be in the "action" field are: Calculator, Wikipedia\n\nThe $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:\n\n```\n{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}\n```\n\nALWAYS use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction:\n```\n$JSON_BLOB\n```\nObservation: the result of the action\n... (this Thought/Action/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin! Reminder to always use the exact characters `Final Answer` when responding.' additional_kwargs={}
When running ErnieBotChat with an agent, the error above occurs. How can I solve this? Thank you! | Errors about ErnieBotChat using agent | https://api.github.com/repos/langchain-ai/langchain/issues/10215/comments | 2 | 2023-09-05T07:28:55Z | 2023-12-13T16:06:33Z | https://github.com/langchain-ai/langchain/issues/10215 | 1,881,374,483 | 10,215 |
[
"hwchase17",
"langchain"
]
| ### System Info
Kaggle notebook
### Who can help?
@agola11 @hw
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I installed:
```
pip install -qq -U langchain tiktoken pypdf chromadb faiss-gpu unstructured openai
pip install -qq -U transformers InstructorEmbedding sentence_transformers pydantic==1.9.0
pip uninstall pydantic-settings
pip uninstall inflect
pip install pydantic-settings
pip install inflect
```
but I am getting this error:
```
PydanticImportError: `BaseSettings` has been moved to the `pydantic-settings` package. See https://docs.pydantic.dev/2.3/migration/#basesettings-has-moved-to-pydantic-settings for more details.
```
Even though I have chromadb installed, I also get:
```
ImportError: Could not import chromadb python package. Please install it with `pip install chromadb`.
```
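A likely cause (an assumption from the two errors above): installing `pydantic-settings` pulls in pydantic v2, while this chromadb/langchain combination still imports `BaseSettings` from pydantic v1. Pinning back often clears both messages:
```
pip uninstall -y pydantic-settings
pip install "pydantic>=1.10,<2" chromadb
```
The full ingestion script, for reference: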
```
from langchain.document_loaders import PyPDFLoader, DirectoryLoader, PDFMinerLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import SentenceTransformerEmbeddings
from langchain.vectorstores import Chroma, FAISS
import os

persist_directory = "db"

def main():
    for root, dirs, files in os.walk("docs"):
        for file in files:
            if file.endswith(".pdf"):
                print(file)
                loader = PyPDFLoader(os.path.join(root, file))
    documents = loader.load()
    print("splitting into chunks")
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)
    texts = text_splitter.split_documents(documents)
    # create embeddings here
    print("Loading sentence transformers model")
    embeddings = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
    # create vector store here
    print(f"Creating embeddings. May take some minutes...")
    db = Chroma.from_documents(texts, embeddings, persist_directory=persist_directory)
    db.persist()
    db = None
    print(f"Ingestion complete! You can now run privateGPT.py to query your documents")

if __name__ == "__main__":
    main()
```
### Expected behavior
The script runs, ingests the documents, and shows results. | Pydantic issue | https://api.github.com/repos/langchain-ai/langchain/issues/10210/comments | 3 | 2023-09-05T05:21:26Z | 2023-09-05T06:14:19Z | https://github.com/langchain-ai/langchain/issues/10210 | 1,881,211,959 | 10,210 |
[
"hwchase17",
"langchain"
]
| ### System Info
0.0.274
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
def _load(self) -> None:
    """Load the collection if available."""
    from pymilvus import Collection

    if isinstance(self.col, Collection) and self._get_index() is not None:
        self.col.load(timeout=5)
```
### Expected behavior
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/milvus.py#L387C3-L387C3
<img width="563" alt="ab96a8450969eaeae4eb91cf582828cf" src="https://github.com/langchain-ai/langchain/assets/16218592/2976b0de-fc41-46aa-88eb-fd807e3b6b57">
<img width="350" alt="26f720d9fa837900f91c8ff59d6fd815" src="https://github.com/langchain-ai/langchain/assets/16218592/740e0f7d-da60-4903-8d46-30e5e6c3faa2">
<img width="576" alt="02eee59006291866c52eda92921c9281" src="https://github.com/langchain-ai/langchain/assets/16218592/4de8f6dd-b154-4407-9153-b860a1f9a332">
| langchain/vectorstores/milvus.py _load function need timeout parameter | https://api.github.com/repos/langchain-ai/langchain/issues/10207/comments | 4 | 2023-09-05T03:46:17Z | 2024-02-08T04:20:22Z | https://github.com/langchain-ai/langchain/issues/10207 | 1,881,133,733 | 10,207 |
[
"hwchase17",
"langchain"
]
| I have the following code (see below). I have two prompts. One works fine (p1), and the other (p2) throws the following error (complete error below):
`OutputParserException: Could not parse LLM output: I don't know how to answer the question because I don't have access to the casos_perfilados_P2 table.`
`p1 : Select the fields linea, nivel, and genero, where genero contains the value F and the nse field contains the value D2. Limit the number of records to 3. Build a pandas dataframe with the previous result. `
`p2 : Select the fields linea and fecha, where the fecha field is greater than or equal to 2023-02-18 and less than or equal to 2023-02-24. Group and sum the values of the linea field by the fecha field. Build a pandas dataframe with the previous result. `
Why does p2 hit "access" issues while p1 doesn't? Any clues?
```python
# google
import vertexai

# Alchemy
from sqlalchemy import *
from sqlalchemy.schema import *

# Langchain
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.callbacks import StreamlitCallbackHandler
from langchain.llms import VertexAI
from langchain.agents.agent_types import AgentType
from langchain.agents import initialize_agent
from langchain.tools.python.tool import PythonREPLTool
from langchain.agents.agent_toolkits import create_python_agent
from langchain.tools import Tool

# Streamlit
import streamlit as st

# Settings
PROJECT_ID = "xxxxxxx"
REGION = "xxxxxxx"
dataset = "xxxxxxx"

# Initialize Vertex AI SDK
vertexai.init(project=PROJECT_ID, location=REGION)

# BQ db
sqlalchemy_url = f'bigquery://{PROJECT_ID}/{dataset}'
db = SQLDatabase.from_uri(sqlalchemy_url)

# llm
llm = VertexAI(
    model_name="text-bison@001",
    max_output_tokens=1024,
    temperature=0,
    top_p=0.8,
    top_k=40,
    verbose=True
)

# SQL Agent
sql_agent = create_sql_agent(
    llm=llm,
    toolkit=SQLDatabaseToolkit(db=db, llm=llm),
    verbose=True,
    top_k=1000,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)

# Python Agent
python_agent = create_python_agent(
    llm,
    tool=PythonREPLTool(),
    verbose=True
)

# Main Agent
agent = initialize_agent(
    tools=[
        Tool(
            name="SQLAgent",
            func=sql_agent.run,
            description="""Useful to execute sql commands""",
        ),
        Tool(
            name="PythonAgent",
            func=python_agent.run,
            description="""Useful to run python commands""",
        ),
    ],
    llm=llm,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True,
)

if prompt := st.chat_input():
    st.chat_message("user").write(prompt)
    with st.chat_message("assistant"):
        st_callback = StreamlitCallbackHandler(st.container())
        response = agent.run(prompt)
        st.write(response)
```
```bash
> Entering new AgentExecutor chain...
Action: sql_db_list_tables
Action Input:
Observation: casos_perfilados_P2
Thought:2023-09-04 19:23:34.050 Uncaught app exception
Traceback (most recent call last):
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/home/flow/Documentos/genai/chat1.py", line 91, in <module>
response = agent.run(prompt)
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 475, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 282, in __call__
raise e
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 276, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/agents/agent.py", line 1036, in _call
next_step_output = self._take_next_step(
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/agents/agent.py", line 891, in _take_next_step
observation = tool.run(
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/tools/base.py", line 351, in run
raise e
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/tools/base.py", line 323, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/tools/base.py", line 493, in _run
self.func(
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 475, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 282, in __call__
raise e
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 276, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/agents/agent.py", line 1036, in _call
next_step_output = self._take_next_step(
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/agents/agent.py", line 844, in _take_next_step
raise e
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/agents/agent.py", line 833, in _take_next_step
output = self.agent.plan(
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/agents/agent.py", line 457, in plan
return self.output_parser.parse(full_output)
File "/home/flow/anaconda3/envs/llm/lib/python3.10/site-packages/langchain/agents/mrkl/output_parser.py", line 52, in parse
raise OutputParserException(
langchain.schema.output_parser.OutputParserException: Could not parse LLM output: `I don't know how to answer the question because I don't have access to the casos_perfilados_P2 table.`
```
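One mitigation worth trying (a guess, not a confirmed fix): the inner `sql_agent` never sets `handle_parsing_errors`, so any non-conforming LLM output inside it raises `OutputParserException`. `create_sql_agent` accepts executor kwargs for this:
```python
sql_agent = create_sql_agent(
    llm=llm,
    toolkit=SQLDatabaseToolkit(db=db, llm=llm),
    verbose=True,
    top_k=1000,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    agent_executor_kwargs={"handle_parsing_errors": True},
)
```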
| Issue: SQLDatabaseToolkit inconsistency | https://api.github.com/repos/langchain-ai/langchain/issues/10205/comments | 4 | 2023-09-05T02:15:33Z | 2023-12-07T02:53:55Z | https://github.com/langchain-ai/langchain/issues/10205 | 1,881,063,737 | 10,205 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain 0.0.281
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use these lines of code:
```
from langchain.llms import Replicate
from langchain.chains import create_extraction_chain

schema = {
    "properties": {
        "visit": {"type": "string"},
        "date": {"type": "string"},
        "gender": {"type": "string"},
        "age": {"type": "integer"},
    }
}

inp = """This 23-year-old white female presents with complaint of allergies.
She used to have allergies when she lived in Seattle but she thinks they are worse here.
In the past, she has tried Claritin, and Zyrtec. Both worked for short time but then seemed to lose effectiveness. """

llm = Replicate(
    model="a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5",
    input={"temperature": 0.75, "max_length": 500, "top_p": 1},
)

chain = create_extraction_chain(schema, llm)
chain.run(inp)
```
---------------------------------------------------------------------------
```
OutputParserException Traceback (most recent call last)
[<ipython-input-9-5e77f11609b2>](https://localhost:8080/#) in <cell line: 72>()
70 )
71 chain = create_extraction_chain(schema, llm)
---> 72 chain.run(inp)["data"]
8 frames
[/usr/local/lib/python3.10/dist-packages/langchain/output_parsers/openai_functions.py](https://localhost:8080/#) in parse_result(self, result)
21 generation = result[0]
22 if not isinstance(generation, ChatGeneration):
---> 23 raise OutputParserException(
24 "This output parser can only be used with a chat generation."
25 )
OutputParserException: This output parser can only be used with a chat generation.
```
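A workaround that satisfies the parser (a sketch: `create_extraction_chain` builds on OpenAI function calling, so it expects a function-calling chat model rather than a plain `Replicate` LLM):
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import create_extraction_chain

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
chain = create_extraction_chain(schema, llm)
print(chain.run(inp))
```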
### Expected behavior
Structured JSON based on the schema. | create_extraction_chain does not work with other LLMs? Replicate models fails to load | https://api.github.com/repos/langchain-ai/langchain/issues/10201/comments | 24 | 2023-09-05T00:08:26Z | 2024-05-06T07:34:01Z | https://github.com/langchain-ai/langchain/issues/10201 | 1,880,981,812 | 10,201 |
[
"hwchase17",
"langchain"
]
| ### System Info
0.0.274
### Who can help?
@hwchase17 @agola11 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.llms import VertexAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
import os
PROMPT= """
You are a customer service assistant.
Your task is to respond to user questions.
Start by greeting the user and introducing yourself as BotAssistant, and end with a polite closing. If you don't know the answer, suggest contacting customer service at 3232. Always conclude with: "I hope I've answered your request
Query: """
llm_model=VertexAI(
model_name="text-bison@001",
max_output_tokens=1024,
temperature=0.1,
top_p=0.8,
top_k=40,
verbose=True,
)
conversation = ConversationChain(
llm=llm_model,
verbose=True,
memory=ConversationBufferMemory(),
)
qst="I have an issue with my order; I received the wrong item. What should I do?"
conversation.predict(input=PROMPT+qst)
```
### Expected behavior
I want to develop an LLM application that acts as a customer assistant (for e-commerce), responding to user queries using a JSON dataset containing question–answer pairs, possibly incorporating PDF documents of terms and conditions and offer descriptions. How can I effectively use Retrieval-Augmented Generation to address this challenge? Is fine-tuning a recommended approach?
Sometimes the answers to queries are indirect, and they may include links to previously provided answers on the same topic. Do you think a graph representation of the question–answer pairs dataset is relevant?
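As a starting point, a minimal retrieval sketch over the QA pairs (assumptions: the pairs live in a hypothetical `qa_pairs.json` list of `{"question", "answer"}` objects, and `llm_model` is the VertexAI model defined above):
```python
import json

from langchain.embeddings import VertexAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA

pairs = json.load(open("qa_pairs.json"))
texts = [f"Q: {p['question']}\nA: {p['answer']}" for p in pairs]
store = FAISS.from_texts(texts, VertexAIEmbeddings())

qa = RetrievalQA.from_chain_type(llm=llm_model, retriever=store.as_retriever())
print(qa.run("I received the wrong item. What should I do?"))
```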
Any help or links would be appreciated. | How to develop an efficient question answering solution acting as a customer assistant based on RAG or Fine tuning? | https://api.github.com/repos/langchain-ai/langchain/issues/10188/comments | 4 | 2023-09-04T15:44:40Z | 2024-02-10T16:18:02Z | https://github.com/langchain-ai/langchain/issues/10188 | 1,880,533,324 | 10,188 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Adopt [Classy-fire](https://github.com/microsoft/classy-fire)
### Motivation
See the link above for the benefits of the approach.
### Your contribution
I can adapt classy-fire for easier integration if requested. | Adopt Microsoft's Classy-fire classification approach | https://api.github.com/repos/langchain-ai/langchain/issues/10187/comments | 3 | 2023-09-04T14:28:14Z | 2023-12-13T16:06:43Z | https://github.com/langchain-ai/langchain/issues/10187 | 1,880,407,689 | 10,187 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
After PR [#8612](https://github.com/langchain-ai/langchain/pull/8612), access to [RedisVectorStoreRetriever](https://github.com/langchain-ai/langchain/blob/27944cb611ee8face34fbe764c83e37841f96eb7/libs/langchain/langchain/vectorstores/redis/base.py#L1293) has been removed
### Suggestion:
Include **RedisVectorStoreRetriever** import in [redis/__init__.py](https://github.com/langchain-ai/langchain/blob/27944cb611ee8face34fbe764c83e37841f96eb7/libs/langchain/langchain/vectorstores/redis/__init__.py) on line 1
Current: `from .base import Redis`
Suggested update: `from .base import Redis, RedisVectorStoreRetriever`
| Issue: RedisVectorStoreRetriever not accessible | https://api.github.com/repos/langchain-ai/langchain/issues/10186/comments | 4 | 2023-09-04T14:21:34Z | 2023-09-12T22:29:54Z | https://github.com/langchain-ai/langchain/issues/10186 | 1,880,395,414 | 10,186 |
[
"hwchase17",
"langchain"
]
| I think it'd be better if there were a flag in ConversationalRetrievalQAChain() that lets us skip the question-rephrasing chain before generation. Can this be considered as an issue and dealt with accordingly?
_Originally posted by @AshminJayson in https://github.com/langchain-ai/langchain/issues/4076#issuecomment-1705339045_
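On the Python side, a related knob worth checking (hedged — availability depends on your langchain version): `ConversationalRetrievalChain` exposes a `rephrase_question` flag that keeps the user's original question downstream, although the condensing LLM call still runs for retrieval:
```python
qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever,
    rephrase_question=False,  # pass the original question to the answer step
)
```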
| Add option to disable question augmentation in ConversationalRetrievalQAChain | https://api.github.com/repos/langchain-ai/langchain/issues/10185/comments | 1 | 2023-09-04T14:11:55Z | 2023-12-11T16:04:43Z | https://github.com/langchain-ai/langchain/issues/10185 | 1,880,378,119 | 10,185 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
#1. Open a terminal and install the Bedrock-specific boto3 SDK build and langchain:
```bash
curl -sS https://d2eo22ngex1n9g.cloudfront.net/Documentation/SDK/bedrock-python-sdk.zip > sdk.zip
sudo yum install unzip -y
unzip sdk.zip -d sdk
pip install --no-build-isolation --force-reinstall ./sdk/awscli-*-py3-none-any.whl ./sdk/boto3-*-py3-none-any.whl ./sdk/botocore-*-py3-none-any.whl
pip install --quiet langchain==0.0.249
# pip install 'jupyter-ai>=1.0,<2.0'  # If you use JupyterLab 3
# pip install jupyter-ai              # If you use JupyterLab 4
```
#2. Change the default token count to 1024:
```bash
vi ~/anaconda3/lib/python3.11/site-packages/langchain/llms/sagemaker_endpoint.py
```
Insert the lines below after `body = self.content_handler.transform_input(prompt, _model_kwargs)`:
```python
parameters = {"max_new_tokens": 1024, "top_p": 0.9, "temperature": 0.6, "return_full_text": True}
t = json.loads(body)
t["parameters"] = parameters
body = json.dumps(t)
```
Insert the line `CustomAttributes='accept_eula=true',` between `Accept=accepts,` and `**_endpoint_kwargs,`.
#3. Configure the AWS default profile, making sure the access key/secret key has enough permissions (SageMakerFullAccess):
```bash
aws configure
```
#4. Run `%%ai` in a *.ipynb file on EC2 (it also works in VS Code) instead of a SageMaker notebook instance / SageMaker Studio, after making sure your Amazon SageMaker endpoint is healthy:
```
%load_ext jupyter_ai
```
```
%%ai sagemaker-endpoint:jumpstart-dft-meta-textgeneration-llama-2-7b --region-name=us-east-1 --request-schema={"inputs":"<prompt>"} --response-path=[0]['generation']
write something on Humor
```
### Suggestion:
_No response_ | Issue: How to configure Amazon SageMaker endpoint | https://api.github.com/repos/langchain-ai/langchain/issues/10184/comments | 3 | 2023-09-04T14:11:27Z | 2023-12-25T16:08:45Z | https://github.com/langchain-ai/langchain/issues/10184 | 1,880,377,259 | 10,184 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
how to configure Amazon Bedrock step by step:
Answers:
#1. Open a terminal and install the Bedrock-specific boto3 SDK build and langchain:
```bash
curl -sS https://d2eo22ngex1n9g.cloudfront.net/Documentation/SDK/bedrock-python-sdk.zip > sdk.zip
sudo yum install unzip -y
unzip sdk.zip -d sdk
pip install --no-build-isolation --force-reinstall ./sdk/awscli-*-py3-none-any.whl ./sdk/boto3-*-py3-none-any.whl ./sdk/botocore-*-py3-none-any.whl
pip install --quiet langchain==0.0.249
# pip install 'jupyter-ai>=1.0,<2.0'  # If you use JupyterLab 3
# pip install jupyter-ai              # If you use JupyterLab 4
```
#2. Change the default token count to 2048 in:
```bash
vi ~/anaconda3/lib/python3.11/site-packages/langchain/llms/bedrock.py
```
by changing this line:
```python
input_body["max_tokens_to_sample"] = 2048
```
#3. Configure the AWS default profile, making sure the access key/secret key has enough permissions (BedrockFullAccess):
```bash
aws configure
```
#4. Run `%%ai` in a *.ipynb file on EC2 or a local machine (it also works in VS Code) instead of a SageMaker notebook instance / SageMaker Studio:
```
%load_ext jupyter_ai
```
```
%%ai bedrock:anthropic.claude-v2
Write something about Amazon
```
### Suggestion:
_No response_ | Issue: how to configure Amazon Bedrock | https://api.github.com/repos/langchain-ai/langchain/issues/10182/comments | 3 | 2023-09-04T14:09:32Z | 2023-12-13T16:06:47Z | https://github.com/langchain-ai/langchain/issues/10182 | 1,880,373,791 | 10,182 |
[
"hwchase17",
"langchain"
]
| ### System Info
torch = "2.0.1"
transformers = "4.31.0"
langchain= "0.0.251"
kor= "0.13.0"
openai= "0.27.8"
pydantic= "1.10.8"
python_version = "3.9"
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from db.tools import DbTools
from langchain.chat_models import ChatOpenAI
from kor.nodes import Object, Text, Number
from kor import create_extraction_chain, Object, Text

llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0,
    max_tokens=2000,
    frequency_penalty=0,
    presence_penalty=0,
    openai_api_key="",
    top_p=1.0,
)
```
### Expected behavior
Hello, I encountered the following error when trying to import the langchain.schema module:
```
Traceback (most recent call last):
File "/home/alpha/platform/shared/libpython/computation/fine_instruction_GPT.py", line 2, in <module>
from langchain.chat_models import ChatOpenAI
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/agents/__init__.py", line 31, in <module>
from langchain.agents.agent import (
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/agents/agent.py", line 15, in <module>
from langchain.agents.agent_iterator import AgentExecutorIterator
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/agents/agent_iterator.py", line 21, in <module>
from langchain.callbacks.manager import (
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/callbacks/__init__.py", line 10, in <module>
from langchain.callbacks.aim_callback import AimCallbackHandler
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/callbacks/aim_callback.py", line 5, in <module>
from langchain.schema import AgentAction, AgentFinish, LLMResult
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/schema/__init__.py", line 4, in <module>
from langchain.schema.memory import BaseChatMessageHistory, BaseMemory
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/schema/memory.py", line 7, in <module>
from langchain.schema.messages import AIMessage, BaseMessage, HumanMessage
File "/home/alpha/.local/share/virtualenvs/platform-_bj8LfuO/lib/python3.9/site-packages/langchain/schema/messages.py", line 147, in <module>
class HumanMessageChunk(HumanMessage, BaseMessageChunk):
File "pydantic/main.py", line 367, in pydantic.main.ModelMetaclass.__new__
File "/usr/lib/python3.9/abc.py", line 85, in __new__
cls = super().__new__(mcls, name, bases, namespace, **kwargs)
TypeError: multiple bases have instance lay-out conflict
```
| Multiple bases have instance lay-out conflict | https://api.github.com/repos/langchain-ai/langchain/issues/10179/comments | 7 | 2023-09-04T12:03:49Z | 2023-12-06T08:21:24Z | https://github.com/langchain-ai/langchain/issues/10179 | 1,880,134,640 | 10,179 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.279
### Who can help?
@agola11
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
def ask_local_vector_db(question):
    # old docsearch_db
    docs = docsearch_db.similarity_search(question, k=10)
    pretty_print_docs(docs)
    print("**************************************************")
    cleaned_matches = []
    total_tokens = 0
    # print(docs)
    for context in docs:
        cleaned_context = context.page_content.replace('\n', ' ').strip()
        cleaned_context = f"{cleaned_context}"
        tokens = tokenizers.encode(cleaned_context, add_special_tokens=False)
        if total_tokens + len(tokens) <= (1536 * 8):
            cleaned_matches.append(cleaned_context)
            total_tokens += len(tokens)
        else:
            break
    # Combine the cleaned matches into a single string
    combined_text = " ".join(cleaned_matches)
    answer = local_chain.predict(combined_text=combined_text, human_input=question)
    return answer


# Build the tool list
tools = [
    Tool(
        name="Google_Search",
        func=GoogleSerper_search.run,
        description="""
        If the local vector database says it cannot find an answer, you can use
        the internet search engine tool to look up information and try to find
        the answer directly. Ask very targeted, precise questions.
        """,
    ),
    Tool(
        name="Local_Search",
        func=ask_local_vector_db,
        description="""
        You can first try to find the answer in the local vector knowledge base.
        Ask very targeted, precise questions.
        """,
    ),
]

agent_kwargs = {
    "extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")],
}
memory = ConversationBufferMemory(memory_key="memory", return_messages=True)

# Initialize the agent
agent_open_functions = initialize_agent(
    tools,
    llm=llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
    agent_kwargs=agent_kwargs,
    memory=memory,
    max_iterations=10,
    early_stopping_method="generate",
    handle_parsing_errors=True,  # handle parsing errors instead of raising
    callbacks=[handler],
)
```
### Expected behavior
```
Please enter your question (plain text); enter n on a new line to finish:
本地搜索gpt  (local search: gpt)
n
===============Thinking===================
> Entering new AgentExecutor chain...
Invoking: `Local_Search` with `gpt`
An error occurred: not enough values to unpack (expected 2, got 1)
Please enter your question (plain text); enter n on a new line to finish:
```
| An error occurred: not enough values to unpack (expected 2, got 1) | https://api.github.com/repos/langchain-ai/langchain/issues/10178/comments | 2 | 2023-09-04T11:59:08Z | 2023-12-11T16:04:53Z | https://github.com/langchain-ai/langchain/issues/10178 | 1,880,127,009 | 10,178 |
[
"hwchase17",
"langchain"
]
| ### System Info
- LangChain: 0.0.279
- Python: 3.11.4
- Platform: Linux and MacOS
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.schema.messages import ChatMessageChunk
message = ChatMessageChunk(role="User", content="I am") + ChatMessageChunk(role="User", content=" indeed.")
```
Here is the error info:
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[2], line 1
----> 1 message = ChatMessageChunk(role="User", content="I am") + ChatMessageChunk(role="User", content=" indeed.")
File ~/pyenv/venv/lib/python3.11/site-packages/langchain/schema/messages.py:120, in BaseMessageChunk.__add__(self, other)
115 def __add__(self, other: Any) -> BaseMessageChunk: # type: ignore
116 if isinstance(other, BaseMessageChunk):
117 # If both are (subclasses of) BaseMessageChunk,
118 # concat into a single BaseMessageChunk
--> 120 return self.__class__(
121 content=self.content + other.content,
122 additional_kwargs=self._merge_kwargs_dict(
123 self.additional_kwargs, other.additional_kwargs
124 ),
125 )
126 else:
127 raise TypeError(
128 'unsupported operand type(s) for +: "'
129 f"{self.__class__.__name__}"
130 f'" and "{other.__class__.__name__}"'
131 )
File ~/pyenv/venv/lib/python3.11/site-packages/langchain/load/serializable.py:74, in Serializable.__init__(self, **kwargs)
73 def __init__(self, **kwargs: Any) -> None:
---> 74 super().__init__(**kwargs)
75 self._lc_kwargs = kwargs
File ~/pyenv/venv/lib/python3.11/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for ChatMessageChunk
role
field required (type=value_error.missing
```
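A temporary workaround until `__add__` forwards the role (a sketch — it merges the chunks manually and re-supplies the role):
```python
first = ChatMessageChunk(role="User", content="I am")
second = ChatMessageChunk(role="User", content=" indeed.")
merged = ChatMessageChunk(role=first.role, content=first.content + second.content)
print(merged)
```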
### Expected behavior
Expected output:
```python
ChatMessageChunk(content='I am indeed.', additional_kwargs={}, role='User')
``` | ChatMessageChunk concat error | https://api.github.com/repos/langchain-ai/langchain/issues/10173/comments | 0 | 2023-09-04T08:22:02Z | 2023-10-27T02:15:23Z | https://github.com/langchain-ai/langchain/issues/10173 | 1,879,764,637 | 10,173 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
url = "https://www.wsj.com"
loader = RecursiveUrlLoader(url=url, max_depth=2, extractor=lambda x: Soup(x, "html.parser").text)
docs = loader.load()
```
Docs don't have any data related to `url`. The problem is related to DFS in the codebase. It doesn't handle the root case.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
To reproduce the issue, execute the provided code snippet.
### Expected behavior
RecursiveUrlLoader should include the given `url`'s content along with its children. | RecursiveUrlLoader doesn't include root URL content | https://api.github.com/repos/langchain-ai/langchain/issues/10172/comments | 2 | 2023-09-04T08:12:04Z | 2023-12-11T16:04:58Z | https://github.com/langchain-ai/langchain/issues/10172 | 1,879,747,922 | 10,172 |
[
"hwchase17",
"langchain"
]
| Hi team,
I am using get_openai_callback to fetch the total token usage for an agent.
Is there any way to get the token usage for each tool?
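One approach that may work (a sketch — it wraps each tool's function in its own callback context; `qa` stands for whatever OpenAI-backed chain the tool calls):
```python
from langchain.callbacks import get_openai_callback

def tracked_tool(question):
    with get_openai_callback() as cb:
        answer = qa.run(question)
    print(f"tool tokens: {cb.total_tokens}")  # per-call usage for this tool
    return answer
```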
Thanks | Can I get the token usage for each tool in agent? | https://api.github.com/repos/langchain-ai/langchain/issues/10170/comments | 4 | 2023-09-04T07:47:46Z | 2024-04-29T14:32:04Z | https://github.com/langchain-ai/langchain/issues/10170 | 1,879,709,967 | 10,170 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I would like to output the answer in a specific language (e.g. Chinese) when using an agent. But when I tried to do that with the code below, it gave me the error ```OutputParserException: Could not parse LLM output:```
```
# retrieval qa chain
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vector_store.as_retriever(),
)

from langchain.agents import Tool

tools = [
    Tool(
        name='Knowledge Base',
        func=qa.run,
        description=(
            '<some description>'
        ),
    )
]

from langchain.agents import initialize_agent

agent = initialize_agent(
    agent='chat-conversational-react-description',
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3,
    early_stopping_method='generate',
    memory=conversational_memory,
)

prompt_prefix = f"""请用中文回答"""  # "Please answer in Chinese"
agent.agent.llm_chain.prompt += prompt_prefix

query = "<some question>"
agent(query)
```
Does anyone know how to add a custom prompt to the agent to enforce the output language?
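One pattern that may work better than mutating `agent.agent.llm_chain.prompt` (a sketch — the conversational chat agent accepts a custom system message through `agent_kwargs`):
```python
agent = initialize_agent(
    agent='chat-conversational-react-description',
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3,
    early_stopping_method='generate',
    memory=conversational_memory,
    agent_kwargs={"system_message": "You are a helpful assistant. Always answer in Chinese (请用中文回答)."},
)
```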
### Suggestion:
_No response_ | How can I append suffix to the prompt when I am using agents to control the output language | https://api.github.com/repos/langchain-ai/langchain/issues/10161/comments | 2 | 2023-09-04T04:09:02Z | 2023-12-11T16:05:08Z | https://github.com/langchain-ai/langchain/issues/10161 | 1,879,470,162 | 10,161 |
[
"hwchase17",
"langchain"
]
| ### System Info
Windows 10
langchain 0.0.279
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings

model_name = "BAAI/bge-small-en"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': False}
hf = HuggingFaceBgeEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs,
)
```
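If it helps triage (a guess: a stale or duplicate install may be shadowing the reported version), check which langchain the interpreter actually loads:
```python
import langchain
print(langchain.__version__, langchain.__file__)
```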
### Expected behavior

`from langchain.embeddings import HuggingFaceBgeEmbeddings` should work, but it couldn't find HuggingFaceBgeEmbeddings. | HuggingFaceBgeEmbeddings error | https://api.github.com/repos/langchain-ai/langchain/issues/10159/comments | 2 | 2023-09-04T02:12:44Z | 2023-09-04T02:26:11Z | https://github.com/langchain-ai/langchain/issues/10159 | 1,879,386,202 | 10,159 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
_No response_
### Suggestion:
_No response_ | why not found ernie bot from baidu in llms package? Thanks | https://api.github.com/repos/langchain-ai/langchain/issues/10150/comments | 2 | 2023-09-03T15:08:29Z | 2023-09-04T02:48:52Z | https://github.com/langchain-ai/langchain/issues/10150 | 1,879,131,847 | 10,150 |
[
"hwchase17",
"langchain"
]
| How can I stream the OpenAI response in DRF? | Issue: DRF response streaming | https://api.github.com/repos/langchain-ai/langchain/issues/10143/comments | 3 | 2023-09-03T09:47:09Z | 2024-03-16T16:04:31Z | https://github.com/langchain-ai/langchain/issues/10143 | 1,879,029,116 | 10,143 |
[
"hwchase17",
"langchain"
]
| ### Feature request
[Graph Of Thoughts](https://arxiv.org/pdf/2308.09687.pdf) looks promising.
Is it possible to implement it with LangChain?
### Motivation
A more performant prompting technique.
### Your contribution
I can help with documentation. | Graph Of Thoughts | https://api.github.com/repos/langchain-ai/langchain/issues/10137/comments | 5 | 2023-09-02T23:14:01Z | 2024-01-30T16:17:48Z | https://github.com/langchain-ai/langchain/issues/10137 | 1,878,874,363 | 10,137 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
TimeoutError: [Errno 60] Operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connection.py", line 358, in connect
self.sock = conn = self._new_conn()
^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connection.py", line 179, in _new_conn
raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x159e351d0>, 'Connection to openaipublic.blob.core.windows.net timed out. (connect timeout=None)')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/adapters.py", line 489, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 787, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x159e351d0>, 'Connection to openaipublic.blob.core.windows.net timed out. (connect timeout=None)'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/fsndzomga/Downloads/react-mega/LCEL2.py", line 29, in <module>
vectorstore = Chroma.from_texts([obama_text], embedding=OpenAIEmbeddings())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 576, in from_texts
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/vectorstores/chroma.py", line 186, in add_texts
embeddings = self._embedding_function.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 478, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/embeddings/openai.py", line 331, in _get_len_safe_embeddings
encoding = tiktoken.encoding_for_model(model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tiktoken/model.py", line 75, in encoding_for_model
return get_encoding(encoding_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tiktoken/registry.py", line 63, in get_encoding
enc = Encoding(**constructor())
^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tiktoken_ext/openai_public.py", line 64, in cl100k_base
mergeable_ranks = load_tiktoken_bpe(
^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tiktoken/load.py", line 114, in load_tiktoken_bpe
contents = read_file_cached(tiktoken_bpe_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tiktoken/load.py", line 46, in read_file_cached
contents = read_file(blobpath)
^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/tiktoken/load.py", line 24, in read_file
return requests.get(blobpath).content
^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/adapters.py", line 553, in send
raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x159e351d0>, 'Connection to openaipublic.blob.core.windows.net timed out. (connect timeout=None)'))
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is my code:
```python
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema.runnable import RunnablePassthrough
from langchain.schema.output_parser import StrOutputParser
from langchain.prompts import ChatPromptTemplate
from langchain.chat_models import ChatOpenAI
from operator import itemgetter
from apikey import OPENAI_API_KEY
import os

# Set the OpenAI API key
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY

# Initialize the ChatOpenAI model
model = ChatOpenAI()

# Create a long text about Barack Obama to serve as the context
obama_text = """
Barack Obama served as the 44th President of the United States from 2009 to 2017.
He was born in Honolulu, Hawaii, on August 4, 1961. Obama is a graduate of Columbia University
and Harvard Law School, where he served as president of the Harvard Law Review. He was a community
organizer in Chicago before earning his law degree and worked as a civil rights attorney and taught
constitutional law at the University of Chicago Law School between 1992 and 2004. He served three
terms representing the 13th District in the Illinois Senate from 1997 until 2004, when he ran for the
U.S. Senate. Obama received the Nobel Peace Prize in 2009.
"""

# Create the retriever with the Obama text as the context
vectorstore = Chroma.from_texts([obama_text], embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever()

# Define the prompt template
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

# Create the chain for answering questions
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

# Invoke the chain to answer a question
print(chain.invoke("When was Barack Obama born?"))

# Create a new prompt template that allows for translation
template_with_language = """Answer the question based only on the following context:
{context}
Question: {question}
Answer in the following language: {language}
"""
prompt_with_language = ChatPromptTemplate.from_template(template_with_language)

# Create the chain for answering questions in different languages
chain_with_language = {
    "context": itemgetter("question") | retriever,
    "question": itemgetter("question"),
    "language": itemgetter("language"),
} | prompt_with_language | model | StrOutputParser()

# Invoke the chain to answer a question in Italian
print(chain_with_language.invoke({"question": "When was Barack Obama born?", "language": "italian"}))
```
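For context (an inference from the traceback, not a confirmed diagnosis): the failure happens before any OpenAI call — tiktoken downloads its `cl100k_base` encoding file from `openaipublic.blob.core.windows.net` on first use, and that host appears unreachable from this machine. If you are behind a proxy, something like this may help:
```python
import os

# Hypothetical mitigation: route outbound HTTPS through your proxy so
# tiktoken can fetch its encoding file on first use.
os.environ["HTTPS_PROXY"] = "http://your-proxy:port"
```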
### Expected behavior
a response to my question | Connection Timeout and Max Retries Exceeded in HTTPS Request | https://api.github.com/repos/langchain-ai/langchain/issues/10135/comments | 1 | 2023-09-02T22:18:19Z | 2023-09-03T19:47:46Z | https://github.com/langchain-ai/langchain/issues/10135 | 1,878,862,756 | 10,135 |
[
"hwchase17",
"langchain"
]
| ### System Info
Latest
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use ConversationalRetrievalChain with an Azure OpenAI gpt model
This is the code:
```python
llm = AzureOpenAI(engine=OPENAI_DEPLOYMENT_NAME, model=OPENAI_MODEL, temperature=0.0)
qa = ConversationalRetrievalChain.from_llm(llm, retriever)
```
I get the following error:
```
usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for LLMChain
llm
Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)
```
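Two things that may be worth trying (guesses, not confirmed fixes): this abstract-class error often comes from two different langchain versions in the same environment, so a clean reinstall can help; and for a chat deployment the chat wrapper is usually the right class:
```python
from langchain.chat_models import AzureChatOpenAI

llm = AzureChatOpenAI(deployment_name=OPENAI_DEPLOYMENT_NAME, temperature=0.0)
qa = ConversationalRetrievalChain.from_llm(llm, retriever)
```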
### Expected behavior
The code executes without any error | ConversationalRetrievalChain doesn't work with Azure OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/10128/comments | 4 | 2023-09-02T15:34:17Z | 2024-02-18T23:59:07Z | https://github.com/langchain-ai/langchain/issues/10128 | 1,878,736,357 | 10,128 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.279 / Langchain experimental 0.0.12 / Python 3.10
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Follow the guideline: https://python.langchain.com/docs/guides/privacy/presidio_data_anonymization
I was able to make it work by downloading the langchain-experimental source and copying the missing folder into my local lib folder.
However, this isn't a good option when building a Docker image. Why isn't it part of the experimental library 0.0.12?
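A likely fix (an assumption: the `data_anonymizer` module shipped in a release newer than 0.0.12):
```
pip install -U langchain-experimental
```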
### Expected behavior
The example code works. | ModuleNotFoundError: No module named 'langchain_experimental.data_anonymizer' | https://api.github.com/repos/langchain-ai/langchain/issues/10126/comments | 8 | 2023-09-02T14:16:42Z | 2024-02-13T05:16:02Z | https://github.com/langchain-ai/langchain/issues/10126 | 1,878,707,467 | 10,126 |
[
"hwchase17",
"langchain"
]
| ### Feature request
With the newly announced [support for streaming in SageMaker Endpoints](https://aws.amazon.com/blogs/machine-learning/elevating-the-generative-ai-experience-introducing-streaming-support-in-amazon-sagemaker-hosting/), LangChain can add a streaming capability to the `SagemakerEndpoint` class.
We can leverage the code that already exists as part of the blog post and extend where required by using the [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker-runtime/client/invoke_endpoint_with_response_stream.html) API
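A minimal sketch of the underlying boto3 call such an integration could wrap (endpoint name and payload are placeholders; requires a recent boto3):
```python
import boto3

client = boto3.client("sagemaker-runtime")
resp = client.invoke_endpoint_with_response_stream(
    EndpointName="my-endpoint",        # placeholder
    Body=b'{"inputs": "Hello"}',
    ContentType="application/json",
)
# The response body is an event stream of PayloadPart chunks.
for event in resp["Body"]:
    chunk = event.get("PayloadPart", {}).get("Bytes", b"")
    print(chunk.decode("utf-8"), end="")
```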
### Motivation
Decreasing latency is important for developers. Streaming is currently available in `OpenAI`, `ChatOpenAI`, `ChatAnthropic`, `Hugging Face Text Generation Inference`, and `Replicate`. Supporting `SageMaker` opens these possibilities to the wide range of developers who leverage this AWS service.
### Your contribution
I can work on this feature to start the process with guidance from the core development team and the community. | SageMaker Endpoints with Streaming Capability | https://api.github.com/repos/langchain-ai/langchain/issues/10125/comments | 3 | 2023-09-02T13:37:32Z | 2024-02-13T16:12:54Z | https://github.com/langchain-ai/langchain/issues/10125 | 1,878,693,225 | 10,125 |
[
"hwchase17",
"langchain"
]
| ### Feature request
**Issue Description:**
I would like to propose the addition of a new Agent called something like 'ConverseUntilAnswered.' This new agent would engage in multi-step interactions with users to obtain an answer according to a required schema, and self-terminate when an answer is obtained.
**Example Use Case:**
Let's consider a scenario where a chatbot needs to collect user satisfaction ratings for its website. The 'ConverseUntilAnswered' agent could be used to facilitate this interaction. Perhaps this is achieved by the LangChain class injecting System messages to ChatGPT at specific times. Consider in the following dialogue that all System messages are injected by the LangChain class.
1. **System Message:** Your task is to ask the user how satisfied they are with this website on a scale from 1 to 10, where 10 is the highest. You are limited to a maximum of 2 interactions with the user. If you are reasonably certain of the user’s score, you are permitted to guess their answer. If you are unable to determine a numerical score, give a rating of “NA”. When you are ready to give a score, reply in the following format: ```ANSWER:<integer score>``` for example, for an answer of 5, you would reply ```ANSWER:5``` and add nothing else to your response.
2. **Chatbot:** On a scale from 1 to 10, how satisfied are you with this website?
3. **User:** Oh, it’s OK.
4. **Chatbot:** It sounds like you are moderately satisfied. Would you say your satisfaction is a 5 out of 10?
5. **User:** Maybe, it depends on the day.
6. **System Message:** You have used 2 of your 2 available interactions with the user. Please do not interact with the user any further; simply state your final response in the format previously specified.
7. **Chatbot:** ```ANSWER:NA```
By introducing a 'ConverseUntilAnswered' agent, we make it possible to develop an application that conducts a lengthy interview over a series of questions as defined by the programmer. For example:
1. Ask the customer their satisfaction with the website (1-10 scale)
2. Ask whether the customer is likely to recommend this website to their friends (Why or Why not)
3. Ask what feature they would like to see added to this website.
### Motivation
**Justification:**
There is a need for specialized chatbot functionality that can conduct a multi-step conversation and self-terminate once a satisfactory response is obtained. Currently, the only way I know to do this is to equip an agent with a tool that it calls when the task is complete, but there is no guarantee the agent will ever call that tool, leaving the chatbot open to a lengthy, never-ending conversation with the user. A specialized 'ConverseUntilAnswered' agent would minimize calls to the LLM API and ensure closure of a line of inquiry with the user.
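As a rough illustration only (every name here is hypothetical, not a proposed API), the control flow might look like:
```python
import re

ANSWER_RE = re.compile(r"ANSWER:(\S+)")

def converse_until_answered(llm, ask_user, task_prompt, max_turns=2):
    """Hypothetical loop: converse until the LLM emits ANSWER:<value>, or force one."""
    messages = [{"role": "system", "content": task_prompt}]
    for _ in range(max_turns):
        reply = llm(messages)
        if (match := ANSWER_RE.search(reply)):
            return match.group(1)
        user_input = ask_user(reply)  # show the LLM's question, collect the user's reply
        messages += [
            {"role": "assistant", "content": reply},
            {"role": "user", "content": user_input},
        ]
    # Out of turns: inject a system message demanding the final answer
    messages.append({"role": "system", "content": "Reply only with ANSWER:<value> now."})
    match = ANSWER_RE.search(llm(messages))
    return match.group(1) if match else "NA"
```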
### Your contribution
I have contributed to an open-source project only once before, and this would be my first contribution to LangChain. I humbly ask for the community's guidance on the best approach to implementation using existing LangChain features, and on what else should be considered before opening a pull request.
| Proposal for 'ConverseUntilAnswered' Agent | https://api.github.com/repos/langchain-ai/langchain/issues/10122/comments | 2 | 2023-09-02T08:56:43Z | 2023-12-09T16:04:16Z | https://github.com/langchain-ai/langchain/issues/10122 | 1,878,495,416 | 10,122 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
There is no documentation about the `SQLRecordManager` in the [langchain.indexes.base.RecordManager](https://api.python.langchain.com/en/latest/indexes/langchain.indexes.base.RecordManager.html#langchain.indexes.base.RecordManager) documentation, although it is used in an indexing example [here](https://python.langchain.com/docs/modules/data_connection/indexing#quickstart).
### Idea or request for content:
Please include the documentation for `SQLRecordManager`, as well as other supported databases that may be currently available to use as a Record Manager. | DOC: Inexistent API documentation about SQLRecordManager | https://api.github.com/repos/langchain-ai/langchain/issues/10120/comments | 2 | 2023-09-02T02:38:30Z | 2023-12-30T16:06:39Z | https://github.com/langchain-ai/langchain/issues/10120 | 1,878,286,644 | 10,120 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version: 0.0.276
Python version: 3.11.2
Platform: x86_64 Debian 12.2.0-14
Weaviate as vectorstore
SQLite for document index
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Followed the provided [Indexing directions](https://python.langchain.com/docs/modules/data_connection/indexing) of LangChain's documentation.
```
import os, time, json, weaviate, openai
from langchain.vectorstores import Weaviate
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.indexes import SQLRecordManager, index
from langchain.text_splitter import CharacterTextSplitter
from langchain.schema import Document
from datetime import datetime
VECTORS_INDEX_NAME = 'LaborIA_vectors_TEST'
COLLECTION_NAME = 'LaborIA_docs_TEST'
NAMESPACE = f"weaviate/{COLLECTION_NAME}"
record_manager = SQLRecordManager(NAMESPACE, db_url="sqlite:///record_manager_cache.sql")
record_manager.create_schema()

# (assumed: the vectorstore construction was omitted from the original report)
weaviate_client = weaviate.Client("http://localhost:8080")
weaviate_vectorstore = Weaviate(
    weaviate_client, VECTORS_INDEX_NAME, "text", embedding=OpenAIEmbeddings()
)

def _clear():
    """Hacky helper method to clear content. See the `full` mode section to understand why it works."""
    index([], record_manager, weaviate_vectorstore, cleanup="full", source_id_key="source")
_clear()
```
Results in the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[60], line 1
----> 1 _clear()
Cell In[59], line 3, in _clear()
1 def _clear():
2 """Hacky helper method to clear content. See the `full` mode section to to understand why it works."""
----> 3 index([], record_manager, weaviate_vectorstore, cleanup="full", source_id_key="source")
TypeError: index() got an unexpected keyword argument 'cleanup'
```
Declaring the `_clear()` function with either the `cleanup="incremental"` or `cleanup=None` deletion mode results in the same `TypeError`. It shows no errors if the `cleanup` parameter is removed entirely.
Will `index` execute in deletion mode `None` (as specified [here](https://python.langchain.com/docs/modules/data_connection/indexing#none-deletion-mode)) if the `cleanup` parameter is not present?
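For what it's worth, a quick way to check whether the installed `index` actually accepts `cleanup` (my guess is a version mismatch between the docs and my installed package):
```python
import inspect
from langchain.indexes import index

print(inspect.signature(index))  # should list a `cleanup` parameter on recent versions
```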
### Expected behavior
No errors when the `cleanup` mode parameter is set when `index` is called. | Indexing deletion modes not working | https://api.github.com/repos/langchain-ai/langchain/issues/10118/comments | 3 | 2023-09-02T02:31:04Z | 2023-12-09T16:04:21Z | https://github.com/langchain-ai/langchain/issues/10118 | 1,878,277,419 | 10,118 |
[
"hwchase17",
"langchain"
]
| ### System Info
```sh
pydantic==2.3.0
langchain==0.0.279
```
### Who can help?
@agola11, @hwchase17: I'm having trouble figuring out how to use LLMs as attributes in Pydantic 2 models. I keep getting this confusing error, which I cannot troubleshoot.
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
import os
os.environ['OPENAI_API_KEY'] = 'foo'
from pydantic import BaseModel
from langchain.base_language import BaseLanguageModel
from langchain.llms import OpenAI
class Foo(BaseModel):
llm: BaseLanguageModel = OpenAI()
llm = OpenAI()
# Works
Foo()
# Fails
Foo(llm=llm)
```
Error:
```sh
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[1], line 14
12 Foo()
13 # Fails
---> 14 Foo(llm=llm)
File ~/miniconda3/envs/chain/lib/python3.11/site-packages/pydantic/main.py:165, in BaseModel.__init__(__pydantic_self__, **data)
163 # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
164 __tracebackhide__ = True
--> 165 __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
TypeError: BaseModel.validate() takes 2 positional arguments but 3 were given
```
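A workaround that appears to sidestep this (an assumption on my part: LangChain validates fields against its bundled pydantic v1, so the model should be declared with that same v1):
```python
# Workaround sketch: declare the model with langchain's bundled pydantic v1
from langchain.pydantic_v1 import BaseModel as V1BaseModel

class FooV1(V1BaseModel):
    llm: BaseLanguageModel

FooV1(llm=OpenAI())  # validated by the same pydantic v1 that langchain uses internally
```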
### Expected behavior
Be able to set the field without validation errors | Cannot have models with BaseLanguageModel in pydantic 2: TypeError: BaseModel.validate() takes 2 positional arguments but 3 were given | https://api.github.com/repos/langchain-ai/langchain/issues/10112/comments | 3 | 2023-09-01T22:33:12Z | 2023-11-16T14:53:22Z | https://github.com/langchain-ai/langchain/issues/10112 | 1,878,116,654 | 10,112 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
File Python\Python39\site-packages\langchain\vectorstores\pinecone.py:301, in Pinecone.max_marginal_relevance_search(self, query, k, fetch_k, lambda_mult, filter,
namespace, **kwargs)
284 """Return docs selected using the maximal marginal relevance.
285
286 Maximal marginal relevance optimizes for similarity to query AND diversity
(...)
298 List of Documents selected by maximal marginal relevance.
299 """
300 embedding = self._embed_query(query)
--> 301 return self.max_marginal_relevance_search_by_vector(
302 embedding, k, fetch_k, lambda_mult, filter, namespace
303 )
File Python\Python39\site-packages\langchain\vectorstores\pinecone.py:269, in Pinecone.max_marginal_relevance_search_by_vector(self, embedding, k, fetch_k, lambda_mult, filter, namespace, **kwargs)
262 mmr_selected = maximal_marginal_relevance(
263 np.array([embedding], dtype=np.float32),
264 [item["values"] for item in results["matches"]],
265 k=k,
266 lambda_mult=lambda_mult,
267 )
268 selected = [results["matches"][i]["metadata"] for i in mmr_selected]
--> 269 return [
270 Document(page_content=metadata.pop((self._text_key)), metadata=metadata)
271 for metadata in selected
272 ]
File Python\Python39\site-packages\langchain\vectorstores\pinecone.py:270, in <listcomp>(.0)
262 mmr_selected = maximal_marginal_relevance(
263 np.array([embedding], dtype=np.float32),
264 [item["values"] for item in results["matches"]],
265 k=k,
266 lambda_mult=lambda_mult,
267 )
268 selected = [results["matches"][i]["metadata"] for i in mmr_selected]
269 return [
--> 270 Document(page_content=metadata.pop((self._text_key)), metadata=metadata)
271 for metadata in selected
272 ]
File Python\Python39\site-packages\langchain\load\serializable.py:74, in Serializable.__init__(self, **kwargs)
73 def __init__(self, **kwargs: Any) -> None:
---> 74 super().__init__(**kwargs)
75 self._lc_kwargs = kwargs
File Python\Python39\site-packages\pydantic\v1\main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for Document
page_content
str type expected (type=type_error.str)
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Connect to a Pinecone vector store
```
vectorstore = Pinecone(pinecone_index, embedding_model, "text")
```
2. Use MMR search to query the vector store. Here, my vectorstore **does not have any information about Bill Gates**.
```
query = "Who is Bill Gates?"
res = vectorstore.max_marginal_relevance_search(
query=query,
k=4,
fetch_k=20,
lambda_mult=0.5
)
```
3. Got the error
By the way, here is the [API doc](https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html#langchain.vectorstores.pinecone.Pinecone.max_marginal_relevance_search) for the `max_marginal_relevance_search`.
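If helpful, a hedged sketch of a defensive guard for `max_marginal_relevance_search_by_vector` (just an illustration of the idea, not a tested patch):
```python
# Skip matches whose metadata is missing or lacks a string under the text key
selected = [results["matches"][i].get("metadata") or {} for i in mmr_selected]
return [
    Document(page_content=metadata.pop(self._text_key), metadata=metadata)
    for metadata in selected
    if isinstance(metadata.get(self._text_key), str)
]
```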
### Expected behavior
It should return an empty result (no matching context) instead of raising a `ValidationError`. | Got ValidationError when searching a content does not exist in the Pinecone vector store using Langchain Pinecone connection | https://api.github.com/repos/langchain-ai/langchain/issues/10111/comments | 3 | 2023-09-01T22:05:26Z | 2023-12-18T23:48:02Z | https://github.com/langchain-ai/langchain/issues/10111 | 1,878,096,535 | 10,111 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.277
Python 3.9
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.agents import create_sql_agent, initialize_agent, create_spark_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit, SparkSQLToolkit
from langchain.sql_database import SQLDatabase
from langchain_experimental.sql.base import SQLDatabaseChain
from langchain.llms import HuggingFacePipeline
from langchain.agents import AgentExecutor
from langchain.agents.agent_types import AgentType
from langchain.chat_models import ChatOpenAI
from snowflake.sqlalchemy import URL
from sqlalchemy import create_engine
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "Photolens/llama-2-13b-langchain-chat"
user = "***"
password = "***"
account = "**-**"
database = "SNOWFLAKE_SAMPLE_DATA"
schema = "***"
warehouse = "***"

def load_model():
    model_id = "Photolens/llama-2-13b-langchain-chat"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map='auto',
        low_cpu_mem_usage=True,
        trust_remote_code=True,
    )
    pipe = pipeline(
        "text-generation",
        model=model,
        tokenizer=tokenizer,
        max_length=1100,
        repetition_penalty=1.15,
        top_p=0.95,
        temperature=0.2,
        pad_token_id=tokenizer.eos_token_id,
        max_new_tokens=300,
    )
    local_llm = HuggingFacePipeline(pipeline=pipe)
    return local_llm

LLM = load_model()

engine = create_engine(URL(
    user=user,
    password=password,
    account=account,
    database=database,
    schema=schema,
    warehouse=warehouse,
))

db = SQLDatabase(engine)
# here comes the problem: SQLDatabase issues a query that Snowflake rejects
```
### Expected behavior
I'm expecting `SQLDatabase` to establish the connection, but instead it issues a query that Snowflake rejects, and I don't understand why. I'm new at this, so I would appreciate some help.
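One thing I plan to try, based on the `SQLDatabase` signature (an assumption on my part; the schema name is taken from the connection string shown in the traceback below):
```python
# Possible workaround sketch: restrict reflection to a single schema
db = SQLDatabase(engine, schema="TPCH_SF1")
```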
This is the error I get:
/home/zeusone/anaconda3/envs/snowflake_ai/bin/python /home/zeusone/Documents/ChatbotFalcon/SQLagent/snoflake_simple_agent.py
/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/options.py:96: UserWarning: You have an incompatible version of 'pyarrow' installed (13.0.0), please install a version that adheres to: 'pyarrow<8.1.0,>=8.0.0; extra == "pandas"'
warn_incompatible_dep(
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:15<00:00, 5.20s/it]
Traceback (most recent call last):
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1910, in _execute_context
self.dialect.do_execute(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/cursor.py", line 827, in execute
Error.errorhandler_wrapper(self.connection, self, error_class, errvalue)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/errors.py", line 275, in errorhandler_wrapper
handed_over = Error.hand_to_other_handler(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/errors.py", line 330, in hand_to_other_handler
cursor.errorhandler(connection, cursor, error_class, error_value)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/errors.py", line 209, in default_errorhandler
raise error_class(
snowflake.connector.errors.ProgrammingError: 001059 (22023): SQL compilation error:
Must specify the full search path starting from database for SNOWFLAKE_SAMPLE_DATA
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/zeusone/Documents/ChatbotFalcon/SQLagent/snoflake_simple_agent.py", line 56, in <module>
db = SQLDatabase(engine)#"snowflake://ADRIANOCABRERA:Semilla_1@EKKFOPI-YK08475/SNOWFLAKE_SAMPLE_DATA/TPCH-SF1?warehouse=COMPUTE_WH")
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/langchain/utilities/sql_database.py", line 111, in __init__
self._metadata.reflect(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 4901, in reflect
Table(name, self, **reflect_opts)
File "<string>", line 2, in __new__
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/util/deprecations.py", line 375, in warned
return fn(*args, **kwargs)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 619, in __new__
metadata._remove_table(name, schema)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 614, in __new__
table._init(name, metadata, *args, **kw)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 689, in _init
self._autoload(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 724, in _autoload
conn_insp.reflect_table(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 774, in reflect_table
for col_d in self.get_columns(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 497, in get_columns
col_defs = self.dialect.get_columns(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/sqlalchemy/snowdialect.py", line 669, in get_columns
schema_columns = self._get_schema_columns(connection, schema, **kw)
File "<string>", line 2, in _get_schema_columns
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 55, in cache
ret = fn(self, con, *args, **kw)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/sqlalchemy/snowdialect.py", line 479, in _get_schema_columns
schema_primary_keys = self._get_schema_primary_keys(
File "<string>", line 2, in _get_schema_primary_keys
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 55, in cache
ret = fn(self, con, *args, **kw)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/sqlalchemy/snowdialect.py", line 323, in _get_schema_primary_keys
result = connection.execute(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1385, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection
return connection._execute_clauseelement(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1577, in _execute_clauseelement
ret = self._execute_context(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1953, in _execute_context
self._handle_dbapi_exception(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2134, in _handle_dbapi_exception
util.raise_(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1910, in _execute_context
self.dialect.do_execute(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/cursor.py", line 827, in execute
Error.errorhandler_wrapper(self.connection, self, error_class, errvalue)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/errors.py", line 275, in errorhandler_wrapper
handed_over = Error.hand_to_other_handler(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/errors.py", line 330, in hand_to_other_handler
cursor.errorhandler(connection, cursor, error_class, error_value)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/errors.py", line 209, in default_errorhandler
raise error_class(
sqlalchemy.exc.ProgrammingError: (snowflake.connector.errors.ProgrammingError) 001059 (22023): SQL compilation error:
Must specify the full search path starting from database for SNOWFLAKE_SAMPLE_DATA
[SQL: SHOW /* sqlalchemy:_get_schema_primary_keys */PRIMARY KEYS IN SCHEMA snowflake_sample_data]
(Background on this error at: https://sqlalche.me/e/14/f405) | Using SQLDatabase with Llama 2 for snowflake connection, i get ProgramingError | https://api.github.com/repos/langchain-ai/langchain/issues/10106/comments | 2 | 2023-09-01T19:03:24Z | 2023-12-08T16:04:15Z | https://github.com/langchain-ai/langchain/issues/10106 | 1,877,924,878 | 10,106 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have created an agent using `ConversationalChatAgent` that uses custom tools (`ConversationalRetrievalQA`-based) to answer users' questions. When determining which tool to use, the agent sometimes strips the question down to a single word. For example, a question like `What is XYZ?` is reduced to
```
{
action: tool_name,
action_input: XYZ
}
```
How can I change the behavior to include the full question in this scenario?
Expected behavior:
```
{
action: tool_name,
action_input: What is XYZ? or Define XYZ?
}
```
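One hedged approach I'm considering (the `system_message` override comes from the `from_llm_and_tools` signature; the wording is illustrative):
```python
from langchain.agents import ConversationalChatAgent

agent = ConversationalChatAgent.from_llm_and_tools(
    llm=llm,
    tools=tools,
    system_message=(
        "Assistant is a helpful agent. When using a tool, always pass the user's "
        "complete question as action_input, never a single keyword."
    ),
)
```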
### Suggestion:
_No response_ | Issue: ConversationalChatAgent reduces the user question question to single word action_input when parsing | https://api.github.com/repos/langchain-ai/langchain/issues/10100/comments | 2 | 2023-09-01T17:01:58Z | 2023-12-08T16:04:20Z | https://github.com/langchain-ai/langchain/issues/10100 | 1,877,764,117 | 10,100 |
[
"hwchase17",
"langchain"
]
| ### System Info
python 3.9 langchain 0.0.250
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Follow the steps in [Access Intermediate Steps](https://github.com/langchain-ai/langchain/blob/master/docs/extras/modules/agents/how_to/intermediate_steps.ipynb) within the Agent "How To".
When converting the steps to json:
print(json.dumps(response["intermediate_steps"], indent=2))
This raises the error:
TypeError: Object of type AgentAction is not JSON serializable
### Expected behavior
This issue is similar to the one raised in #8815.
However the bot answer is not satisfying as using
```
from langchain.load.dump import dumps
print(dumps(response["intermediate_steps"], pretty=True))
```
will not serialize the `AgentAction`
I can propose a `__json__()` function to correct this the lib json-fix or either inherit the class from dict | Parsing intermediate steps: Object of type AgentAction is not JSON serializable | https://api.github.com/repos/langchain-ai/langchain/issues/10099/comments | 3 | 2023-09-01T16:30:39Z | 2023-12-18T23:48:07Z | https://github.com/langchain-ai/langchain/issues/10099 | 1,877,718,213 | 10,099 |
[
"hwchase17",
"langchain"
]
| ### System Info
I have 2000 document present at my opensearch index, i have filter out the 2 documents present from this opensearch index and added to newly created index. After that i am trying to use vector_db.similarity_search(request.query,k=2) on the newly created index, its returning me empty list.
below are the code
Code for index creation:
updated_mapping = {"mappings":{"properties": {"vector_field": {"type": "knn_vector","dimension": 1536,"method": {"engine": "nmslib","space_type": "l2","name": "hnsw","parameters": {"ef_construction": 512,"m": 16}}}}}}
opensearch_client.indices.create(index="temp_check878999", body=updated_mapping)
Code to update the index with filtered document:
sea=["1460210.pdf",'P-Reality-X Manuscript_Draft 1_17Feb22 (PAL1144).pdf']
for i in sea:
query={"query":{
"match":{
"metadata.filename":f"*{i}"
}
}}
print(query)
rest=opensearch_client.search(index="lang_demo",body=query)
create=rest['hits']['hits']
for hit in create:
sr=hit['_source']
doc_id=hit['_id']
opensearch_client.index(index="temp_check878999",id=doc_id,body=sr)
Langchain simillarity search code i am using on newly created index
from langchain.vectorstores import OpenSearchVectorSearch
vector_db = OpenSearchVectorSearch(
index_name="temp_check878999",
embedding_function=embed_wrapper(engine="text-embedding-ada-002"),
opensearch_url=*****,
http_auth=(******, *****),
is_aoss=False,
)
vector_db.similarity_search("star",k=2)
Quick reply will be very much helpful
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
No idea
### Expected behavior
It should return me the simmilarity search text with newly created index. Please note its working fine on the index where i had used vector_db.addtext(text). This issue is if i create the new open search index with opensearch client then on that index the simmilarity search one is not working. | vector_db.similarity_search(request.query,k=2) Not working with the opensearch index | https://api.github.com/repos/langchain-ai/langchain/issues/10089/comments | 6 | 2023-09-01T10:37:55Z | 2023-12-11T16:05:23Z | https://github.com/langchain-ai/langchain/issues/10089 | 1,877,181,409 | 10,089 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.278.
I try to publish a pull request in the project experimental. I must update the dependencies of langchain, but I receive a lot of errors.
See [here](https://github.com/langchain-ai/langchain/actions/runs/6047844508/job/16412056269?pr=7278), outside of my code.
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
In the libs/experimental/pyproject.toml, change:
langchain = ">=0.0.278"
then
poetry run mypy .
### Expected behavior
No error | autonomous_agents : poetry run mypy . with experimental fails with langchain version 0.0.278 | https://api.github.com/repos/langchain-ai/langchain/issues/10088/comments | 3 | 2023-09-01T09:51:26Z | 2023-09-19T08:28:37Z | https://github.com/langchain-ai/langchain/issues/10088 | 1,877,113,181 | 10,088 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain: 0.0.278
python: 3.10
windows10
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I wrote with reference to[ this link](https://python.langchain.com/docs/use_cases/web_scraping#asynchtmlloader), and the code is as follows:
```python
from langchain.document_loaders import `AsyncHtmlLoader`
urls = ['https://python.langchain.com/docs/use_cases/web_scraping#asynchtmlloader']
loader = AsyncHtmlLoader(urls)
doc = loader.load()
print(doc)
```
Return the following error after running:
```
Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x0000023EFFD45900>
Traceback (most recent call last):
File "C:\Users\97994\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 116, in __del__
self.close()
File "C:\Users\97994\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 108, in close
self._loop.call_soon(self._call_connection_lost, None)
File "C:\Users\97994\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 750, in call_soon
self._check_closed()
File "C:\Users\97994\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 515, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
```
### Expected behavior
None | When I call the 'loader()' function of AsyncHtmlLoader, I receive an 'Event loop is closed' error after it completes execution. | https://api.github.com/repos/langchain-ai/langchain/issues/10086/comments | 2 | 2023-09-01T08:31:58Z | 2023-12-08T16:04:30Z | https://github.com/langchain-ai/langchain/issues/10086 | 1,876,993,301 | 10,086 |
[
"hwchase17",
"langchain"
]
| ```
text = "foo _bar_ baz_ 123"
separator = "_"
text_splitter = CharacterTextSplitter(
chunk_size=4,
chunk_overlap=0,
separator="_",
keep_separator=True,
)
print(text_splitter.split_text(text))
```
RETURNS:
`['foo', '_bar', '_ baz', '_ 123']`
EXPECTED:
`['foo ', '_bar', '_ baz', '_ 123']`
^see whitespace next to `foo`
https://github.com/langchain-ai/langchain/blame/324c86acd5be9bc9d5b6dd248d686bdbb2c11cdc/libs/langchain/langchain/text_splitter.py#L155 removes all whitespace from the text. I can't figure out the purpose of this line. | Text splitting with keep_separator is True still removes any whitespace, even if separator is whitespace | https://api.github.com/repos/langchain-ai/langchain/issues/10085/comments | 4 | 2023-09-01T08:14:34Z | 2023-09-08T02:01:40Z | https://github.com/langchain-ai/langchain/issues/10085 | 1,876,967,752 | 10,085 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using chroma db as retriever in ConversationalRetrievalChain, but the parameter "where_document" does not work.
```python
search_kwargs = {
"k": k,
"filter": filter,
"where_document": {"$contains": "1000001"}
}
retriever = vectordb.as_retriever(
search_kwargs=search_kwargs
)
```
In chroma official site [chroma](https://docs.trychroma.com/usage-guide), it says:
Chroma supports filtering queries by metadata and document contents. The where filter is used to filter by metadata, and the where_document filter is used to filter by document contents.
### Suggestion:
can ConversationalRetrievalChain support where_document filter for chroma db? | Issue: chroma retriever where_document parameter passed in search_kwargs is invalid | https://api.github.com/repos/langchain-ai/langchain/issues/10082/comments | 3 | 2023-09-01T07:52:13Z | 2024-03-17T16:04:11Z | https://github.com/langchain-ai/langchain/issues/10082 | 1,876,932,057 | 10,082 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have uploaded my files to Vectara and would now love to query the corpus with LangChain. However, I can only find examples that upload documents and then query them in the same run. I would like to avoid re-uploading the documents each time and simply query the existing corpus directly. Is this possible?
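What I've been trying (a guess based on the constructor signature; the credential placeholders are mine):
```python
from langchain.vectorstores import Vectara

# Attach to an already-populated corpus instead of uploading documents first
vectara = Vectara(
    vectara_customer_id="<customer-id>",
    vectara_corpus_id="<corpus-id>",
    vectara_api_key="<api-key>",
)
docs = vectara.similarity_search("my question", k=5)
```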
Thank you so much!
Regards
### Suggestion:
_No response_ | Vectara query a already uploaded Corpus | https://api.github.com/repos/langchain-ai/langchain/issues/10081/comments | 1 | 2023-09-01T07:50:13Z | 2023-12-08T16:04:35Z | https://github.com/langchain-ai/langchain/issues/10081 | 1,876,929,360 | 10,081 |
[
"hwchase17",
"langchain"
]
| [code pointer](https://github.com/langchain-ai/langchain/blob/74fcfed4e2bdd186c2869a07008175a9b66b1ed4/libs/langchain/langchain/tools/base.py#L588C16-L588C16)
In `langchain.tools.base`, change
```python
class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return super().ainvoke(input, config, **kwargs)
```
to
```python
class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return await super().ainvoke(input, config, **kwargs)
``` | StructuredTool ainvoke isn't await parent class ainvoke | https://api.github.com/repos/langchain-ai/langchain/issues/10080/comments | 0 | 2023-09-01T07:36:50Z | 2023-09-08T02:54:54Z | https://github.com/langchain-ai/langchain/issues/10080 | 1,876,911,576 | 10,080 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I need to use the Python REPL tool to take a DataFrame and a user query, and answer the query based on the DataFrame. A hedged sketch of what I have in mind is below.
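For context, one option that seems to fit (an assumption on my part: the pandas DataFrame agent wraps a Python REPL over the frame; the file name is a placeholder):
```python
import pandas as pd
from langchain.llms import OpenAI
from langchain_experimental.agents import create_pandas_dataframe_agent

df = pd.read_csv("data.csv")  # placeholder dataset
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("How many rows have a value greater than 100?")
```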
### Suggestion:
_No response_ | how to use PythonREPL tool to take dataframe and query | https://api.github.com/repos/langchain-ai/langchain/issues/10079/comments | 3 | 2023-09-01T05:51:51Z | 2023-12-08T16:04:40Z | https://github.com/langchain-ai/langchain/issues/10079 | 1,876,788,507 | 10,079 |
[
"hwchase17",
"langchain"
]
| ### System Info
The in-memory buffer is pruned after saving via `.pop(0)`. However, db-backed histories re-read the stored messages and copy them into a fresh list each turn, so the underlying database is never pruned. As a result, the `max_token_limit` parameter is effectively ignored and the memory returns the entire conversation as history.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Initialize redis chat history
use redis chat history as chat_history for ConversationSummaryBufferMemory
set max_token_limit to 1.
Print history at every turn.
Still prints the entire history
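A hedged sketch of the reproduction (the Redis URL and session id are placeholders):
```python
from langchain.llms import OpenAI
from langchain.memory import ConversationSummaryBufferMemory
from langchain.memory.chat_message_histories import RedisChatMessageHistory

history = RedisChatMessageHistory(session_id="test", url="redis://localhost:6379/0")
memory = ConversationSummaryBufferMemory(
    llm=OpenAI(), chat_memory=history, max_token_limit=1
)
memory.save_context({"input": "hi"}, {"output": "hello"})
print(memory.load_memory_variables({}))  # still returns the full Redis-backed history
```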
### Expected behavior
Initialize redis chat history
use redis chat history as chat_history for ConversationSummaryBufferMemory
set max_token_limit to 1.
Print history at every turn.
Print (almost) nothing, since max_token_limit = 1
[
"hwchase17",
"langchain"
]
| ### System Info
Regarding Elasticsearch authentication: you have implemented it this way in the elasticsearch.py file found in the vectorstores folder:
```py
if api_key:
connection_params["api_key"] = api_key
elif username and password:
connection_params["basic_auth"] = (username, password)
```
but I think it should be this way:
```py
if api_key:
connection_params["api_key"] = api_key
elif username and password:
connection_params["http_auth"] = (username, password)
```
With that change, the authentication succeeds. The only change is replacing `connection_params["basic_auth"] = (username, password)` with `connection_params["http_auth"] = (username, password)`. (I suspect this may be client-version dependent: `http_auth` is the elasticsearch-py 7.x parameter, while 8.x renamed it to `basic_auth`.)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create an Elasticsearch instance on EC2 using Docker, with SSL enabled and username/password authentication, then try to connect to it with LangChain; you will see the error.
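A hedged connection sketch that triggers it (the URL and credentials are placeholders):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import ElasticsearchStore

store = ElasticsearchStore(
    es_url="https://my-ec2-host:9200",
    es_user="elastic",
    es_password="changeme",
    index_name="test-index",
    embedding=OpenAIEmbeddings(),
)
```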
### Expected behavior
The expected behavior is successful authentication | ElasticSearch authentication | https://api.github.com/repos/langchain-ai/langchain/issues/10077/comments | 2 | 2023-09-01T05:07:53Z | 2023-12-08T16:04:45Z | https://github.com/langchain-ai/langchain/issues/10077 | 1,876,736,736 | 10,077 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: `0.0278`
Python: `3.10`
Runpod version: `1.2.0`
I'm experiencing some issues when running Runpod (TGI gpu cloud) and langchain, primarily when I try to run the chain.
For reference, I'm using TheBloke/VicUnlocked-30B-LoRA-GPTQ model in TGI on Runpod A4500 GPU cloud.
I initialize the pod from the UI and connect to it with the runpod-python library (version 1.2.0) in my python 3.10 environment.
My prompt template is as follows:
```
prompt = PromptTemplate(
input_variables=['instruction','summary'],
template="""### Instruction:
{instruction}
### Input:
{summary}
### Response:
""")
```
The instruction is a simple instruction to extract relevant insights from a summary. My LLM is instantiated as such:
```python
inference_server_url = f'https://{pod["id"]}-{port}.proxy.runpod.net'  # note: pod and port are defined previously
llm = HuggingFaceTextGenInference(inference_server_url=inference_server_url)
```
And I am trying to run the model as such:
```
summary = ... # summary here
instruction = ... #instruction here
chain.run({"instruction": instruction, "summary": summary}) #**_Note: Error occurs from this line!!_**
```
But I get this error:
```
File ~/anaconda3/envs/py310/lib/python3.10/site-packages/langchain/chains/base.py:282, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
280 except (KeyboardInterrupt, Exception) as e:
281 run_manager.on_chain_error(e)
--> 282 raise e
283 run_manager.on_chain_end(outputs)
284 final_outputs: Dict[str, Any] = self.prep_outputs(
285 inputs, outputs, return_only_outputs
...
---> 81 message = payload["error"]
82 if "error_type" in payload:
83 error_type = payload["error_type"]
KeyError: 'error'
```
Any ideas?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
My prompt template is as follows:
```python
from langchain.prompts import PromptTemplate
from langchain.llms import HuggingFaceTextGenInference

prompt = PromptTemplate(
    input_variables=['instruction', 'summary'],
    template="""### Instruction:
{instruction}
### Input:
{summary}
### Response:
""")
```
The instruction is a simple instruction to extract relevant insights from a summary. My LLM is instantiated as such:
```python
inference_server_url = f'https://{pod["id"]}-{port}.proxy.runpod.net'  # note: pod and port are defined previously
llm = HuggingFaceTextGenInference(inference_server_url=inference_server_url)
```
And I am trying to run the model as such:
```python
from langchain.chains import LLMChain

summary = ...  # summary here
instruction = ...  # instruction here
chain = LLMChain(llm=llm, prompt=prompt)  # (assumed: chain construction was omitted in the original report)
chain.run({"instruction": instruction, "summary": summary})
```
And then running chain.run I get the error as mentioned above.
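One hedged debugging step is to hit the TGI endpoint directly and inspect the raw payload (the `/generate` route and JSON shape are standard TGI, but worth confirming for this image):
```python
import requests

resp = requests.post(
    f"{inference_server_url}/generate",
    json={"inputs": "Hello", "parameters": {"max_new_tokens": 20}},
    timeout=60,
)
print(resp.status_code, resp.text)  # a non-JSON or HTML error page would explain the KeyError
```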
### Expected behavior
What's expected is that I should be receiving the output from the runpod GPU cloud that is hosting the model, as per this guide that I am following:
https://colab.research.google.com/drive/10BJcKRBtMlpm2hsS2antarSRgEQY3AQq#scrollTo=lyVYLW2thTMg | Receiving a unclear KeyError: 'error' when using Langchain HuggingFaceTextInference on Runpod GPU | https://api.github.com/repos/langchain-ai/langchain/issues/10072/comments | 2 | 2023-09-01T00:24:57Z | 2023-12-08T16:04:50Z | https://github.com/langchain-ai/langchain/issues/10072 | 1,876,474,014 | 10,072 |
[
"hwchase17",
"langchain"
]
| ### System Info
### Error
I am using **Supabase Vector Store**:
```python
embeddings = OpenAIEmbeddings()
vectorstore_public = SupabaseVectorStore(
client=supabase_client,
embedding=embeddings,
table_name="documents",
query_name="match_documents",
)
```
And I am crawling pages from the internet using the `WebResearchRetriever`.
But, in `WebResearchRetriever._get_relevant_documents`
```python
# Search for relevant splits
# TODO: make this async
logger.info("Grabbing most relevant splits from urls...")
docs = []
for query in questions:
docs.extend(self.vectorstore.similarity_search(query))
```
The `vectorstore.similarity_search` call leads to an RPC call inside `SupabaseVectorStore`:
```python
def similarity_search_by_vector_with_relevance_scores(
self, query: List[float], k: int, filter: Optional[Dict[str, Any]] = None
) -> List[Tuple[Document, float]]:
match_documents_params = self.match_args(query, k, filter)
print("match_documents_params", match_documents_params)
print("self.query_name", self.query_name)
res = self._client.rpc(self.query_name, match_documents_params).execute() # here is where the error is thrown
print("res", res)
match_result = [
(
Document(
metadata=search.get("metadata", {}), # type: ignore
page_content=search.get("content", ""),
),
search.get("similarity", 0.0),
)
for search in res.data
if search.get("content")
]
return match_result
```
Error thrown:
```python
File "/lib/python3.10/site-packages/postgrest/_sync/request_builder.py", line 68, in execute
raise APIError(r.json())
postgrest.exceptions.APIError: {'code': '42804', 'details': 'Returned type text does not match expected type bigint in column 1.', 'hint': None, 'message': 'structure of query does not match function result type'}
```
Complete Traceback
```python
Traceback (most recent call last):
File "/lib/python3.10/site-packages/flask/app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
File "/lib/python3.10/site-packages/flask/app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/lib/python3.10/site-packages/flask_cors/extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/lib/python3.10/site-packages/flask/app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
File "/lib/python3.10/site-packages/flask/app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/app.py", line 143, in complete
answer = completion(message)
File "/methods/content_completion.py", line 15, in completion
sources = web_explorer(blog_topic)
File "/methods/web_explorer.py", line 93, in web_explorer
result = retrieve_answer_and_sources(question, llm, web_retriever)
File "/methods/web_explorer.py", line 80, in retrieve_answer_and_sources
return qa_chain({"question": question}, return_only_outputs=True)
File "/lib/python3.10/site-packages/langchain/chains/base.py", line 288, in __call__
raise e
File "/lib/python3.10/site-packages/langchain/chains/base.py", line 282, in __call__
self._call(inputs, run_manager=run_manager)
File "/lib/python3.10/site-packages/langchain/chains/qa_with_sources/base.py", line 151, in _call
docs = self._get_docs(inputs, run_manager=_run_manager)
File "/lib/python3.10/site-packages/langchain/chains/qa_with_sources/retrieval.py", line 50, in _get_docs
docs = self.retriever.get_relevant_documents(
File "/lib/python3.10/site-packages/langchain/schema/retriever.py", line 208, in get_relevant_documents
raise e
File "/lib/python3.10/site-packages/langchain/schema/retriever.py", line 201, in get_relevant_documents
result = self._get_relevant_documents(
File "/methods/retrievers.py", line 252, in _get_relevant_documents
docs.extend(self.vectorstore.similarity_search(query))
File "/lib/python3.10/site-packages/langchain/vectorstores/supabase.py", line 172, in similarity_search
return self.similarity_search_by_vector(
File "/lib/python3.10/site-packages/langchain/vectorstores/supabase.py", line 183, in similarity_search_by_vector
result = self.similarity_search_by_vector_with_relevance_scores(
File "/lib/python3.10/site-packages/langchain/vectorstores/supabase.py", line 217, in similarity_search_by_vector_with_relevance_scores
res = self._client.rpc(self.query_name, match_documents_params).execute()
File "/lib/python3.10/site-packages/postgrest/_sync/request_builder.py", line 68, in execute
raise APIError(r.json())
postgrest.exceptions.APIError: {'code': '42804', 'details': 'Returned type text does not match expected type bigint in column 1.', 'hint': None, 'message': 'structure of query does not match function result type'}
```
requirements.txt to reproduce:
```
aiohttp==3.8.5
aiosignal==1.3.1
anyio==4.0.0
async-timeout==4.0.3
attrs==23.1.0
azure-core==1.29.3
backoff==2.2.1
bcrypt==4.0.1
beautifulsoup4==4.12.2
bleach==6.0.0
blinker==1.6.2
cachetools==5.3.1
certifi==2023.7.22
chardet==5.2.0
charset-normalizer==3.2.0
chroma-hnswlib==0.7.2
chromadb==0.4.8
click==8.1.7
click-log==0.4.0
colorama==0.4.6
coloredlogs==15.0.1
dataclasses-json==0.5.14
deprecation==2.1.0
dnspython==2.4.2
docutils==0.20.1
dotty-dict==1.3.1
emoji==2.8.0
exceptiongroup==1.1.3
fastapi==0.99.1
filetype==1.2.0
Flask==2.3.3
Flask-Cors==4.0.0
flatbuffers==23.5.26
frozenlist==1.4.0
gitdb==4.0.10
GitPython==3.1.32
google-api-core==2.11.1
google-api-python-client==2.97.0
google-auth==2.22.0
google-auth-httplib2==0.1.0
googleapis-common-protos==1.60.0
gotrue==1.0.4
h11==0.14.0
html-sanitizer==2.2.0
httpcore==0.16.3
httplib2==0.22.0
httptools==0.6.0
httpx==0.23.3
humanfriendly==10.0
idna==3.4
importlib-metadata==6.8.0
importlib-resources==6.0.1
invoke==1.7.3
itsdangerous==2.1.2
jaraco.classes==3.3.0
Jinja2==3.1.2
joblib==1.3.2
keyring==24.2.0
langchain==0.0.277
langsmith==0.0.30
lxml==4.9.3
Markdown==3.4.4
MarkupSafe==2.1.3
marshmallow==3.20.1
monotonic==1.6
more-itertools==10.1.0
mpmath==1.3.0
multidict==6.0.4
mypy-extensions==1.0.0
nltk==3.8.1
numexpr==2.8.5
numpy==1.25.2
onnxruntime==1.15.1
openai==0.27.10
outcome==1.2.0
overrides==7.4.0
packaging==23.1
pkginfo==1.9.6
postgrest==0.10.6
posthog==3.0.2
protobuf==4.24.2
pulsar-client==3.3.0
pyasn1==0.5.0
pyasn1-modules==0.3.0
pydantic==1.10.12
Pygments==2.16.1
pymongo==4.5.0
pyparsing==3.1.1
PyPika==0.48.9
PySocks==1.7.1
python-dateutil==2.8.2
python-dotenv==1.0.0
python-gitlab==3.15.0
python-magic==0.4.27
python-semantic-release==7.33.2
PyYAML==6.0.1
readme-renderer==41.0
realtime==1.0.0
regex==2023.8.8
requests==2.31.0
requests-toolbelt==1.0.0
rfc3986==1.5.0
rsa==4.9
selenium==4.11.2
semver==2.13.0
six==1.16.0
smmap==5.0.0
sniffio==1.3.0
sortedcontainers==2.4.0
soupsieve==2.4.1
SQLAlchemy==2.0.20
starlette==0.27.0
storage3==0.5.3
StrEnum==0.4.15
supabase==1.0.3
supafunc==0.2.2
sympy==1.12
tabulate==0.9.0
tenacity==8.2.3
tiktoken==0.4.0
tokenizers==0.13.3
tomlkit==0.12.1
tqdm==4.66.1
trio==0.22.2
trio-websocket==0.10.3
twine==3.8.0
typing-inspect==0.9.0
typing_extensions==4.7.1
unstructured==0.10.10
uritemplate==4.1.1
urllib3==1.26.16
uvicorn==0.23.2
uvloop==0.17.0
waitress==2.1.2
watchfiles==0.20.0
webencodings==0.5.1
websockets==10.4
Werkzeug==2.3.7
wsproto==1.2.0
yarl==1.9.2
zipp==3.16.2
```
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code from this repo: https://github.com/langchain-ai/web-explorer/blob/main/web_explorer.py
But instead of FAISS, use `SupabaseVectorStore` as above.
### Expected behavior
Is this coming from the configuration of my table on Supabase (given the error `Returned type text does not match expected type bigint in column 1.`), or from somewhere else?
<img width="1176" alt="Capture d’écran 2023-08-31 à 20 15 58" src="https://github.com/langchain-ai/langchain/assets/39488794/743d9cd9-fa30-474d-88ce-288e85c50e71">
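My current (unverified) hypothesis: the `match_documents` Postgres function declares `id bigint` in its `returns table (...)` clause while my `documents.id` column is `text`, so the RPC itself fails regardless of LangChain. A quick check that bypasses LangChain entirely (parameter names assumed from the default template):
```python
# Call the RPC directly with a dummy embedding to surface the raw Postgres error
dummy_embedding = [0.0] * 1536
res = supabase_client.rpc(
    "match_documents",
    {"query_embedding": dummy_embedding, "match_count": 3},
).execute()
print(res.data)
```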
| SupabaseVectorStore: Error thrown when calling similarity_search_by_vector_with_relevance_scores | https://api.github.com/repos/langchain-ai/langchain/issues/10065/comments | 6 | 2023-08-31T18:18:02Z | 2023-12-07T16:05:15Z | https://github.com/langchain-ai/langchain/issues/10065 | 1,876,055,791 | 10,065 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Implement Self Query retriever to Redis vector stores.
### Motivation
I was trying the different retrievers for my project (an AI chatbot that answers various human resources and labor law questions based on a dataset built from complete local labor legislation, company handbooks, job descriptions, and SOPs).
I plan to deploy this chatbot in a production environment; therefore, I chose Redis (for its robustness and speed) as the vector store.
### Your contribution
I am not a professional developer, so unfortunately my contribution is limited to "real-world" testing.
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.276
Windows
Python 3.11.4
I created a custom LLM just like in the tutorial, but when I use it with an MRKL agent, the custom response is not returned to the agent.
I added print statements inside my custom LLM: it is called and returns the correct answer, but the agent at the end of the chain never receives this answer.
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://colab.research.google.com/drive/1cHVx3GlzmF4ECV_63ZjlrMPt8P840GSw?usp=sharing
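For reference (in case the notebook is not accessible), a minimal sketch of the pattern I'm following, based on the custom-LLM tutorial:
```python
from typing import Any, List, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM

class CustomLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # Returns a fixed string; the agent should receive this as the LLM output
        return "Final Answer: my custom response"
```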
### Expected behavior
The expected behaviour is for the agent to receive the response from the CustomLLM. | CustomLLM when called inside agent does not return custom behaviour | https://api.github.com/repos/langchain-ai/langchain/issues/10061/comments | 2 | 2023-08-31T17:53:26Z | 2023-12-07T16:05:20Z | https://github.com/langchain-ai/langchain/issues/10061 | 1,876,017,939 | 10,061 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version -- 0.0.277
python verion -- 3.8.8
window platform
### Who can help?
@hwchase17
@agola11
Hi, I am not able to run any LangChain imports on my Windows laptop. For example, if I just run **from langchain.llms import OpenAI** I get the error message below. Can you please help!
TypeError Traceback (most recent call last)
<ipython-input-3-5af9a0f5ffa4> in <module>
1 import os
2 from dotenv import load_dotenv
----> 3 from langchain.llms import OpenAI
~\Anaconda3\lib\site-packages\langchain\__init__.py in <module>
4 from typing import Optional
5
----> 6 from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
7 from langchain.chains import (
8 ConversationChain,
~\Anaconda3\lib\site-packages\langchain\agents\__init__.py in <module>
29
30 """ # noqa: E501
---> 31 from langchain.agents.agent import (
32 Agent,
33 AgentExecutor,
~\Anaconda3\lib\site-packages\langchain\agents\agent.py in <module>
12 import yaml
13
---> 14 from langchain.agents.agent_iterator import AgentExecutorIterator
15 from langchain.agents.agent_types import AgentType
16 from langchain.agents.tools import InvalidTool
~\Anaconda3\lib\site-packages\langchain\agents\agent_iterator.py in <module>
19 )
20
---> 21 from langchain.callbacks.manager import (
22 AsyncCallbackManager,
23 AsyncCallbackManagerForChainRun,
~\Anaconda3\lib\site-packages\langchain\callbacks\__init__.py in <module>
8 """
9
---> 10 from langchain.callbacks.aim_callback import AimCallbackHandler
11 from langchain.callbacks.argilla_callback import ArgillaCallbackHandler
12 from langchain.callbacks.arize_callback import ArizeCallbackHandler
~\Anaconda3\lib\site-packages\langchain\callbacks\aim_callback.py in <module>
3
4 from langchain.callbacks.base import BaseCallbackHandler
----> 5 from langchain.schema import AgentAction, AgentFinish, LLMResult
6
7
~\Anaconda3\lib\site-packages\langchain\schema\__init__.py in <module>
1 """**Schemas** are the LangChain Base Classes and Interfaces."""
2 from langchain.schema.agent import AgentAction, AgentFinish
----> 3 from langchain.schema.cache import BaseCache
4 from langchain.schema.chat_history import BaseChatMessageHistory
5 from langchain.schema.document import BaseDocumentTransformer, Document
~\Anaconda3\lib\site-packages\langchain\schema\cache.py in <module>
4 from typing import Any, Optional, Sequence
5
----> 6 from langchain.schema.output import Generation
7
8 RETURN_VAL_TYPE = Sequence[Generation]
~\Anaconda3\lib\site-packages\langchain\schema\output.py in <module>
7 from langchain.load.serializable import Serializable
8 from langchain.pydantic_v1 import BaseModel, root_validator
----> 9 from langchain.schema.messages import BaseMessage, BaseMessageChunk
10
11
~\Anaconda3\lib\site-packages\langchain\schema\messages.py in <module>
146
147
--> 148 class HumanMessageChunk(HumanMessage, BaseMessageChunk):
149 """A Human Message chunk."""
150
~\Anaconda3\lib\site-packages\pydantic\main.cp38-win_amd64.pyd in pydantic.main.ModelMetaclass.__new__()
~\Anaconda3\lib\abc.py in __new__(mcls, name, bases, namespace, **kwargs)
83 """
84 def __new__(mcls, name, bases, namespace, **kwargs):
---> 85 cls = super().__new__(mcls, name, bases, namespace, **kwargs)
86 _abc_init(cls)
87 return cls
TypeError: multiple bases have instance lay-out conflict
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
!pip -q install openai langchain huggingface_hub
import openai
import dotenv
import os
os.environ['OPENAI_API_KEY'] = ' ... '
from langchain.llms import OpenAI
```
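One hedged thing to try: the traceback points at a compiled pydantic build (`main.cp38-win_amd64.pyd`), so reinstalling/upgrading pydantic within the v1 range might resolve the layout conflict (an assumption, not a confirmed fix):
```python
!pip install -U "pydantic>=1.10,<2"
```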
### Expected behavior
Should be able to run without error | TypeError: multiple bases have instance lay-out conflict | https://api.github.com/repos/langchain-ai/langchain/issues/10060/comments | 6 | 2023-08-31T17:02:38Z | 2023-12-11T16:05:28Z | https://github.com/langchain-ai/langchain/issues/10060 | 1,875,918,071 | 10,060 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version: 0.0.276
Python version: 3.11.2
Platform: x86_64 Debian 12.2.0-14
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os, openai

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Redis

INDEX_NAME = "my-index"  # placeholder; the real index name is omitted from the report
question = "..."         # placeholder query

embeddings = OpenAIEmbeddings()
rds = Redis.from_existing_index(
    embeddings,
    index_name=INDEX_NAME,
    schema="redis_schema_kws.yaml",
    redis_url="redis://10.0.1.21:6379",
)
returned_docs_mmr = rds.max_marginal_relevance_search(question, k=3, fetch_k=3, lambda_mult=0.8)
```
The call fails with a bare `NotImplementedError`.
Could you share an ETA for implementing MMR search in the Redis vector store?
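Until that lands, a client-side workaround seems feasible (a rough sketch, assuming re-embedding the candidate documents is acceptable; `rds`, `embeddings`, and `question` come from the snippet above):

```python
import numpy as np

from langchain.vectorstores.utils import maximal_marginal_relevance

# Fetch a candidate pool with plain similarity search, re-embed the hits
# locally, then pick an MMR subset client-side.
candidates = rds.similarity_search(question, k=10)
cand_embs = embeddings.embed_documents([d.page_content for d in candidates])
mmr_idxs = maximal_marginal_relevance(
    np.array(embeddings.embed_query(question)),
    cand_embs,
    lambda_mult=0.8,
    k=3,
)
returned_docs_mmr = [candidates[i] for i in mmr_idxs]
```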
### Expected behavior
The call should return the top-k `Document` objects selected by maximal marginal relevance. | max_marginal_relevance_search not implemented in Redis | https://api.github.com/repos/langchain-ai/langchain/issues/10059/comments | 4 | 2023-08-31T16:34:30Z | 2023-09-12T22:31:16Z | https://github.com/langchain-ai/langchain/issues/10059 | 1,875,877,282 | 10,059
[
"hwchase17",
"langchain"
]
| ### System Info
### System Info
Given a string containing a JSON array, like those I often get from LLM completions, I'm trying to convert the string to a Python list of strings. I'm finding that `PydanticOutputParser` throws an error where plain old `json.loads` with `strict=False` does fine.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
For example this code:
```python
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field
import json
class Lines(BaseModel):
lines: list[str] = Field(description="array of strings")
line_parser = PydanticOutputParser(pydantic_object=Lines)
lines = '[\n "line 1",\n "line 2"\n]'
print("Just JSON:")
print(json.loads(lines, strict=False))
print("LangChain Pydantic:")
print(line_parser.parse(lines))
```
produces the following output:
```sh
Just JSON:
['line 1', 'line 2']
LangChain Pydantic:
Traceback (most recent call last):
File "/opt/miniconda3/envs/gt/lib/python3.10/site-packages/langchain/output_parsers/pydantic.py", line 27, in parse
json_object = json.loads(json_str, strict=False)
File "/opt/miniconda3/envs/gt/lib/python3.10/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
File "/opt/miniconda3/envs/gt/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/miniconda3/envs/gt/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/benjaminbasseri/Dropbox/Work/GT Schools/scratch work/meme-generation/parse_test.py", line 14, in <module>
print(line_parser.parse(lines))
File "/opt/miniconda3/envs/gt/lib/python3.10/site-packages/langchain/output_parsers/pydantic.py", line 33, in parse
raise OutputParserException(msg, llm_output=text)
langchain.schema.output_parser.OutputParserException: Failed to parse Lines from completion [
"line 1",
"line 2"
]. Got: Expecting value: line 1 column 1 (char 0)
```
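For what it's worth, wrapping the array in an object keyed by the schema's field name does parse (a quick check with the same `Lines` model), which suggests the parser's object-extraction step chokes on bare top-level arrays:

```python
# Hypothetical sanity check: the same data as a JSON object parses cleanly.
wrapped = '{"lines": ["line 1", "line 2"]}'
print(line_parser.parse(wrapped))  # -> lines=['line 1', 'line 2']
```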
### Expected behavior
I would expect the parser to handle a simple case like this. | PydanticOutputParser failing to parse basic string into JSON | https://api.github.com/repos/langchain-ai/langchain/issues/10057/comments | 7 | 2023-08-31T16:12:05Z | 2024-04-10T16:14:50Z | https://github.com/langchain-ai/langchain/issues/10057 | 1,875,844,490 | 10,057
[
"hwchase17",
"langchain"
]
| ### System Info
0.0.272, linux, python 3.11.2
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
I have a pretty standard RetrievalQA chain like this. The `on_llm_start` callback isn't executed since version 0.0.272 (I verified that 0.0.271 still works).
```python
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

# llm_params / chunking_params are the reporter's own settings dicts
loader = TextLoader("data.txt")
documents = loader.load()
openai = ChatOpenAI(**llm_params, callbacks=[StreamingStdOutCallbackHandler()])
text_splitter = RecursiveCharacterTextSplitter(**chunking_params)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)
qa = RetrievalQA.from_chain_type(
    llm=openai,
    retriever=docsearch.as_retriever(),
)
```
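A narrower probe can isolate whether the event fires on the LLM call at all (a sketch; `StartProbe` is a hypothetical handler, and the chat-model `on_chat_model_start` → `on_llm_start` fallback is assumed here):

```python
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI

class StartProbe(BaseCallbackHandler):
    def on_llm_start(self, serialized, prompts, **kwargs):
        print(f"on_llm_start fired with {len(prompts)} prompt(s)")

# Chat models emit on_chat_model_start; handlers that don't implement it are
# expected to fall back to on_llm_start, so this should print once per call.
ChatOpenAI(callbacks=[StartProbe()]).predict("ping")
```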
### Expected behavior
`on_llm_start` callback should be called | StreamingStdOutCallbackHandler().on_llm_start isn't called since version 0.0.272 (0.0.271 still works) | https://api.github.com/repos/langchain-ai/langchain/issues/10054/comments | 6 | 2023-08-31T15:30:31Z | 2024-08-07T19:33:12Z | https://github.com/langchain-ai/langchain/issues/10054 | 1,875,770,074 | 10,054 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.249
Windows
Python 3.11.4
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The official example described here does not work:
https://python.langchain.com/docs/integrations/vectorstores/matchingengine#create-vectorstore-from-texts
The following code raises an exception:
```python
from langchain.vectorstores import MatchingEngine

texts = [
    "The cat sat on",
    "the mat.",
    "I like to",
    "eat pizza for",
    "dinner.",
    "The sun sets",
    "in the west.",
]

vector_store = MatchingEngine.from_components(
    project_id="",
    region="",
    gcs_bucket_name="",
    index_id="",
    endpoint_id="",
)
```
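If the empty strings above were in the actual run (rather than redacted for the report), they alone would explain the `InvalidArgument`. For reference, a sketch with the shapes the API expects (every value below is a hypothetical placeholder):

```python
# All project/index identifiers here are hypothetical placeholders.
vector_store = MatchingEngine.from_components(
    project_id="my-gcp-project",
    region="us-central1",
    gcs_bucket_name="my-staging-bucket",
    index_id="1234567890",
    endpoint_id="0987654321",
)
```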
### Expected behavior
The vector store should be created without errors. Instead, the call raises:
```
Traceback (most recent call last):
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\google\api_core\grpc_helpers.py", line 72, in error_remapped_callable
return callable_(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\grpc\_channel.py", line 1030, in __call__
return _end_unary_response_blocking(state, call, False, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\grpc\_channel.py", line 910, in _end_unary_response_blocking
raise _InactiveRpcError(state) # pytype: disable=not-instantiable
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Request contains an invalid argument."
debug_error_string = "UNKNOWN:Error received from peer ipv4:172.217.16.42:443 {created_time:"2023-08-31T14:00:16.506362524+00:00", grpc_status:3, grpc_message:"Request contains an invalid argument."}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\GitHub\onecloud-chatbot\matching_engine_insert.py", line 13, in <module>
vector_store = MatchingEngine.from_components(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\langchain\vectorstores\matching_engine.py", line 280, in from_components
endpoint = cls._create_endpoint_by_id(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\langchain\vectorstores\matching_engine.py", line 386, in _create_endpoint_by_id
return aiplatform.MatchingEngineIndexEndpoint(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\google\cloud\aiplatform\matching_engine\matching_engine_index_endpoint.py", line 130, in __init__
self._gca_resource = self._get_gca_resource(resource_name=index_endpoint_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\google\cloud\aiplatform\base.py", line 648, in _get_gca_resource
return getattr(self.api_client, self._getter_method)(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\google\cloud\aiplatform_v1\services\index_endpoint_service\client.py", line 707, in get_index_endpoint
response = rpc(
^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\google\api_core\gapic_v1\method.py", line 113, in __call__
return wrapped_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\google\api_core\retry.py", line 349, in retry_wrapped_func
return retry_target(
^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\google\api_core\retry.py", line 191, in retry_target
return target()
^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\google\api_core\grpc_helpers.py", line 74, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.InvalidArgument: 400 Request contains an invalid argument.
```
[
"hwchase17",
"langchain"
]
| ### System Info
Python Version: 3.11.4 (main, Jul 5 2023, 08:40:20) [Clang 14.0.6 ]
Langchain Version: 0.0.273
Jupyter Notebook Version: 5.3.0
### Who can help?
@hwchase17
@agola
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behaviour:
1. Create a function to be used
```
def _handle_error(error) -> str:
return str(error)[:50]
```
2. Run `create_pandas_dataframe_agent`, pass in `handle_parsing_errors=_handle_error`
### Expected behavior
I expect an output similar to the last section of https://python.langchain.com/docs/modules/agents/how_to/handle_parsing_errors, but instead I still get the normal OutputParserError | `create_pandas_dataframe_agent` does not pass the `handle_parsing_error` variable into the underlying Agent. | https://api.github.com/repos/langchain-ai/langchain/issues/10045/comments | 6 | 2023-08-31T11:53:16Z | 2023-09-03T21:31:02Z | https://github.com/langchain-ai/langchain/issues/10045 | 1,875,375,970 | 10,045 |
[
"hwchase17",
"langchain"
]
| ### System Info
`langchain` - 0.0.267
`openai` - 0.27.7
MacOS 13.4.1 (22F82)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This code was working when I used `langchain 0.0.200` (the call goes to an `AzureOpenAI` endpoint):
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

# model_config / endpoint_config are the reporter's own settings objects
llm = ChatOpenAI(
    model_name=model_config.model_engine,
    deployment_id=model_config.model_engine,
    temperature=model_config.temperature,
    max_tokens=model_config.max_tokens_for_request,
    top_p=model_config.top_p,
    openai_api_key=endpoint_config.api_key.secret,
    api_base=endpoint_config.api_base,
    api_type=endpoint_config.api_type,
    api_version=endpoint_config.api_version,
)

chat_prompt = [SystemMessage(...), HumanMessage(...)]  # message contents elided in the report
response = llm(chat_prompt)
```
Once I updated `langchain` to `0.0.267`, I'm getting this error
```shell
openai.error.InvalidRequestError: Invalid URL (POST /v1/openai/deployments/gpt-35-turbo/chat/completions)
```
The endpoint itself is working (once I reverted back to `0.0.200` all started to work again)
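For reference, newer releases route Azure endpoints through a dedicated wrapper; a sketch of the equivalent setup (assuming the same config objects; untested against 0.0.267):

```python
from langchain.chat_models import AzureChatOpenAI

llm = AzureChatOpenAI(
    deployment_name=model_config.model_engine,
    temperature=model_config.temperature,
    max_tokens=model_config.max_tokens_for_request,
    openai_api_key=endpoint_config.api_key.secret,
    openai_api_base=endpoint_config.api_base,
    openai_api_type=endpoint_config.api_type,
    openai_api_version=endpoint_config.api_version,
)
```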
### Expected behavior
I expect to get an answer from the LLM. | Getting invalid URL post after updating langchain from 0.0.200 to 0.0.267 | https://api.github.com/repos/langchain-ai/langchain/issues/10044/comments | 2 | 2023-08-31T11:17:31Z | 2023-12-07T16:05:30Z | https://github.com/langchain-ai/langchain/issues/10044 | 1,875,323,135 | 10,044
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm using function calls to obtain and process data. I have two classes designed as follows:
```python
from pydantic import BaseModel, Field

# args schemas must subclass BaseModel for Field() to take effect
class GetDataInput(BaseModel):
    user_input: str = Field(description="User input")

class ProcessDataInput(BaseModel):
    data_url: str = Field(description="Link to the data")
```
Moreover, I'm employing the `ConversationBufferWindowMemory`.
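For context, a minimal sketch of how this memory is typically wired into a functions agent (the agent setup isn't in the report, so the tool list and model choice below are assumptions):

```python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts import MessagesPlaceholder

tools = []  # placeholder; real tools would use the two input schemas above

memory = ConversationBufferWindowMemory(
    k=5, memory_key="chat_history", return_messages=True
)
agent = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent=AgentType.OPENAI_FUNCTIONS,
    memory=memory,
    agent_kwargs={
        "extra_prompt_messages": [MessagesPlaceholder(variable_name="chat_history")]
    },
)
```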
While the initial question posed to the bot yields the expected response, I encounter an issue when asking a subsequent question. Occasionally, instead of fetching new data, the bot utilizes the previous `data_url`.
How can I effectively manage this kind of situation?
Thank you for your assistance.
### Suggestion:
_No response_ | Function calling use the wrong context | https://api.github.com/repos/langchain-ai/langchain/issues/10040/comments | 4 | 2023-08-31T10:12:09Z | 2023-12-07T16:05:35Z | https://github.com/langchain-ai/langchain/issues/10040 | 1,875,216,407 | 10,040 |