issue_owner_repo (list, length 2-2) | issue_body (string, length 0-261k, ⌀ = nullable) | issue_title (string, length 1-925) | issue_comments_url (string, length 56-81) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, length 20-20) | issue_updated_at (string, length 20-20) | issue_html_url (string, length 37-62) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am currently working on the task of **article rewriting**, and neither the QA chain nor the Summarize chain is quite suitable for my task.
Since I need to rewrite **multiple "lengthy"** articles, I would like to know how to use the `chain_type` argument in `LLMChain`. Alternatively, are there any other methods to achieve segment-level rewriting? Thank you.
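A minimal sketch of one way to do segment-level rewriting with a plain `LLMChain` applied per chunk; the splitter settings, prompt wording, and the `article_text` variable are illustrative assumptions:
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Split the lengthy article into chunks that fit the model's context window.
splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
chunks = splitter.split_text(article_text)  # article_text: your input article (assumed)

prompt = PromptTemplate.from_template(
    "Rewrite the following passage in your own words:\n\n{passage}"
)
chain = LLMChain(llm=ChatOpenAI(temperature=0), prompt=prompt)

# Rewrite each segment independently, then join the results.
rewritten = "\n".join(chain.run(passage=chunk) for chunk in chunks)
```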
### Suggestion:
_No response_ | Issue: How to use chain_type args in LLMchain? | https://api.github.com/repos/langchain-ai/langchain/issues/8565/comments | 2 | 2023-08-01T06:55:55Z | 2023-11-08T16:07:15Z | https://github.com/langchain-ai/langchain/issues/8565 | 1,830,592,050 | 8,565 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add embedding support to the callback system. Here is one approach I have in mind.
- [ ] Add `on_embedding_start` method on `CallbackManagerMixin` in `libs/langchain/langchain/callbacks/base.py`.
- [ ] Implement `EmbeddingManagerMixin` with `on_embedding_end` and `on_embedding_error` methods in `libs/langchain/langchain/callbacks/base.py`.
- [ ] Add embedding callback hook to `Embeddings` abstract base class in `libs/langchain/langchain/embeddings/base.py`.
- [ ] Tweak concrete embeddings implementations in `libs/langchain/langchain/embeddings` as necessary.
One minimally invasive approach would be:
- Implement concrete `embed_documents`, `embed_query`, `aembed_documents`, and `aembed_query` methods on the abstract `Embeddings` base class that contain the embeddings callback hook. Add abstract `_embed_documents` and `_embed_query` methods and unimplemented `_aembed_documents` and `_aembed_query` methods to the base class.
- Rename existing concrete implementations of `embed_documents`, `embed_query`, `aembed_documents`, and `aembed_query` to `_embed_documents`, `_embed_query`, `_aembed_documents`, and `_aembed_query`.
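A rough sketch of that minimally invasive pattern; the hook names are the ones proposed above and do not exist in LangChain today:
```python
from abc import ABC, abstractmethod
from typing import Any, List

class Embeddings(ABC):
    """Sketch of the proposed base class; the callback plumbing is hypothetical."""

    callbacks: Any = None  # would be LangChain's callback manager in practice

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        run_manager = self.callbacks.on_embedding_start(texts)  # proposed hook
        try:
            embeddings = self._embed_documents(texts)
        except Exception as error:
            run_manager.on_embedding_error(error)  # proposed hook
            raise
        run_manager.on_embedding_end(embeddings)  # proposed hook
        return embeddings

    @abstractmethod
    def _embed_documents(self, texts: List[str]) -> List[List[float]]:
        """Concrete providers implement this instead of embed_documents."""
```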
### Motivation
Embeddings are useful for LLM application monitoring and debugging. I want to build a callback handler that enables LangChain users to visualize their data in [Phoenix](https://github.com/Arize-ai/phoenix), an open-source tool that provides debugging workflows for retrieval-augmented generation. At the moment, it is not possible to get the query embeddings out of LangChain's callback system, for example, when using the `RetrievalQA` chain. Here is an [example notebook](https://github.com/Arize-ai/phoenix/blob/main/tutorials/langchain_pinecone_search_and_retrieval_tutorial.ipynb) where I sub-class `OpenAIEmbeddings` to get out the embedding data:
```python
from typing import Dict, List, Optional

import numpy as np
import pandas as pd
from langchain.embeddings import OpenAIEmbeddings


class OpenAIEmbeddingsWrapper(OpenAIEmbeddings):
    """
    A wrapper around OpenAIEmbeddings that stores the query and document
    embeddings.
    """

    query_text_to_embedding: Dict[str, List[float]] = {}
    document_text_to_embedding: Dict[str, List[float]] = {}

    def embed_query(self, text: str) -> List[float]:
        embedding = super().embed_query(text)
        self.query_text_to_embedding[text] = embedding
        return embedding

    def embed_documents(self, texts: List[str], chunk_size: Optional[int] = 0) -> List[List[float]]:
        embeddings = super().embed_documents(texts, chunk_size)
        for text, embedding in zip(texts, embeddings):
            self.document_text_to_embedding[text] = embedding
        return embeddings

    @property
    def query_embedding_dataframe(self) -> pd.DataFrame:
        return self._convert_text_to_embedding_map_to_dataframe(self.query_text_to_embedding)

    @property
    def document_embedding_dataframe(self) -> pd.DataFrame:
        return self._convert_text_to_embedding_map_to_dataframe(self.document_text_to_embedding)

    @staticmethod
    def _convert_text_to_embedding_map_to_dataframe(
        text_to_embedding: Dict[str, List[float]]
    ) -> pd.DataFrame:
        texts, embeddings = map(list, zip(*text_to_embedding.items()))
        embedding_arrays = [np.array(embedding) for embedding in embeddings]
        return pd.DataFrame.from_dict(
            {
                "text": texts,
                "text_vector": embedding_arrays,
            }
        )
```
I would like the LangChain callback system to support this use-case.
This feature has been [requested for TypeScript](https://github.com/hwchase17/langchainjs/issues/586) and has an [open PR](https://github.com/hwchase17/langchainjs/pull/1859). An additional motivation is to maintain parity with the TypeScript library.
### Your contribution
I am willing to implement, test, and document this feature with guidance from the LangChain team. I am also happy to provide feedback on an implementation by the LangChain team by building an example callback handler using the embeddings hook. | Add callback support for embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/8564/comments | 2 | 2023-08-01T06:13:53Z | 2023-11-07T16:06:08Z | https://github.com/langchain-ai/langchain/issues/8564 | 1,830,536,926 | 8,564 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Currently, langchain supports petals, but I think we should also support using a petals API endpoint. https://github.com/petals-infra/chat.petals.dev
### Motivation
The idea here is that users don't need to run the base application on their system, and can just use the API directly.
I think this is useful as a developer for speed and for testing things quickly. I also think it would be the easiest and most reliable way for any LangChain user to get access to a high-quality LLM on low-spec hardware.
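For illustration, here is the kind of raw HTTP call such a wrapper might make; the endpoint path and form fields are assumptions based on the chat.petals.dev README:
```python
import requests

# Hypothetical direct call to the hosted Petals HTTP API.
response = requests.post(
    "https://chat.petals.dev/api/v1/generate",
    data={
        "model": "stabilityai/StableBeluga2",  # example model name (assumed)
        "inputs": "A human talks to a powerful AI.\nHuman: Hi!\nAI:",
        "max_new_tokens": 64,
    },
    timeout=120,
)
print(response.json())  # expected to contain the generated continuation
```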
### Your contribution
I am happy to add this myself if I can be guided on which parts of the code need to be changed. | Petals API Support | https://api.github.com/repos/langchain-ai/langchain/issues/8563/comments | 2 | 2023-08-01T05:38:35Z | 2023-11-28T11:01:45Z | https://github.com/langchain-ai/langchain/issues/8563 | 1,830,501,646 | 8,563 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Most databases include comment metadata on both tables and columns in a table. It would be nice to be able to pass this additional context to the LLM to get a better response.
The implementation could include adding two parameters to the [SQLDatabase](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/utilities/sql_database.py) object. For example:
- `table_metadata_sql` (perhaps `table_comment_sql`) runs a query and expects a table with two columns: `table_name` and `comment`
- `column_metadata_sql` (perhaps `column_comment_sql`) runs a query and expects a table with three columns: `table_name`, `col_name`, and `comment`
Perhaps these two params could be combined into a single `metadata_sql` which returns a four-column table with `table_name`, `table_comment`, `column_name`, and `column_comment`.
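For illustration, a `column_comment_sql` for PostgreSQL could look like the sketch below; the proposed parameter does not exist yet, and the query itself is an assumption:
```python
from langchain.utilities import SQLDatabase

# Hypothetical query returning table_name, col_name, comment from pg_catalog.
column_comment_sql = """
SELECT c.relname AS table_name, a.attname AS col_name, d.description AS comment
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_attribute a ON a.attrelid = c.oid AND a.attnum > 0
LEFT JOIN pg_catalog.pg_description d ON d.objoid = c.oid AND d.objsubid = a.attnum
WHERE c.relkind = 'r'
"""

db = SQLDatabase.from_uri(
    "postgresql://user:pass@localhost/db",
    # column_comment_sql=column_comment_sql,  # proposed parameter, not yet supported
)
```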
### Motivation
The main motivation behind this is to provide the LLM additional context beyond the CREATE TABLE and sample rows. Although this will be a more costly request (more tokens), I believe in many instances will lead to better SQL being generated. I also want to encourage better documentation of database objects in the data warehouse or data lake.
### Your contribution
Happy to submit a PR for this. Please weigh in on any design decisions:
- how many parameters to use 1 or 2?
- where to include the comments in the prompt, part of the CREATE TABLE as SQL comments? | Add metadata parameter(s) to SQLDatabase class | https://api.github.com/repos/langchain-ai/langchain/issues/8558/comments | 2 | 2023-08-01T01:08:42Z | 2023-11-07T16:06:18Z | https://github.com/langchain-ai/langchain/issues/8558 | 1,830,288,016 | 8,558 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain: 0.0.248
Python: 3.9.17
OS version: Linux 6.1.27-43.48.amzn2023.x86_64
### Who can help?
I will submit a PR for a solution to this problem
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code:
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.document_loaders import UnstructuredMarkdownLoader
from langchain.text_splitter import CharacterTextSplitter


def testElement():
    loader = UnstructuredMarkdownLoader("filepath", mode="elements")
    documents = loader.load()
    text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)
    split_docs = text_splitter.split_documents(documents)
    embeddings = OpenAIEmbeddings()
    docsearch = Chroma.from_documents(split_docs, embeddings)
```
Also need to have a link format in the markdown file to be load, for example:
```
- [Google Developer Documentation Style Guide](https://developers.google.com/style)
```
Error Message:
```
138 # isinstance(True, int) evaluates to True, so we need to check for bools separately
139 if not isinstance(value, (str, int, float)) or isinstance(value, bool):
--> 140 raise ValueError(
141 f"Expected metadata value to be a str, int, or float, got {value} which is a {type(value)}"
142 )
ValueError: Expected metadata value to be a str, int, or float, got [{'text': 'Git', 'url': '#git'}] which is a <class 'list'>
```
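A possible workaround until this is handled upstream: strip non-scalar metadata before indexing. This sketch reuses `split_docs` and `embeddings` from the snippet above and mirrors the type check quoted in the error:
```python
def drop_complex_metadata(docs):
    # Chroma accepts only str/int/float metadata values (bools are rejected too),
    # so drop anything else, e.g. the list of links produced in "elements" mode.
    for doc in docs:
        doc.metadata = {
            k: v
            for k, v in doc.metadata.items()
            if isinstance(v, (str, int, float)) and not isinstance(v, bool)
        }
    return docs

docsearch = Chroma.from_documents(drop_complex_metadata(split_docs), embeddings)
```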
### Expected behavior
I expect to see the split documents loaded into Chroma; however, this raises an error because the metadata does not pass the type check. | ValueError: Expected metadata value to be a str, int, or float, got [{'text': 'Git', 'url': '#git'}] which is a <class 'list'> when storing into Chroma vector stores using using element mode of UnstructuredMarkdownLoader | https://api.github.com/repos/langchain-ai/langchain/issues/8556/comments | 17 | 2023-08-01T00:31:42Z | 2024-04-14T15:47:48Z | https://github.com/langchain-ai/langchain/issues/8556 | 1,830,261,522 | 8,556 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python: v3.11
Langchain: v0.0.248
### Who can help?
I'll submit a PR for it tonight, just wanted to get the Issue in before.
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [x] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Background:
I'm using Azure Cognitive Search as my vector store. In `azuresearch`, if you call `add_texts` with no metadata, you get an exception.
Steps to reproduce behavior:
1. create AzureSearch object
2. call `add_texts` without specifying the metadata parameter
3. You'll get an error
> UnboundLocalError: cannot access local variable 'additional_fields' where it is not associated with a value
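A possible interim workaround, assuming the bug is that `additional_fields` is only assigned when metadata is present: pass an explicit metadata dict per text (`vector_store` stands in for the AzureSearch object from step 1):
```python
texts = ["some text", "more text"]
# Hypothetical workaround: supply one metadata dict per text so the code path
# that assigns `additional_fields` is always taken.
vector_store.add_texts(texts, metadatas=[{"source": "manual"} for _ in texts])
```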
### Expected behavior
No error occurs; the texts are added to the vector store. | Vector Store: Azure Cognitive Search :: add_texts throws an error if called with no metadata | https://api.github.com/repos/langchain-ai/langchain/issues/8544/comments | 2 | 2023-07-31T21:13:43Z | 2023-11-06T16:05:33Z | https://github.com/langchain-ai/langchain/issues/8544 | 1,830,035,282 | 8,544 |
[
"hwchase17",
"langchain"
]
| ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```python
try:
    with get_openai_callback() as cb:
        llm_chain = LLMChain(llm=llm, prompt=prompt_main)
        all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
        threshold = (llm.get_num_tokens(text=all_text) + 800)
        dataframe_copy.loc[index, "Total Tokens"] = threshold
        if int(threshold) <= 4000:
            chatgpt_output = llm_chain.run({"prompt": prompt, "use_cases_dictionary": usescases, "transcript": transcript})
            chatgpt_output = text_post_processing(chatgpt_output)
            dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
            dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
        else:
            dataframe_copy.loc[index, "ChatGPT Output"] = " "
            dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
    dataframe_copy.loc[index, "ChatGPT Output"] = " "
    dataframe_copy.loc[index, "Cost (USD)"] = " "
    continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | https://api.github.com/repos/langchain-ai/langchain/issues/8542/comments | 3 | 2023-07-31T21:01:43Z | 2023-08-14T23:45:18Z | https://github.com/langchain-ai/langchain/issues/8542 | 1,830,021,223 | 8,542 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.sql_database.SQLDatabase.html
### Idea or request for content:
Documentation is missing, although I can import this class in the latest version. | DOC: Missing documentation langchain.utilities.SQLDatabase | https://api.github.com/repos/langchain-ai/langchain/issues/8535/comments | 1 | 2023-07-31T19:01:18Z | 2023-11-06T16:05:38Z | https://github.com/langchain-ai/langchain/issues/8535 | 1,829,840,969 | 8,535 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain ==0.0.248
Platform Windows 10
Python == 3.10.9
Whenever I run `from langchain.schema import HumanMessage` I get the error:
**ImportError: cannot import name 'HumanMessage' from 'langchain.schema'**
I have tried updating llama-index but am still getting the same error.
@agola11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.llms import OpenAI

llm = OpenAI(openai_api_key="key")

from langchain.schema import HumanMessage
```
### Expected behavior
It should work | Cannot import name 'HumanMessage' from 'langchain.schema' | https://api.github.com/repos/langchain-ai/langchain/issues/8527/comments | 8 | 2023-07-31T17:47:47Z | 2023-08-01T16:35:32Z | https://github.com/langchain-ai/langchain/issues/8527 | 1,829,736,076 | 8,527 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
search = SerpAPIWrapper()
tools = define_tools(llm_chain,search)
tools = [
Tool(
name="Search",
func=search.run,
description="useful for when you need to answer questions about current events",
# return_direct = True
),
]
prompt = CustomPromptTemplate(
template=template,
tools=tools,
input_variables=["input", "intermediate_steps", "chat_history"]
)
output_parser = CustomOutputParser()
llm = ChatOpenAI(temperature=0, model_name = "gpt-4-0613", streaming = True, callbacks = [MyCustomHandler(new_payload=new_payload)])
llm_chain = LLMChain(llm=llm, prompt=prompt)
agent = LLMSingleActionAgent(llm_chain=llm_chain, output_parser=output_parser, stop=["\nObservation:"], allowed_tools=tools)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent,
tools=tools,
verbose=True, memory = memory, early_stopping_method='generate')
### Suggestion:
_No response_ | how to Stream Search tool responses | https://api.github.com/repos/langchain-ai/langchain/issues/8526/comments | 2 | 2023-07-31T17:26:05Z | 2023-11-16T16:06:41Z | https://github.com/langchain-ai/langchain/issues/8526 | 1,829,703,530 | 8,526 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am just trying to run `from langchain import LLMMathChain, SerpAPIWrapper, SQLDatabase, SQLDatabaseChain`:
```
Traceback (most recent call last):
  File "/home/huangj/01_LangChain/LangChainCHSample/05_05_SQL_Chain.py", line 1, in <module>
    from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
ImportError: cannot import name 'SQLDatabaseChain' from 'langchain' (/home/huangj/01_LangChain/langchain_env/lib/python3.8/site-packages/langchain/__init__.py)
```
### Suggestion:
_No response_ | Issue: ImportError: cannot import name 'SQLDatabaseChain' from 'langchain' | https://api.github.com/repos/langchain-ai/langchain/issues/8524/comments | 9 | 2023-07-31T16:53:56Z | 2024-01-26T19:04:33Z | https://github.com/langchain-ai/langchain/issues/8524 | 1,829,657,483 | 8,524 |
[
"hwchase17",
"langchain"
]
| ### Feature request
MMR Support for Vertex AI Matching Engine
### Motivation
The results of Matching Engine are not optimal
### Your contribution
MMR Support for Vertex AI Matching Engine
| MMR Support for Matching Engine | https://api.github.com/repos/langchain-ai/langchain/issues/8514/comments | 1 | 2023-07-31T13:08:29Z | 2023-11-06T16:05:43Z | https://github.com/langchain-ai/langchain/issues/8514 | 1,829,156,986 | 8,514 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
When I want to use OpenAI, I install it with the command "pip3 install openai", but I really want to use ChatGLM; when I run "pip3 install chartglm", it does not work. Please help answer this question.
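For context, ChatGLM is not installed this way; recent LangChain releases ship a `ChatGLM` LLM wrapper that talks to a locally served ChatGLM API endpoint. A hedged sketch (treat the endpoint URL and parameters as assumptions for your setup):
```python
from langchain.llms import ChatGLM

# Assumes a ChatGLM model is already being served locally over HTTP.
llm = ChatGLM(endpoint_url="http://127.0.0.1:8000", max_token=2048, top_p=0.7)
print(llm("Hello, who are you?"))
```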
### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/8512/comments | 4 | 2023-07-31T11:47:12Z | 2023-11-07T16:06:23Z | https://github.com/langchain-ai/langchain/issues/8512 | 1,829,012,261 | 8,512 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am trying to use `RedisChatMessageHistory` within an agent, but I'm encountering this error:

My redis url I am using looks like that:
`redis_url = "redis+sentinel://:password@host:port/service_name/db"`
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
message_history = RedisChatMessageHistory(url=redis_url, ttl=600, session_id="test_id")
message_history.add_user_message("hi!")
message_history.add_ai_message("whats up?")
memory = ConversationBufferWindowMemory(memory_key="memory", chat_memory=message_history, return_messages=True, k=15)
```
### Expected behavior
See on Redis the messages I added in my code. | Add memory to an Agent using RedisChatMessageHistory with sentinels throwing an error | https://api.github.com/repos/langchain-ai/langchain/issues/8511/comments | 3 | 2023-07-31T09:34:40Z | 2023-11-08T16:07:19Z | https://github.com/langchain-ai/langchain/issues/8511 | 1,828,789,616 | 8,511 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
So it's possible to parse "complex" objects with
```python
response_schemas = [
    ResponseSchema(name="date", description="The date of the event"),
    ResponseSchema(name="place", description="The place where the event will happen"),
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
```
and
```python
class Pair_of_numbers(BaseModel):
    L: int = Field(description="A number")
    R: int = Field(description="A number that ends in 2, 3 or 4")

    @validator("R")
    def question_ends_with_question_mark(cls, field):
        if int(str(field)[-1]) not in [2, 3, 4]:
            raise ValueError("No bro :(")
        return field


parser = PydanticOutputParser(pydantic_object=Pair_of_numbers)
```
But I can't seem to find information on how to parse a list of custom objects/dicts. There is the [list parser](https://python.langchain.com/docs/modules/model_io/output_parsers/comma_separated), but this is just for simple strings. It's possible there is a way to achieve this by wrapping a pydantic `BaseModel` in a python `List`, but so far I've had no luck 😫
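One approach that appears to work is wrapping the list in another Pydantic model, since `PydanticOutputParser` expects a single top-level `BaseModel`; a minimal sketch with illustrative field names:
```python
from typing import List
from pydantic import BaseModel, Field
from langchain.output_parsers import PydanticOutputParser

class Event(BaseModel):
    date: str = Field(description="The date of the event")
    place: str = Field(description="The place where the event will happen")

class EventList(BaseModel):
    events: List[Event] = Field(description="All events mentioned in the text")

parser = PydanticOutputParser(pydantic_object=EventList)
print(parser.get_format_instructions())  # instructions now describe a list of objects
```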
### Suggestion:
_No response_ | Issue: output_parsers to work with lists of custom objects? | https://api.github.com/repos/langchain-ai/langchain/issues/8510/comments | 3 | 2023-07-31T09:34:19Z | 2024-07-14T04:12:39Z | https://github.com/langchain-ai/langchain/issues/8510 | 1,828,789,032 | 8,510 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I did not find this page at https://python.langchain.com/docs/get_started/introduction.html
I'm just wondering, what's the motive?
### Idea or request for content:
_No response_ | DOC: why remove concepts.md from the latest document page | https://api.github.com/repos/langchain-ai/langchain/issues/8506/comments | 1 | 2023-07-31T07:18:44Z | 2023-11-06T16:05:58Z | https://github.com/langchain-ai/langchain/issues/8506 | 1,828,562,645 | 8,506 |
[
"hwchase17",
"langchain"
]
| ### System Info
`langchain/document_loaders/web_base.py` works for me only when I change:
```
return await response.text()
```
with:
```
body = await response.read()
return body.decode('utf-8', errors='ignore')
```
Otherwise, the code unfortunately produces an error:
```
/home/codespace/.python/current/bin/python3 /workspaces/b3rn_zero_ai/notebooks/ignite_vectorstore.py
Fetching pages: 13%|###8 | 33/256 [00:03<00:19, 11.18it/s]Traceback (most recent call last):
File "/workspaces/b3rn_zero_ai/notebooks/ignite_vectorstore.py", line 68, in <module>
documents = loader.load()
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/document_loaders/sitemap.py", line 142, in load
results = self.scrape_all([el["loc"].strip() for el in els if "loc" in el])
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/document_loaders/web_base.py", line 168, in scrape_all
results = asyncio.run(self.fetch_all(urls))
File "/home/codespace/.local/lib/python3.10/site-packages/nest_asyncio.py", line 35, in run
return loop.run_until_complete(task)
File "/home/codespace/.local/lib/python3.10/site-packages/nest_asyncio.py", line 90, in run_until_complete
return f.result()
File "/home/codespace/.python/current/lib/python3.10/asyncio/futures.py", line 201, in result
raise self._exception.with_traceback(self._exception_tb)
File "/home/codespace/.python/current/lib/python3.10/asyncio/tasks.py", line 232, in __step
result = coro.send(None)
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/document_loaders/web_base.py", line 148, in fetch_all
return await tqdm_asyncio.gather(
File "/home/codespace/.python/current/lib/python3.10/site-packages/tqdm/asyncio.py", line 79, in gather
res = [await f for f in cls.as_completed(ifs, loop=loop, timeout=timeout,
File "/home/codespace/.python/current/lib/python3.10/site-packages/tqdm/asyncio.py", line 79, in <listcomp>
res = [await f for f in cls.as_completed(ifs, loop=loop, timeout=timeout,
File "/home/codespace/.python/current/lib/python3.10/asyncio/tasks.py", line 571, in _wait_for_one
return f.result() # May raise f.exception().
File "/home/codespace/.python/current/lib/python3.10/asyncio/futures.py", line 201, in result
raise self._exception.with_traceback(self._exception_tb)
File "/home/codespace/.python/current/lib/python3.10/asyncio/tasks.py", line 234, in __step
result = coro.throw(exc)
File "/home/codespace/.python/current/lib/python3.10/site-packages/tqdm/asyncio.py", line 76, in wrap_awaitable
return i, await f
File "/home/codespace/.python/current/lib/python3.10/asyncio/futures.py", line 285, in __await__
yield self # This tells Task to wait for completion.
File "/home/codespace/.python/current/lib/python3.10/asyncio/tasks.py", line 304, in __wakeup
future.result()
File "/home/codespace/.python/current/lib/python3.10/asyncio/futures.py", line 201, in result
raise self._exception.with_traceback(self._exception_tb)
File "/home/codespace/.python/current/lib/python3.10/asyncio/tasks.py", line 232, in __step
result = coro.send(None)
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/document_loaders/web_base.py", line 136, in _fetch_with_rate_limit
return await self._fetch(url)
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/document_loaders/web_base.py", line 120, in _fetch
return await response.text()
File "/home/codespace/.python/current/lib/python3.10/site-packages/aiohttp/client_reqrep.py", line 1086, in text
return self._body.decode( # type: ignore[no-any-return,union-attr]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb5 in position 11: invalid start byte
Fetching pages: 15%|####4 | 38/256 [00:04<00:23, 9.25it/s]
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried to create embeddings from a website in French.
### Expected behavior
We need a solution for: `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb5 in position 11: invalid start byte`. | langchain/document_loaders/web_base.py | https://api.github.com/repos/langchain-ai/langchain/issues/8505/comments | 2 | 2023-07-31T07:02:32Z | 2023-11-06T16:06:03Z | https://github.com/langchain-ai/langchain/issues/8505 | 1,828,539,383 | 8,505 |
[
"hwchase17",
"langchain"
]
I have an operation manual PDF file about a website, and I want to use LangChain so that Dolly can respond to questions about the website.
Below is my code:
```python
from langchain.embeddings import HuggingFaceEmbeddings
from PyPDF2 import PdfReader
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import ElasticVectorSearch, Pinecone, Weaviate, FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import HuggingFacePipeline
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from langchain.prompts import PromptTemplate
import torch

hf_embed = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
root_dir = "/content/gdrive/My Drive/"

reader = PdfReader('/content/gdrive/My Drive/data/operation Manual.pdf')
raw_text = ''
for i, page in enumerate(reader.pages):
    text = page.extract_text()
    if text:
        raw_text += text

text_splitter = CharacterTextSplitter(
    separator="\n",
    chunk_size=1000,
    chunk_overlap=200,
    length_function=len,
)
texts = text_splitter.split_text(raw_text)
docsearch = FAISS.from_texts(texts, hf_embed)

model_name = "databricks/dolly-v2-3b"
instruct_pipeline = pipeline(model=model_name, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto",
                             return_full_text=True, max_new_tokens=256, top_p=0.95, top_k=50)
hf_pipe = HuggingFacePipeline(pipeline=instruct_pipeline)

prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

query = "I forgot my login password."
docs = docsearch.similarity_search(query)
chain = load_qa_chain(llm=hf_pipe, chain_type="stuff", prompt=PROMPT)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
```
I am referring to this article on the Databricks website: https://www.databricks.com/resources/demos/tutorials/data-science-and-ai/build-your-chat-bot-with-dolly?itm_data=demo_center
The output is not the answer found in the operation manual, and it takes a long time (about 2.5 hours) to produce an answer. Where did I go wrong? | How to use langchain to create my own databricks dolly chat robot | https://api.github.com/repos/langchain-ai/langchain/issues/8503/comments | 0 | 2023-07-31T06:08:19Z | 2023-07-31T06:14:31Z | https://github.com/langchain-ai/langchain/issues/8503 | 1,828,472,740 | 8,503 |
[
"hwchase17",
"langchain"
]
| ### System Info
**Langchain**:0.0.247
**Python**:3.10.5
**System**: macOS 13.4 arm64
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is a minimal reproduction demo:
```python
from langchain.chat_models import PromptLayerChatOpenAI
from dotenv import load_dotenv
load_dotenv()
import promptlayer
from langchain.schema import HumanMessage
promptlayer.api_key = "xxxxxxxxxxx"
llm = PromptLayerChatOpenAI(model="gpt-3.5-turbo", verbose=True, pl_tags=["test"])
print(llm([HumanMessage(content="I am a cat and I want")]))
```
I found this on the [**PromptLayer**](https://promptlayer.com/) dashboard:
<img width="978" alt="image" src="https://github.com/langchain-ai/langchain/assets/3949397/770f6c3f-ec8a-42df-967d-8e207b3208d4">
### Expected behavior
Why is the API key carried in this PromptLayer request? This is insecure; it should be removed. I then discovered the root of the problem.
The params are generated by these functions (**_generate** and **_agenerate**).
They inherit `_create_message_dicts` and `_client_params` from the **ChatOpenAI** model, and that is where I found the `api_key`:
https://github.com/langchain-ai/langchain/blob/08f5e6b8012f5eda2609103f33676199a3781a15/libs/langchain/langchain/chat_models/openai.py#L487
but the api_key is necessary for the ChatOpenAI model to get the result,
so it should be changed in these two lines:
https://github.com/langchain-ai/langchain/blob/08f5e6b8012f5eda2609103f33676199a3781a15/libs/langchain/langchain/chat_models/promptlayer_openai.py#L59
https://github.com/langchain-ai/langchain/blob/08f5e6b8012f5eda2609103f33676199a3781a15/libs/langchain/langchain/chat_models/promptlayer_openai.py#L98 | API Key Leakage in the PromptLayerChatOpenAI Model | https://api.github.com/repos/langchain-ai/langchain/issues/8499/comments | 1 | 2023-07-31T02:14:44Z | 2023-11-02T08:48:45Z | https://github.com/langchain-ai/langchain/issues/8499 | 1,828,243,007 | 8,499 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain 0.0.247, Python 3.11, Linux.
### Who can help?
@rlancemartin
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Described [here](https://python.langchain.com/docs/integrations/document_loaders/youtube_audio).
```python
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import OpenAIWhisperParser
from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader
# Two Karpathy lecture videos
urls = ["https://youtu.be/kCc8FmEb1nY", "https://youtu.be/VMj-3S1tku0"]
# Directory to save audio files
save_dir = "~/Downloads/YouTube"
# Transcribe the videos to text
loader = GenericLoader(YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParser())
docs = loader.load()
print(docs)
```
I get an empty list.
```log
[youtube] Extracting URL: https://youtu.be/kCc8FmEb1nY
[youtube] kCc8FmEb1nY: Downloading webpage
[youtube] kCc8FmEb1nY: Downloading ios player API JSON
[youtube] kCc8FmEb1nY: Downloading android player API JSON
[youtube] kCc8FmEb1nY: Downloading m3u8 information
[info] kCc8FmEb1nY: Downloading 1 format(s): 140
[download] Destination: /home/dm/Downloads/YouTube/Let's build GPT: from scratch, in code, spelled out..m4a
[download] 100% of 107.73MiB in 00:00:11 at 9.19MiB/s
[FixupM4a] Correcting container of "/home/dm/Downloads/YouTube/Let's build GPT: from scratch, in code, spelled out..m4a"
[ExtractAudio] Not converting audio /home/dm/Downloads/YouTube/Let's build GPT: from scratch, in code, spelled out..m4a; file is already in target format m4a
[youtube] Extracting URL: https://youtu.be/VMj-3S1tku0
[youtube] VMj-3S1tku0: Downloading webpage
[youtube] VMj-3S1tku0: Downloading ios player API JSON
[youtube] VMj-3S1tku0: Downloading android player API JSON
[youtube] VMj-3S1tku0: Downloading m3u8 information
[info] VMj-3S1tku0: Downloading 1 format(s): 140
[download] Destination: /home/dm/Downloads/YouTube/The spelled-out intro to neural networks and backpropagation: building micrograd.m4a
[download] 100% of 135.08MiB in 00:00:13 at 9.65MiB/s
[FixupM4a] Correcting container of "/home/dm/Downloads/YouTube/The spelled-out intro to neural networks and backpropagation: building micrograd.m4a"
[ExtractAudio] Not converting audio /home/dm/Downloads/YouTube/The spelled-out intro to neural networks and backpropagation: building micrograd.m4a; file is already in target format m4a
[]
```
### Expected behavior
A non-empty list of documents is expected. | Loading documents from a YouTube url doesn't work. | https://api.github.com/repos/langchain-ai/langchain/issues/8498/comments | 2 | 2023-07-30T20:47:14Z | 2023-07-31T06:48:09Z | https://github.com/langchain-ai/langchain/issues/8498 | 1,828,082,841 | 8,498 |
[
"hwchase17",
"langchain"
]
| ### System Info
python 3.11
### Who can help?
@rlancemartin
File `langchain/document_loaders/async_html.py:136` --> `results = asyncio.run(self.fetch_all(self.web_paths))`
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.retrievers.web_research import WebResearchRetriever
import pinecone
import os
from langchain.vectorstores import Pinecone
from langchain.utilities import GoogleSearchAPIWrapper
from common_util.llms import LLM_FACT, EMBEDDINGS
from dotenv import load_dotenv
from common_util.namespaceEnum import PineconeNamespaceEnum

load_dotenv()
index_name = "langchain-demo"
pinecone.init(api_key=os.getenv("PINECONE_API_KEY"), environment=os.getenv("PINECONE_ENV"))

# Vectorstore
vectorstore = Pinecone.from_existing_index(index_name, EMBEDDINGS, namespace=PineconeNamespaceEnum.WEB_SEARCH.value)

# LLM
llm = LLM_FACT

# Search
os.environ["GOOGLE_CSE_ID"] = os.getenv("GOOGLE_CSE_ID")
os.environ["GOOGLE_API_KEY"] = os.getenv("GOOGLE_API_KEY")
search = GoogleSearchAPIWrapper()

web_research_retriever = WebResearchRetriever.from_llm(
    vectorstore=vectorstore,
    llm=llm,
    search=search,
)

from langchain.chains import RetrievalQAWithSourcesChain

user_input = "How do LLM Powered Autonomous Agents work?"
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llm, retriever=web_research_retriever)
result = qa_chain({"question": user_input})
result
```
### Expected behavior
should search the web async | RuntimeError: asyncio.run() cannot be called from a running event loop | https://api.github.com/repos/langchain-ai/langchain/issues/8494/comments | 6 | 2023-07-30T19:10:58Z | 2023-12-13T16:07:53Z | https://github.com/langchain-ai/langchain/issues/8494 | 1,828,038,702 | 8,494 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am getting "Agent stopped due to iteration limit or time limit" as the error, even though my `max_iterations` is set to 15.
I need some particular output from the model.
Following is my code:
```python
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent,
                                                    tools=tools,
                                                    verbose=True, memory=memory, max_iterations=15)
```
### Suggestion:
_No response_ | Agent stopped due to iteration limit or time limit | https://api.github.com/repos/langchain-ai/langchain/issues/8493/comments | 12 | 2023-07-30T17:37:50Z | 2024-05-28T10:38:44Z | https://github.com/langchain-ai/langchain/issues/8493 | 1,828,012,369 | 8,493 |
[
"hwchase17",
"langchain"
]
| ### System Info
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
google-colab 1.0.0 requires requests==2.27.1, but you have requests 2.31.0 which is incompatible.
Successfully installed backoff-2.2.1 chroma-hnswlib-0.7.1 chromadb-0.4.3 coloredlogs-15.0.1 dataclasses-json-0.5.13 fastapi-0.99.1 h11-0.14.0 httptools-0.6.0 humanfriendly-10.0 langchain-0.0.247 langsmith-0.0.15 marshmallow-3.20.1 monotonic-1.6 mypy-extensions-1.0.0 onnxruntime-1.15.1 openai-0.27.8 openapi-schema-pydantic-1.2.4 overrides-7.3.1 posthog-3.0.1 pulsar-client-3.2.0 pypika-0.48.9 python-dotenv-1.0.0 requests-2.31.0 starlette-0.27.0 tokenizers-0.13.3 typing-inspect-0.9.0 uvicorn-0.23.1 uvloop-0.17.0 watchfiles-0.19.0 websockets-11.0.3
### Who can help?
@agola11
Trying to run Web Research Retriever code available in [Langchain docs](https://python.langchain.com/docs/modules/data_connection/retrievers/web_research)
in free Google Colab,
after running code block:
```
user_input = "How do LLM Powered Autonomous Agents work?"
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llm,retriever=web_research_retriever)
result = qa_chain({"question": user_input})
result
```
I got the following info log and error:
```
INFO:langchain.retrievers.web_research:Generating questions for Google Search ...
INFO:langchain.retrievers.web_research:Questions for Google Search (raw): {'question': 'What is Task Decomposition in LLM Powered Autonomous Agents?', 'text': LineList(lines=['1. How do LLM powered autonomous agents utilize task decomposition?\n', '2. Can you explain the concept of task decomposition in LLM powered autonomous agents?\n'])}
INFO:langchain.retrievers.web_research:Questions for Google Search: ['1. How do LLM powered autonomous agents utilize task decomposition?\n', '2. Can you explain the concept of task decomposition in LLM powered autonomous agents?\n']
INFO:langchain.retrievers.web_research:Searching for relevat urls ...
INFO:langchain.retrievers.web_research:Searching for relevat urls ...
INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Agent System Overview In a LLM-powered autonomous agent system, ... Task decomposition can be done (1) by LLM with simple prompting like\xa0...'}]
INFO:langchain.retrievers.web_research:Searching for relevat urls ...
INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?" , (2)\xa0...'}]
INFO:langchain.retrievers.web_research:New URLs to load: ['https://lilianweng.github.io/posts/2023-06-23-agent/']
INFO:langchain.retrievers.web_research:Indexing new urls...
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-13-efc9ad25b93a>](https://localhost:8080/#) in <cell line: 5>()
3 logging.getLogger("langchain.retrievers.web_research").setLevel(logging.INFO)
4 user_input = "What is Task Decomposition in LLM Powered Autonomous Agents?"
----> 5 docs = web_research_retriever.get_relevant_documents(user_input)
4 frames
[/usr/local/lib/python3.10/dist-packages/langchain/schema/retriever.py](https://localhost:8080/#) in get_relevant_documents(self, query, callbacks, tags, metadata, **kwargs)
191 except Exception as e:
192 run_manager.on_retriever_error(e)
--> 193 raise e
194 else:
195 run_manager.on_retriever_end(
[/usr/local/lib/python3.10/dist-packages/langchain/schema/retriever.py](https://localhost:8080/#) in get_relevant_documents(self, query, callbacks, tags, metadata, **kwargs)
184 _kwargs = kwargs if self._expects_other_args else {}
185 if self._new_arg_supported:
--> 186 result = self._get_relevant_documents(
187 query, run_manager=run_manager, **_kwargs
188 )
[/usr/local/lib/python3.10/dist-packages/langchain/retrievers/web_research.py](https://localhost:8080/#) in _get_relevant_documents(self, query, run_manager)
202 html2text = Html2TextTransformer()
203 logger.info("Indexing new urls...")
--> 204 docs = loader.load()
205 docs = list(html2text.transform_documents(docs))
206 docs = self.text_splitter.split_documents(docs)
[/usr/local/lib/python3.10/dist-packages/langchain/document_loaders/async_html.py](https://localhost:8080/#) in load(self)
134 """Load text from the url(s) in web_path."""
135
--> 136 results = asyncio.run(self.fetch_all(self.web_paths))
137 docs = []
138 for i, text in enumerate(results):
[/usr/lib/python3.10/asyncio/runners.py](https://localhost:8080/#) in run(main, debug)
31 """
32 if events._get_running_loop() is not None:
---> 33 raise RuntimeError(
34 "asyncio.run() cannot be called from a running event loop")
35
RuntimeError: asyncio.run() cannot be called from a running event loop
```
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```python
! pip install langchain openai chromadb google-api-python-client

import os
import logging
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models.openai import ChatOpenAI
from langchain.retrievers.web_research import WebResearchRetriever
from langchain.utilities import GoogleSearchAPIWrapper

os.environ["OPENAI_API_KEY"] = "****"
os.environ["GOOGLE_CSE_ID"] = "*****"
os.environ["GOOGLE_API_KEY"] = "******"

# Vectorstore
vectorstore = Chroma(embedding_function=OpenAIEmbeddings(), persist_directory="./chroma_db_oai")

# LLM
llm = ChatOpenAI(temperature=0)

# Search
search = GoogleSearchAPIWrapper()

# Retriever (the original snippet omitted this construction; it follows the docs example)
web_research_retriever = WebResearchRetriever.from_llm(
    vectorstore=vectorstore, llm=llm, search=search
)

from langchain.chains import RetrievalQAWithSourcesChain

user_input = "How do LLM Powered Autonomous Agents work?"
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llm, retriever=web_research_retriever)
result = qa_chain({"question": user_input})
result
```
### Expected behavior
It works as shown in docs' page. | Web Research Retriever error, code run in free Colab | https://api.github.com/repos/langchain-ai/langchain/issues/8487/comments | 2 | 2023-07-30T09:28:34Z | 2023-07-30T14:30:15Z | https://github.com/langchain-ai/langchain/issues/8487 | 1,827,868,223 | 8,487 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain 0.0.242
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried to load the https://huggingface.co/TheBloke/StableBeluga2-70B-GGML model to work with LangChain's LlamaCpp:
```python
llm = LlamaCpp(model_path="./stablebeluga2-70b.ggmlv3.q4_0.bin", n_gpu_layers=n_gpu_layers,
               n_batch=n_batch, n_ctx=8192, input={"temperature": 0.01}, n_threads=8)
llm_chain = LLMChain(llm=llm, prompt=prompt)
```
I see that there is no support for passing the `n_gqa=8` parameter, which according to https://github.com/abetlen/llama-cpp-python should be used for 70B models.
The error I get is:
```
error loading model: llama.cpp: tensor 'layers.0.attention.wk.weight' has wrong shape; expected 8192 x 8192, got 8192 x 1024
```
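As a stopgap, the parameter can be passed when constructing `llama_cpp.Llama` directly, bypassing the LangChain wrapper; a sketch assuming llama-cpp-python's `n_gqa` option as documented in its README at the time:
```python
from llama_cpp import Llama

# n_gqa=8 is required for the 70B LLaMA-2 family per the llama-cpp-python docs.
llm = Llama(
    model_path="./stablebeluga2-70b.ggmlv3.q4_0.bin",
    n_gqa=8,
    n_ctx=4096,
    n_threads=8,
)
```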
### Expected behavior
The model should load successfully. | no support for loading 70B models via llamacpp | https://api.github.com/repos/langchain-ai/langchain/issues/8486/comments | 3 | 2023-07-30T09:12:42Z | 2023-11-07T16:06:33Z | https://github.com/langchain-ai/langchain/issues/8486 | 1,827,863,684 | 8,486 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I propose the addition of a foundational chain feature, namely, the IteratorChain to LangChain. Unlike a convenience function, this feature is intended to enhance the flexibility and usability of LangChain by allowing it to handle collections of inputs for any existing chain, propagating them sequentially or asynchronously.
```python
from langchain.chains import IteratorChain, LLMChain
llm_chain = LLMChain(...)
iterator_chain = IteratorChain(llm_chain)
inputs = [{"text": "Hello"}, {"text": "World"}]
outputs = iterator_chain.run(inputs)
```
In the current LangChain framework, when dealing with lists or collections of inputs, developers are required to manually loop through the input list and call the run method for each item. This approach not only results in more code but also creates challenges with LangChain's features like LangSmith logging, especially when lists are dynamically generated from previous chains.
The IteratorChain would encapsulate this looping, resulting in cleaner code and better integration with LangSmith logging and other LangChain features.
### Motivation
The proposed IteratorChain could address a current issue I'm having when dealing with nested lists of inputs. Let's consider the current SequentialChain setup below:
```python
SequentialChain(
    chains=[
        ChainA(input_variables=["inputA"], output_key="outputA"),
        TransformChain(input_variables=["outputA", "auxiliary_input"], output_variables=["outputB"], transform=process_A_to_B),
        TransformChain(input_variables=["outputB", "auxiliary_input"], output_variables=["outputC"], transform=refine_B_to_C),
        TransformChain(input_variables=["outputC", "auxiliary_input"], output_variables=["final_output"], transform=process_C_to_final),
    ],
    input_variables=["inputA", "auxiliary_input"],
    output_variables=["final_output"],
)
```
In this setup, process_A_to_B and refine_B_to_C are both creating lists of inputs for further chains. However, these lists of inputs are currently not being processed elegantly. We have to loop manually over the list and call the underlying chain for each item. This not only leads to cumbersome code, but also hinders proper interaction with the LangSmith logging feature.
```python
def process_A_to_B(params) -> List[Dict[str, Any]]:
    ...
    for item in items:
        chainB = SomeChain(...)
        output = chainB.run({"input": item, "auxiliary_input": auxiliary_input})
        ...
    return {"outputB": outputs}
```
```python
def refine_B_to_C(params) -> List[Dict[str, Any]]:
    ...
    for item in items:
        chainC = AnotherChain(...)
        output = chainC.run({"input": item, "auxiliary_input": auxiliary_input})
        ...
    return {"outputC": outputs}
```
The addition of an IteratorChain feature would address these issues. It will encapsulate the manual loop and make the list handling process more intuitive and efficient, ensuring proper integration with LangSmith logging and other LangChain features.
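To make the proposal concrete, here is a rough sketch of what `IteratorChain` could look like; the names and signatures are illustrative, not an existing LangChain API:
```python
from typing import Any, Dict, List
from langchain.chains.base import Chain

class IteratorChain(Chain):
    """Hypothetical chain that maps an inner chain over a list of inputs."""

    inner_chain: Chain
    input_key: str = "inputs"
    output_key: str = "outputs"

    @property
    def input_keys(self) -> List[str]:
        return [self.input_key]

    @property
    def output_keys(self) -> List[str]:
        return [self.output_key]

    def _call(self, inputs: Dict[str, Any], run_manager=None) -> Dict[str, Any]:
        # Propagate each item through the inner chain sequentially, so
        # callbacks (and LangSmith) see one child run per element.
        results = [self.inner_chain(item) for item in inputs[self.input_key]]
        return {self.output_key: results}
```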
### Your contribution
I'm prepared to contribute to the development of this feature, if decided that it is a good addition and is not already possible using existing capabilities. | IteratorChain | https://api.github.com/repos/langchain-ai/langchain/issues/8484/comments | 2 | 2023-07-30T07:43:44Z | 2023-11-05T16:04:59Z | https://github.com/langchain-ai/langchain/issues/8484 | 1,827,840,103 | 8,484 |
[
"hwchase17",
"langchain"
]
| ### System Info
I use this code:
```python
search = GoogleSearchAPIWrapper()
tool = Tool(
    name="Google Search",
    description="Search Google for recent results.",
    func=search.run,
)
tool.run("Obama's first name?")
```
```python
search = GoogleSearchAPIWrapper()
tools = [
    Tool(
        name="google-search",
        func=search.run,
        description="useful when you need to search google to answer questions about current events"
    )
]
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    max_iterations=6)
response = agent("What is the latest news about the Mars rover?")
print(response)
```
I get this error:
```
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/site-packages/googleapiclient/http.py:191, in _retry_request(http, num_retries, req_type, sleep, rand, uri, method, *args, **kwargs)
189 try:
190 exception = None
--> 191 resp, content = http.request(uri, method, *args, **kwargs)
192 # Retry on SSL errors and socket timeout errors.
193 except _ssl_SSLError as ssl_error:
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/site-packages/httplib2/__init__.py:1724, in Http.request(self, uri, method, body, headers, redirections, connection_type)
1722 content = b""
1723 else:
-> 1724 (response, content) = self._request(
1725 conn, authority, uri, request_uri, method, body, headers, redirections, cachekey,
1726 )
1727 except Exception as e:
1728 is_timeout = isinstance(e, socket.timeout)
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/site-packages/httplib2/__init__.py:1444, in Http._request(self, conn, host, absolute_uri, request_uri, method, body, headers, redirections, cachekey)
1441 if auth:
1442 auth.request(method, request_uri, headers, body)
-> 1444 (response, content) = self._conn_request(conn, request_uri, method, body, headers)
1446 if auth:
1447 if auth.response(response, body):
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/site-packages/httplib2/__init__.py:1396, in Http._conn_request(self, conn, request_uri, method, body, headers)
1394 pass
1395 try:
-> 1396 response = conn.getresponse()
1397 except (http.client.BadStatusLine, http.client.ResponseNotReady):
1398 # If we get a BadStatusLine on the first try then that means
1399 # the connection just went stale, so retry regardless of the
1400 # number of RETRIES set.
1401 if not seen_bad_status_line and i == 1:
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/http/client.py:1348, in HTTPConnection.getresponse(self)
1346 try:
1347 try:
-> 1348 response.begin()
1349 except ConnectionError:
1350 self.close()
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/http/client.py:316, in HTTPResponse.begin(self)
314 # read until we get a non-100 response
315 while True:
--> 316 version, status, reason = self._read_status()
317 if status != CONTINUE:
318 break
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/http/client.py:277, in HTTPResponse._read_status(self)
276 def _read_status(self):
--> 277 line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
278 if len(line) > _MAXLINE:
279 raise LineTooLong("status line")
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/socket.py:669, in SocketIO.readinto(self, b)
667 while True:
668 try:
--> 669 return self._sock.recv_into(b)
670 except timeout:
671 self._timeout_occurred = True
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/ssl.py:1241, in SSLSocket.recv_into(self, buffer, nbytes, flags)
1237 if flags != 0:
1238 raise ValueError(
1239 "non-zero flags not allowed in calls to recv_into() on %s" %
1240 self.__class__)
-> 1241 return self.read(nbytes, buffer)
1242 else:
1243 return super().recv_into(buffer, nbytes, flags)
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/ssl.py:1099, in SSLSocket.read(self, len, buffer)
1097 try:
1098 if buffer is not None:
-> 1099 return self._sslobj.read(len, buffer)
1100 else:
1101 return self._sslobj.read(len)
ConnectionResetError: [Errno 54] Connection reset by peer
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
search = GoogleSearchAPIWrapper()
tools = [
    Tool(
        name="google-search",
        func=search.run,
        description="useful when you need to search google to answer questions about current events"
    )
]
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    max_iterations=6)
response = agent("What is the latest news about the Mars rover?")
print(response)
```
### Expected behavior
I hope to learn the reason and how to prevent the issue. | ConnectionResetError: [Errno 54] Connection reset by peer | https://api.github.com/repos/langchain-ai/langchain/issues/8483/comments | 4 | 2023-07-30T07:07:22Z | 2023-11-09T16:08:15Z | https://github.com/langchain-ai/langchain/issues/8483 | 1,827,831,371 | 8,483 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.239
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Below are the results I got from 3 different Google Search API wrappers. The same thing happens with an Agent using the tool.
```python
from langchain.utilities import GoogleSerperAPIWrapper
search = GoogleSerperAPIWrapper()
search.run("What's the Tesla stock today?")
```
'266.44 +10.73 (4.20%)'
```python
from langchain.utilities import SerpAPIWrapper
search = SerpAPIWrapper()
print(search.run("What's the Tesla stock today?"))
```
Tesla, Inc. is an American multinational automotive and clean energy company headquartered in Austin, Texas. Tesla designs and manufactures electric vehicles, stationary battery energy storage devices from home to grid-scale, solar panels and solar roof tiles, and related products and services.
```python
from langchain.utilities import GoogleSearchAPIWrapper
search = GoogleSearchAPIWrapper()
search.run("What's the Tesla stock today?")
```
"In this week's video, I cover need-to-know news items related to Tesla (NASDAQ: TSLA) during the week of July 24. Today's video will focus on what Tesla\xa0... TSLA | Complete Tesla Inc. stock news by MarketWatch. View real-time stock prices ... Here's What One Survey Revealed About Their Perception Of Elon Musk. Tesla is accelerating the world's transition to sustainable energy with electric cars, solar and integrated renewable energy solutions for homes and\xa0... Tesla's mission is to accelerate the world's transition to sustainable energy. Today, Tesla builds not only all-electric vehicles but also infinitely\xa0... Get Tesla Inc (TSLA:NASDAQ) real-time stock quotes, news, price and financial ... Please contact cnbc support to provide details about what went wrong. Feb 2, 2023 ... Tesla stock soared 41% in January, its best month since October 2021, leaving investors breathless and wondering what to do next. TSLA: Get the latest Tesla stock price and detailed information including TSLA news, historical charts and ... What are analysts forecasts for Tesla stock? Aug 25, 2022 ... The question now is what do Tesla investors expect the stock to do after the split. Tesla (ticker: TSLA) stock on Thursday was trading at\xa0... Aug 25, 2022 ... The electric car company completed a 3-for-1 stock split after the closing bell Wednesday. So one share now costs a third of what it did a day\xa0... Jun 30, 2023 ... Doubling your money isn't easy, and doubling it in just six months is even more difficult, so investors now have to decide: Is It time to take\xa0..."
### Expected behavior
I expect the results from these 3 APIs to be similar. | 3 different google search API varies a lot | https://api.github.com/repos/langchain-ai/langchain/issues/8480/comments | 2 | 2023-07-30T06:00:36Z | 2023-11-13T16:07:20Z | https://github.com/langchain-ai/langchain/issues/8480 | 1,827,816,758 | 8,480 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
The INFO: Python Guide links in both https://docs.langchain.com/docs/components/prompts/prompt-template and https://docs.langchain.com/docs/components/prompts/example-selectors are broken (similar to #8105).
### Idea or request for content:
The pages have simply been moved from https://python.langchain.com/docs/modules/prompts/ to https://python.langchain.com/docs/modules/model_io/prompts/, so setting up corresponding redirects should fix it.
I can open up a PR with the corresponding redirects myself | DOC: Broken Links in Prompts Sub Categories Pages | https://api.github.com/repos/langchain-ai/langchain/issues/8477/comments | 0 | 2023-07-30T04:41:57Z | 2023-07-31T02:38:53Z | https://github.com/langchain-ai/langchain/issues/8477 | 1,827,802,229 | 8,477 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.247
python version: 3.11.0
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
You can reproduce this issue by following this link:
https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompts_pipelining
```
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.schema import HumanMessage, AIMessage, SystemMessage
prompt = SystemMessage(content="You are a nice pirate")
new_prompt = (
prompt
+ HumanMessage(content="hi")
+ AIMessage(content="what?")
+ "{input}"
)
```
`prompt + HumanMessage(content="hi")` triggers this error.
### Expected behavior
The + operand for 'SystemMessage' and 'HumanMessage' should be supported. | unsupported operand type(s) for +: 'SystemMessage' and 'HumanMessage' | https://api.github.com/repos/langchain-ai/langchain/issues/8472/comments | 5 | 2023-07-30T02:14:01Z | 2023-11-05T16:05:09Z | https://github.com/langchain-ai/langchain/issues/8472 | 1,827,763,902 | 8,472 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain-version: 0.0.246
AzureCognitiveSearchRetriever always returns the top 10 results, as opposed to the number specified for top_k.
<img width="1158" alt="image" src="https://github.com/langchain-ai/langchain/assets/19245478/cb0e7317-8eee-4d9b-b71c-95c217451b42">
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
retriever = AzureCognitiveSearchRetriever(api_version=api_version, top_k=3)
results = retriever.get_relevant_documents(chat_input.to_string())
print(retriever.top_k)
print(len(results))
### Expected behavior
The two printed values should be equal (3), but the retriever returns 10 documents regardless of top_k. | AzureCognitiveSearchRetriever Issue | https://api.github.com/repos/langchain-ai/langchain/issues/8469/comments | 1 | 2023-07-29T23:12:59Z | 2023-11-04T16:04:30Z | https://github.com/langchain-ai/langchain/issues/8469 | 1,827,714,420 | 8,469 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I just saw a novel error, apparently triggered by a failed OpenAI API call inside an asynchronous block, which leads to an asyncio.run() being invoked inside an already-running asyncio.run(). Error pasted below. Is this my (user) error? Or possibly a problem with the acompletion_with_retry() implementation?
```
2023-07-29 05:53:14,838 INFO message='OpenAI API response' path=https://api.openai.com/v1/chat/completions processing_ms=None request_id=None response_code=502
2023-07-29 05:53:14,838 INFO error_code=502 error_message='Bad gateway.' error_param=None error_type=cf_bad_gateway message='OpenAI API error received' stream_error=False
2023-07-29 05:53:14,839 WARNING Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} <CIMultiDictProxy('Date': 'Sat, 29 Jul 2023 05:53:14 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ee3120dab9f1084-ORD', 'alt-svc': 'h3=":443"; ma=86400')>.
2023-07-29 05:53:14,839 ERROR Error in on_retry: asyncio.run() cannot be called from a running event loop
/usr/local/python-modules/tenacity/__init__.py:338: RuntimeWarning: coroutine 'AsyncRunManager.on_retry' was never awaited
self.before_sleep(retry_state)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
```
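For what it's worth, here is a minimal standalone sketch (my guess at the mechanism, not LangChain's actual code) that reproduces the same RuntimeError:
```python
import asyncio

async def on_retry() -> None:
    """Stand-in for an async callback such as AsyncRunManager.on_retry."""

def before_sleep() -> None:
    # tenacity's before_sleep hook is synchronous; driving an async
    # callback with asyncio.run() while a loop is already running fails
    asyncio.run(on_retry())

async def main() -> None:
    before_sleep()

# RuntimeError: asyncio.run() cannot be called from a running event loop
asyncio.run(main())
```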
### Suggestion:
_No response_ | Issue: OpenAI Bad Gateway results in Error in on_retry: asyncio.run() cannot be called from a running event loop (coroutine 'AsyncRunManager.on_retry' was never awaited) inside openai.acompletion_with_retry | https://api.github.com/repos/langchain-ai/langchain/issues/8462/comments | 14 | 2023-07-29T17:11:33Z | 2023-09-25T09:44:18Z | https://github.com/langchain-ai/langchain/issues/8462 | 1,827,559,495 | 8,462 |
[
"hwchase17",
"langchain"
]
| ### System Info
I'm trying to follow the langchain tutorial, but I get this error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[11], line 1
----> 1 from langchain.document_loaders import TextLoader
2 print(langchain.__version__)
3 loader = TextLoader("Scribbles.txt")
File [~/Dropbox/Personal/islington_news/myenv/lib/python3.9/site-packages/langchain/__init__.py:6](https://file+.vscode-resource.vscode-cdn.net/Users/davidelks/Dropbox/Personal/~/Dropbox/Personal/islington_news/myenv/lib/python3.9/site-packages/langchain/__init__.py:6)
3 from importlib import metadata
4 from typing import Optional
----> 6 from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
7 from langchain.cache import BaseCache
8 from langchain.chains import (
9 ConversationChain,
10 LLMBashChain,
(...)
16 VectorDBQAWithSourcesChain,
17 )
File [~/Dropbox/Personal/islington_news/myenv/lib/python3.9/site-packages/langchain/agents/__init__.py:2](https://file+.vscode-resource.vscode-cdn.net/Users/davidelks/Dropbox/Personal/~/Dropbox/Personal/islington_news/myenv/lib/python3.9/site-packages/langchain/agents/__init__.py:2)
1 """Interface for agents."""
----> 2 from langchain.agents.agent import (
3 Agent,
4 AgentExecutor,
5 AgentOutputParser,
...
851 if not isinstance(cls, _GenericAlias):
--> 852 return issubclass(cls, self.__origin__)
853 return super().__subclasscheck__(cls)
TypeError: issubclass() arg 1 must be a class
```
I can't print the langchain version programmatically because the import itself fails. (pip reports 0.0.247.)
Running on MacOs Ventura. Python: 3.9.15
### Who can help?
@elksie5000
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
import langchain
from langchain.document_loaders import TextLoader
print(langchain.__version__)
loader = TextLoader("Scribbles.txt")
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])
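(Side note: this import-time `issubclass() arg 1 must be a class` error has been widely attributed to a pydantic/typing-inspect incompatibility on Python 3.9. Pinning the typing packages is a commonly suggested workaround; I have not verified it here:)
```
pip install typing-inspect==0.8.0 typing_extensions==4.5.0
```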
### Expected behavior
The index to be created. | TypeError: issubclass() arg 1 must be a class | https://api.github.com/repos/langchain-ai/langchain/issues/8458/comments | 2 | 2023-07-29T11:19:47Z | 2023-11-04T16:04:36Z | https://github.com/langchain-ai/langchain/issues/8458 | 1,827,446,765 | 8,458 |
[
"hwchase17",
"langchain"
]
| ### System Info
... % python --version
Python 3.11.4
... % pip show langchain | grep Version
Version: 0.0.247
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When following the langchain docs [here](https://python.langchain.com/docs/integrations/vectorstores/qdrant#qdrant-cloud), an error is thrown:
```py
qdrant = Qdrant.from_documents(
docs,
embeddings,
url,
prefer_grpc=True,
api_key=api_key,
collection_name="test",
)
```
error:
```
Traceback (most recent call last):
File "...myscript.py", line 29, in <module>
qdrant = Qdrant.from_documents(
^^^^^^^^^^^^^^^^^^^^^^
TypeError: VectorStore.from_documents() takes 3 positional arguments but 4 were given
```
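If it helps, a workaround that matches the base-class signature `from_documents(documents, embedding, **kwargs)` (assuming the extra positional argument is the culprit) is to pass `url` by keyword:
```py
qdrant = Qdrant.from_documents(
    docs,
    embeddings,
    url=url,  # keyword, not positional
    prefer_grpc=True,
    api_key=api_key,
    collection_name="test",
)
```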
Is it related to https://github.com/langchain-ai/langchain/pull/7910 ?
### Expected behavior
QDrant being initialized properly. | VectorStore.from_documents() takes 3 positional arguments but 4 were given | https://api.github.com/repos/langchain-ai/langchain/issues/8457/comments | 2 | 2023-07-29T10:53:33Z | 2023-07-30T06:26:23Z | https://github.com/langchain-ai/langchain/issues/8457 | 1,827,440,722 | 8,457 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I am behind a company firewall and need to be able to set a proxy for `SitemapLoader`.
### Motivation
I am behind a company firewall.
### Your contribution
Eg.:
```python
sitemap_loader = SitemapLoader(
web_path="https://langchain.readthedocs.io/sitemap.xml",
https_proxy="https://my.proxy.io/"
)
docs = sitemap_loader.load()
``` | SitemapLoader: set proxy | https://api.github.com/repos/langchain-ai/langchain/issues/8451/comments | 2 | 2023-07-29T06:02:55Z | 2024-04-15T16:41:40Z | https://github.com/langchain-ai/langchain/issues/8451 | 1,827,335,864 | 8,451 |
[
"hwchase17",
"langchain"
]
| ### System Info
```python
from langchain.llms.base import LLM
from langchain.llms import GooglePalm
```
This throws an error saying it requires `google.generativeai`; it used to work previously, so something changed. Is it documented anywhere?
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
pip install langchain
```
```python
from langchain.llms.base import LLM
from langchain.llms import GooglePalm
```
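If the missing module is the only problem, explicitly installing the optional dependency should restore the import (assuming it was previously being pulled in transitively):
```
pip install google-generativeai
```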
### Expected behavior
The imports should load without errors. | GooglePalm requires google.generativeai? | https://api.github.com/repos/langchain-ai/langchain/issues/8449/comments | 3 | 2023-07-29T04:55:22Z | 2023-11-15T16:06:58Z | https://github.com/langchain-ai/langchain/issues/8449 | 1,827,318,961 | 8,449 |
[
"hwchase17",
"langchain"
]
| ### System Info
```shell
$ langchain.__version__
'0.0.234'
$ uname -a
Linux codespaces-92388d 5.15.0-1042-azure #49-Ubuntu SMP Tue Jul 11 17:28:46 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
$ python
Python 3.10.8 (main, Jun 15 2023, 01:39:58) [GCC 9.4.0] on linux
```
### Who can help?
@hwchase17 @vowe
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The project address is: https://github.com/eunomia-bpf/trace-agent. When I uncomment the line [# args_schema=AnalyseInput,](https://github.com/eunomia-bpf/trace-agent/blob/a0c1ff74017e1e47a900c9371833eb2cca1705ef/iminder/tools.py#L38) in the file `trace-agent/iminder/tools.py` and then run the project with `python -m iminder pid`, the following error occurs:
```shell
$ python -m iminder 480
Traceback (most recent call last):
File "/usr/local/python/3.10.8/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/python/3.10.8/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/workspaces/trace-agent/iminder/__main__.py", line 11, in <module>
bot.run([f"Obtain the resource usage of the process whose pid is {pid} over a period of time, "
File "/workspaces/trace-agent/iminder/autogpt.py", line 61, in run
return self.agent.run(tasks)
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/experimental/autonomous_agents/autogpt/agent.py", line 93, in run
assistant_reply = self.chain.run(
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py", line 445, in run
return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py", line 243, in __call__
raise e
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py", line 237, in __call__
self._call(inputs, run_manager=run_manager)
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/llm.py", line 92, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/llm.py", line 101, in generate
prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/llm.py", line 135, in prep_prompts
prompt = self.prompt.format_prompt(**selected_inputs)
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/prompts/chat.py", line 155, in format_prompt
messages = self.format_messages(**kwargs)
File "/workspaces/trace-agent/iminder/prompt.py", line 130, in format_messages
misc_messages = self._format_misc_messages(**kwargs)
File "/workspaces/trace-agent/iminder/prompt.py", line 67, in _format_misc_messages
base_prompt = SystemMessage(content=self.construct_full_prompt(**kwargs))
File "/workspaces/trace-agent/iminder/prompt.py", line 58, in construct_full_prompt
full_prompt += f"\n\n{get_prompt(self.tools)}"
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py", line 184, in get_prompt
prompt_string = prompt_generator.generate_prompt_string()
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py", line 113, in generate_prompt_string
f"{self._generate_numbered_list(self.commands, item_type='command')}\n\n"
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py", line 84, in _generate_numbered_list
command_strings = [
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py", line 85, in <listcomp>
f"{i + 1}. {self._generate_command_string(item)}"
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py", line 50, in _generate_command_string
output += f", args json schema: {json.dumps(tool.args)}"
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/tools/base.py", line 418, in args
return self.args_schema.schema()["properties"]
File "pydantic/main.py", line 664, in pydantic.main.BaseModel.schema
File "pydantic/schema.py", line 188, in pydantic.schema.model_schema
File "pydantic/schema.py", line 582, in pydantic.schema.model_process_schema
File "pydantic/schema.py", line 623, in pydantic.schema.model_type_schema
File "pydantic/schema.py", line 249, in pydantic.schema.field_schema
File "pydantic/schema.py", line 217, in pydantic.schema.get_field_info_schema
File "pydantic/schema.py", line 992, in pydantic.schema.encode_default
File "pydantic/schema.py", line 991, in genexpr
File "pydantic/schema.py", line 996, in pydantic.schema.encode_default
File "pydantic/json.py", line 90, in pydantic.json.pydantic_encoder
TypeError: Object of type 'FieldInfo' is not JSON serializable
```
### Expected behavior
In [trace-agent/iminder/tools.py](https://github.com/eunomia-bpf/trace-agent/blob/main/iminder/tools.py), I have defined two custom tools: one is called `sample`, and the other is called `analyse_process`. Both tools have only one input parameter, but of different types. `sample` takes an integer as input, while `analyse_process` takes a string. Strangely, `sample` works as expected, but `analyse_process` does not. My expectation was that both of them would function correctly. | Object of type 'FieldInfo' is not JSON serializable | https://api.github.com/repos/langchain-ai/langchain/issues/8448/comments | 2 | 2023-07-29T04:47:30Z | 2023-07-29T08:50:24Z | https://github.com/langchain-ai/langchain/issues/8448 | 1,827,317,440 | 8,448 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python = 3.9
Langchain = 0.0.245
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tested the recent update, using a summarization pipeline as the memory's LLM:
```
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
from langchain.memory import ConversationSummaryBufferMemory
import torch
summarize_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
summarize_tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn", padding_side="left")
pipe_summary = pipeline("summarization", model=summarize_model, tokenizer=summarize_tokenizer) #, max_new_tokens=500, min_new_tokens=300
hf_summary = HuggingFacePipeline(pipeline=pipe_summary)
memory=ConversationSummaryBufferMemory(llm=hf_summary, max_token_limit=10)
```
Then I added chat history to the memory and inspected the memory afterwards:
```
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
memory.save_context({"input": "what's my name"}, {"output": "AJ"})
memory.load_memory_variables({})
```
It returned:
```
{'history': "System: The human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential. Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary. The human asks the AI: Why do you think artificial Intelligence is a Force for good? The AI: Because artificial intelligence will help human reach their potential.\nHuman: what's my name\nAI: AJ"}
```
This doesn't summarize the actual chat history, but returns generalized text: `The human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential. Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary. The human asks the AI: Why do you think artificial Intelligence is a Force for good? The AI: Because artificial intelligence will help human reach their potential.` (Notably, this text matches the few-shot example embedded in LangChain's default summarizer prompt, which suggests the bart-large-cnn pipeline is summarizing the prompt template itself rather than following its instructions.)
### Expected behavior
This passes the wrong prompt to the LLM downstream for question-answering. The expectation is a summary of the actual chat history. | Huggingface_Pipeline for summarization returning generalized response | https://api.github.com/repos/langchain-ai/langchain/issues/8444/comments | 6 | 2023-07-29T01:15:26Z | 2023-08-02T12:38:17Z | https://github.com/langchain-ai/langchain/issues/8444 | 1,827,254,867 | 8,444 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
## Description
The installation instructions in the Contributor docs are not working and should be updated. For example:
```bash
> poetry install -E all
Installing dependencies from lock file
Extra [all] is not specified.
```
```bash
> poetry install --with dev
Group(s) not found: dev (via --with)
```
### Idea or request for content:
_No response_ | DOC: Contributor docs are inaccurate | https://api.github.com/repos/langchain-ai/langchain/issues/8440/comments | 2 | 2023-07-28T23:26:09Z | 2023-07-31T18:01:18Z | https://github.com/langchain-ai/langchain/issues/8440 | 1,827,201,525 | 8,440 |
[
"hwchase17",
"langchain"
]
| null | Issue: Indexing and querying an XML file | https://api.github.com/repos/langchain-ai/langchain/issues/8436/comments | 0 | 2023-07-28T21:37:19Z | 2023-07-28T21:40:29Z | https://github.com/langchain-ai/langchain/issues/8436 | 1,827,113,761 | 8,436 |
[
"hwchase17",
"langchain"
]
| ### System Info
- MacOS 13.4.1 (c)
- Intel Core i9
#### Version
- Python 3.8.17
- Langchain 0.0.245
#### Context
I am trying to build a prompt that converts a latex string generated by an OCR algorithm into text describing that latex. When using the `FewShotPromptTemplate`, the curly brackets in the latex string are somehow interpreted as format placeholders (dict keys).
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
#### Code
```
from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.prompt import PromptTemplate
examples = [
{
"latex": """\sum_{i=1}^{n}""",
"doc": """taking sum from 1 to n"""
}
]
example_template = """
latex: {latex}
doc: {doc}
"""
prefix = """ Convert the latex
"""
suffix = """
User: {latex}
AI: """
example_prompt = PromptTemplate(input_variables=["latex", "doc"], template="Question: {latex}\n{doc}")
few_shot_prompt_template = FewShotPromptTemplate(
examples=examples,
example_prompt=example_prompt,
prefix=prefix,
suffix=suffix,
input_variables=["latex"],
example_separator="\n\n"
)
print(example_prompt.format(**examples[0]))
print(few_shot_prompt_template.format(latex="\frac{a}{b}"))
```
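A possible workaround (assuming the failure comes from the second `.format()` pass over the assembled few-shot template) is to escape literal braces in the example values so they survive that pass:
```
examples = [
    {
        # doubled braces become literal { } when the assembled
        # few-shot template is formatted a second time
        "latex": r"\sum_{{i=1}}^{{n}}",
        "doc": "taking sum from 1 to n"
    }
]
```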
### Expected behavior
#### Error
The PromptTemplate.format works fine, but the FewShotPromptTemplate fails.
```
---> 34 print(few_shot_prompt_template.format(latex="\frac{a}{b}"))
File [~/Library/Caches/pypoetry/virtualenvs/expression-engine-OXFJOYa8-py3.8/lib/python3.8/site-packages/langchain/prompts/few_shot.py:123](https://file+.vscode-resource.vscode-cdn.net/Users/LLM/~/Library/Caches/pypoetry/virtualenvs/expression-engine-OXFJOYa8-py3.8/lib/python3.8/site-packages/langchain/prompts/few_shot.py:123), in FewShotPromptTemplate.format(self, **kwargs)
120 template = self.example_separator.join([piece for piece in pieces if piece])
122 # Format the template with the input variables.
--> 123 return DEFAULT_FORMATTER_MAPPING[self.template_format](template, **kwargs)
File [/usr/local/opt/python](https://file+.vscode-resource.vscode-cdn.net/usr/local/opt/python)@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/string.py:163, in Formatter.format(self, format_string, *args, **kwargs)
162 def format(self, format_string, [/](https://file+.vscode-resource.vscode-cdn.net/), *args, **kwargs):
--> 163 return self.vformat(format_string, args, kwargs)
File [~/Library/Caches/pypoetry/virtualenvs/expression-engine-OXFJOYa8-py3.8/lib/python3.8/site-packages/langchain/utils/formatting.py:29](https://file+.vscode-resource.vscode-cdn.net/Users/LLM/~/Library/Caches/pypoetry/virtualenvs/expression-engine-OXFJOYa8-py3.8/lib/python3.8/site-packages/langchain/utils/formatting.py:29), in StrictFormatter.vformat(self, format_string, args, kwargs)
24 if len(args) > 0:
25 raise ValueError(
26 "No arguments should be provided, "
...
227 return args[key]
228 else:
--> 229 return kwargs[key]
KeyError: 'i=1'
``` | FewShotPromptTemplate example formating bug | https://api.github.com/repos/langchain-ai/langchain/issues/8433/comments | 2 | 2023-07-28T20:48:54Z | 2023-11-03T16:05:41Z | https://github.com/langchain-ai/langchain/issues/8433 | 1,827,071,328 | 8,433 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
OpenAI Cookbook
- https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
- `gpt-3.5-turbo-0301` set `tokens_per_message = 4` and `tokens_per_name = -1`
- `gpt-3.5-turbo-*` set `tokens_per_message = 3` and `tokens_per_name = 1`
- `gpt-3.5-turbo` redirect to `gpt-3.5-turbo-0613`
- `gpt-4` redirect to `gpt-4-0613`
```
if model in {
"gpt-3.5-turbo-0613",
"gpt-3.5-turbo-16k-0613",
"gpt-4-0314",
"gpt-4-32k-0314",
"gpt-4-0613",
"gpt-4-32k-0613",
}:
tokens_per_message = 3
tokens_per_name = 1
elif model == "gpt-3.5-turbo-0301":
tokens_per_message = 4 # every message follows <|start|>{role/name}\n{content}<|end|>\n
tokens_per_name = -1 # if there's a name, the role is omitted
elif "gpt-3.5-turbo" in model:
print("Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0613.")
return num_tokens_from_messages(messages, model="gpt-3.5-turbo-0613")
elif "gpt-4" in model:
print("Warning: gpt-4 may update over time. Returning num tokens assuming gpt-4-0613.")
return num_tokens_from_messages(messages, model="gpt-4-0613")
```
LangChain
- https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chat_models/openai.py#L541-L548
- `gpt-3.5-turbo` and `gpt-3.5-turbo-*` set `tokens_per_message = 4` and `tokens_per_name = -1`
```
if model.startswith("gpt-3.5-turbo"):
# every message follows <im_start>{role/name}\n{content}<im_end>\n
tokens_per_message = 4
# if there's a name, the role is omitted
tokens_per_name = -1
elif model.startswith("gpt-4"):
tokens_per_message = 3
tokens_per_name = 1
```
- https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chat_models/openai.py#L508-L515
- `gpt-3.5-turbo` redirect to `gpt-3.5-turbo-0301`
- `gpt-4` redirect to `gpt-4-0314`
```
if model == "gpt-3.5-turbo":
# gpt-3.5-turbo may change over time.
# Returning num tokens assuming gpt-3.5-turbo-0301.
model = "gpt-3.5-turbo-0301"
elif model == "gpt-4":
# gpt-4 may change over time.
# Returning num tokens assuming gpt-4-0314.
model = "gpt-4-0314"
```
### Suggestion:
Follow the OpenAI Cookbook
- `gpt-3.5-turbo-0301` set `tokens_per_message = 4` and `tokens_per_name = -1`
- `gpt-3.5-turbo-*` set `tokens_per_message = 3` and `tokens_per_name = 1`
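A minimal sketch of the corrected branching (mirroring the cookbook table above; not tested against the surrounding LangChain code):
```
if model == "gpt-3.5-turbo-0301":
    # every message follows <|start|>{role/name}\n{content}<|end|>\n
    tokens_per_message = 4
    # if there's a name, the role is omitted
    tokens_per_name = -1
elif model.startswith("gpt-3.5-turbo") or model.startswith("gpt-4"):
    tokens_per_message = 3
    tokens_per_name = 1
```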
| Issue: Azure/OpenAI get_num_tokens_from_messages returns wrong prompt tokens | https://api.github.com/repos/langchain-ai/langchain/issues/8430/comments | 1 | 2023-07-28T19:22:44Z | 2023-07-29T01:13:36Z | https://github.com/langchain-ai/langchain/issues/8430 | 1,826,981,943 | 8,430 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Expose text-bison as a chat model, since it might be useful for some applications.
### Motivation
Sometimes it might be interesting to compare text-bison vs chat-bison for chat scenarios.
### Your contribution
yes, I'm happy to do it. | Expose Vertex text model as chat model | https://api.github.com/repos/langchain-ai/langchain/issues/8427/comments | 2 | 2023-07-28T18:49:53Z | 2023-11-03T16:05:29Z | https://github.com/langchain-ai/langchain/issues/8427 | 1,826,935,451 | 8,427 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Is there any way to use a different model to generate AGENT THOUGHTS and the AGENT FINAL ANSWER?
For example: I want to use GPT-3.5 to generate thoughts and GPT-4 to generate the final answer.
### Suggestion:
_No response_ | Different Model to generate thought and answer from Agent. | https://api.github.com/repos/langchain-ai/langchain/issues/8421/comments | 1 | 2023-07-28T15:28:07Z | 2023-11-03T16:05:57Z | https://github.com/langchain-ai/langchain/issues/8421 | 1,826,632,580 | 8,421 |
[
"hwchase17",
"langchain"
]
| ### System Info
Firstly, sorry if I am posting this in the wrong place, but I felt it belongs here.
I am trying to use LlamaCpp for QA over txt documents, but I am getting the following error from Chromadb.
I couldn't find a way to solve this:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-13-9bd0ee703b23>](https://localhost:8080/#) in <cell line: 1>()
----> 1 db = Chroma.from_documents(texts, embeddings, persist_directory='db')
7 frames
[/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/chroma.py](https://localhost:8080/#) in from_documents(cls, documents, embedding, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
576 texts = [doc.page_content for doc in documents]
577 metadatas = [doc.metadata for doc in documents]
--> 578 return cls.from_texts(
579 texts=texts,
580 embedding=embedding,
[/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/chroma.py](https://localhost:8080/#) in from_texts(cls, texts, embedding, metadatas, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
540 **kwargs,
541 )
--> 542 chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
543 return chroma_collection
544
[/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/chroma.py](https://localhost:8080/#) in add_texts(self, texts, metadatas, ids, **kwargs)
173 embeddings = None
174 if self._embedding_function is not None:
--> 175 embeddings = self._embedding_function.embed_documents(list(texts))
176
177 if metadatas:
[/usr/local/lib/python3.10/dist-packages/langchain/embeddings/llamacpp.py](https://localhost:8080/#) in embed_documents(self, texts)
108 List of embeddings, one for each text.
109 """
--> 110 embeddings = [self.client.embed(text) for text in texts]
111 return [list(map(float, e)) for e in embeddings]
112
[/usr/local/lib/python3.10/dist-packages/langchain/embeddings/llamacpp.py](https://localhost:8080/#) in <listcomp>(.0)
108 List of embeddings, one for each text.
109 """
--> 110 embeddings = [self.client.embed(text) for text in texts]
111 return [list(map(float, e)) for e in embeddings]
112
[/usr/local/lib/python3.10/dist-packages/llama_cpp/llama.py](https://localhost:8080/#) in embed(self, input)
810 A list of embeddings
811 """
--> 812 return list(map(float, self.create_embedding(input)["data"][0]["embedding"]))
813
814 def _create_completion(
[/usr/local/lib/python3.10/dist-packages/llama_cpp/llama.py](https://localhost:8080/#) in create_embedding(self, input, model)
774 tokens = self.tokenize(input.encode("utf-8"))
775 self.reset()
--> 776 self.eval(tokens)
777 n_tokens = len(tokens)
778 total_tokens += n_tokens
[/usr/local/lib/python3.10/dist-packages/llama_cpp/llama.py](https://localhost:8080/#) in eval(self, tokens)
469 raise RuntimeError(f"llama_eval returned {return_code}")
470 # Save tokens
--> 471 self.input_ids[self.n_tokens : self.n_tokens + n_tokens] = batch
472 # Save logits
473 rows = n_tokens if self.params.logits_all else 1
ValueError: could not broadcast input array from shape (8,) into shape (0,)
```
Code:
```
#installation
!pip install langchain PyPDF2 huggingface_hub chromadb llama-cpp-python
#download the model
!git clone https://github.com/ggerganov/llama.cpp.git
%cd llama.cpp
!curl -L https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/resolve/main/llama-2-7b-chat.ggmlv3.q4_K_M.bin -o ./models/llama-2-7b-chat.ggmlv3.q4_K_M.bin
!LLAMA_METAL=1 make
```
```
from langchain.llms import LlamaCpp
from langchain.embeddings import LlamaCppEmbeddings
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know or you can't help, don't try to make up an answer.
{context}
Question: {question}
Answer:"""
prompt = PromptTemplate(template=prompt_template, input_variables=["context", "question"])
model_path = '/content/llama.cpp/models/llama-2-7b-chat.ggmlv3.q4_K_M.bin'
llm = LlamaCpp(model_path=model_path)
embeddings = LlamaCppEmbeddings(model_path=model_path)
llm_chain = LLMChain(llm=llm, prompt=prompt)
loader = TextLoader(txt_file_path)
docs = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_documents(docs)
db = Chroma.from_documents(texts, embeddings, persist_directory='db')
question = 'summerize this document'
similar_doc = db.similarity_search(question, k=1)
context = similar_doc[0].page_content
query_llm = LLMChain(llm=llm, prompt=prompt)
response = query_llm.run({"context": context, "question": question})
```
versions:
```
langchain==0.0.246
chromadb==0.4.3
```
Is there any alternative way to achieve what I want?
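One alternative I can think of (a sketch; it sidesteps `LlamaCppEmbeddings` entirely, assuming the crash is in its embedding path) is to keep LlamaCpp for generation but embed with a sentence-transformers model:
```
from langchain.embeddings import HuggingFaceEmbeddings

# requires: pip install sentence-transformers
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = Chroma.from_documents(texts, embeddings, persist_directory='db')
```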
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
- Upload a text file
- run all the code to download the model
- replace model_path with downloaded model
- run the rest
### Expected behavior
A working QA pipeline over the text file. | langchain with LlamaCpp for QA from txt files fails on Chroma part | https://api.github.com/repos/langchain-ai/langchain/issues/8420/comments | 4 | 2023-07-28T14:45:53Z | 2023-10-13T12:13:43Z | https://github.com/langchain-ai/langchain/issues/8420 | 1,826,557,242 | 8,420 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm trying to load some documents, PowerPoints and text files to train my custom LLM using LangChain.
When I run it, I get a weird error message telling me I don't have the "tokenizers" and "taggers" packages (folders).
I've read the docs, asked the LangChain chatbot, pip installed nltk, uninstalled it, pip installed nltk without dependencies, and added the data with nltk.download(), nltk.download("punkt"), nltk.download("all"), etc. I also manually set the path: nltk.data.path = ['C:\Users\zaesa\AppData\Roaming\nltk_data'] and added all the folders, including the tokenizers and taggers folders from the GitHub repo: https://github.com/nltk/nltk_data/tree/gh-pages/packages. Everything. I also asked on the GitHub repo. Nothing, no success.
Here is the code of the file I try to run:
```python
from nltk.tokenize import sent_tokenize
from langchain.document_loaders import UnstructuredPowerPointLoader, TextLoader, UnstructuredWordDocumentLoader
from dotenv import load_dotenv, find_dotenv
import os
import openai
import sys
import nltk
nltk.data.path = ['C:\\Users\\zaesa\\AppData\\Roaming\\nltk_data']
nltk.download(
'punkt', download_dir='C:\\Users\\zaesa\\AppData\\Roaming\\nltk_data')
sys.path.append('../..')
_ = load_dotenv(find_dotenv()) # read local .env file
openai.api_key = os.environ['OPENAI_API_KEY']
folder_path_docx = "DB\\ DB VARIADO\\DOCS"
folder_path_txt = "DB\\BLOG-POSTS"
folder_path_pptx_1 = "DB\\PPT DAY JUNIO"
folder_path_pptx_2 = "DB\\DB VARIADO\\PPTX"
loaded_content = []
for file in os.listdir(folder_path_docx):
if file.endswith(".docx"):
file_path = os.path.join(folder_path_docx, file)
loader = UnstructuredWordDocumentLoader(file_path)
docx = loader.load()
loaded_content.extend(docx)
for file in os.listdir(folder_path_txt):
if file.endswith(".txt"):
file_path = os.path.join(folder_path_txt, file)
loader = TextLoader(file_path, encoding='utf-8')
text = loader.load()
loaded_content.extend(text)
for file in os.listdir(folder_path_pptx_1):
if file.endswith(".pptx"):
file_path = os.path.join(folder_path_pptx_1, file)
loader = UnstructuredPowerPointLoader(file_path)
slides_1 = loader.load()
loaded_content.extend(slides_1)
for file in os.listdir(folder_path_pptx_2):
if file.endswith(".pptx"):
file_path = os.path.join(folder_path_pptx_2, file)
loader = UnstructuredPowerPointLoader(file_path)
slides_2 = loader.load()
loaded_content.extend(slides_2)
print(loaded_content[0].page_content)
print(nltk.data.path)
installed_packages = nltk.downloader.Downloader(
download_dir='C:\\Users\\zaesa\\AppData\\Roaming\\nltk_data').packages()
print(installed_packages)
sent_tokenize("Hello. How are you? I'm well.")
```
When running the file I get:
```
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading taggers: Package 'taggers' not found in
[nltk_data] index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading taggers: Package 'taggers' not found in
[nltk_data] index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading taggers: Package 'taggers' not found in
[nltk_data] index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading taggers: Package 'taggers' not found in
[nltk_data] index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading taggers: Package 'taggers' not found in
[nltk_data] index
- HERE SOME TEXT -
['C:\\Users\\zaesa\\AppData\\Roaming\\nltk_data']
dict_values([<Package perluniprops>, <Package mwa_ppdb>, <Package punkt>, <Package rslp>, <Package porter_test>, <Package snowball_data>, <Package maxent_ne_chunker>, <Package moses_sample>, <Package bllip_wsj_no_aux>, <Package word2vec_sample>, <Package wmt15_eval>, <Package spanish_grammars>, <Package sample_grammars>, <Package large_grammars>, <Package book_grammars>, <Package basque_grammars>, <Package maxent_treebank_pos_tagger>, <Package averaged_perceptron_tagger>, <Package averaged_perceptron_tagger_ru>, <Package universal_tagset>, <Package vader_lexicon>, <Package lin_thesaurus>, <Package movie_reviews>, <Package problem_reports>, <Package pros_cons>, <Package masc_tagged>, <Package sentence_polarity>, <Package webtext>, <Package nps_chat>, <Package city_database>, <Package europarl_raw>, <Package biocreative_ppi>, <Package verbnet3>, <Package pe08>, <Package pil>, <Package crubadan>, <Package gutenberg>, <Package propbank>, <Package machado>, <Package state_union>, <Package twitter_samples>, <Package semcor>, <Package wordnet31>, <Package extended_omw>, <Package names>, <Package ptb>, <Package nombank.1.0>, <Package floresta>, <Package comtrans>, <Package knbc>, <Package mac_morpho>, <Package swadesh>, <Package rte>, <Package toolbox>, <Package jeita>, <Package product_reviews_1>, <Package omw>, <Package wordnet2022>, <Package sentiwordnet>, <Package product_reviews_2>, <Package abc>, <Package wordnet2021>, <Package udhr2>, <Package senseval>, <Package words>, <Package framenet_v15>, <Package unicode_samples>, <Package kimmo>, <Package framenet_v17>, <Package chat80>, <Package qc>, <Package inaugural>, <Package wordnet>, <Package stopwords>, <Package verbnet>, <Package shakespeare>, <Package ycoe>, <Package ieer>, <Package cess_cat>, <Package switchboard>, <Package comparative_sentences>, <Package subjectivity>, <Package udhr>, <Package pl196x>, <Package paradigms>, <Package gazetteers>, <Package timit>, <Package treebank>, <Package sinica_treebank>, <Package opinion_lexicon>, <Package ppattach>, <Package dependency_treebank>, <Package reuters>, <Package genesis>, <Package cess_esp>, <Package conll2007>, <Package nonbreaking_prefixes>, <Package dolch>, <Package smultron>, <Package alpino>, <Package wordnet_ic>, <Package brown>, <Package bcp47>, <Package panlex_swadesh>, <Package conll2000>, <Package universal_treebanks_v20>, <Package brown_tei>, <Package cmudict>, <Package omw-1.4>, <Package mte_teip5>, <Package indian>, <Package conll2002>, <Package tagsets>])
```
And here is what my nltk_data folder structure looks like:
<img width="841" alt="nltk-screenshot" src="https://github.com/langchain-ai/langchain/assets/29057173/c094ec97-6e9b-4a3c-83ae-afbe08af3380">
<img width="842" alt="taggers-screenshot" src="https://github.com/langchain-ai/langchain/assets/29057173/e943a0cc-6897-4a59-9d23-0c7e5c080f37">
<img width="835" alt="tokeenizers-screenshot" src="https://github.com/langchain-ai/langchain/assets/29057173/184d69f3-d8a9-42e1-a792-a78044f54076">
<img width="838" alt="punkt-screenshot" src="https://github.com/langchain-ai/langchain/assets/29057173/e1158619-1fdc-4fd5-b014-53ddc802e9c4">
### Suggestion:
I have freshly installed nltk with no dependencies; the version is the latest. The NLTK support team doesn't know what is wrong; everything seems fine on their side. So it has to be a bug or something coming from LangChain that I'm not able to see. I'd really appreciate any help. I need to make this work! Thank you. | Working with Langchain I get nlkt errors telling me: Package "tokenizers" not found in index and Packaage "taggers" not found in index | https://api.github.com/repos/langchain-ai/langchain/issues/8419/comments | 8 | 2023-07-28T12:23:04Z | 2023-11-03T16:06:43Z | https://github.com/langchain-ai/langchain/issues/8419 | 1,826,332,861 | 8,419 |
[
"hwchase17",
"langchain"
]
| I'd like to incorporate this `system_message` each time I call `qa.run(prompt)`.
How is this possible? Can someone help?
Here is the code I wrote to initialize the LLM and the RetrievalQA:
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.prompts import ChatPromptTemplate
from langchain.chains import RetrievalQA, LLMChain
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
llm = ChatOpenAI(
openai_api_key=OPENAI_API_KEY,
model_name = 'gpt-3.5-turbo',
temperature=0.0
)
system_message = [SystemMessage(
content='You are a Virtual Vet. '
'You should help clients with their concerns about their pets and provide helpful solutions.'
'You can ask questions to help you understand and diagnose the problem.'
'You should only talk within the context of problem.'
'If you are unsure of how to help, you can suggest the client to go to the nearest clink of their place.'
'You should talk on German, unless the client talks in English.')]
conversational_memory = ConversationBufferWindowMemory(
memory_key='chat_history',
k=2,
return_messages=True
)
qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type='stuff',
retriever=vectorstore.as_retriever(),
)
```
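One approach that appears to work (a sketch, not necessarily the only way; the template wording is just an illustration) is to fold the system instructions into a custom QA prompt and hand it to the chain via `chain_type_kwargs`:
```python
from langchain.prompts import PromptTemplate

# the "stuff" QA chain expects {context} and {question} variables
qa_template = (
    "You are a Virtual Vet. Help clients with concerns about their pets, "
    "ask clarifying questions, and answer in German unless the client "
    "writes in English.\n\n"
    "{context}\n\n"
    "Question: {question}\n"
    "Helpful Answer:"
)

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type='stuff',
    retriever=vectorstore.as_retriever(),
    chain_type_kwargs={"prompt": PromptTemplate.from_template(qa_template)},
)
```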
| Question: How can I include SystemMessage with RetrievalQA | https://api.github.com/repos/langchain-ai/langchain/issues/8418/comments | 5 | 2023-07-28T12:05:32Z | 2023-08-01T06:22:50Z | https://github.com/langchain-ai/langchain/issues/8418 | 1,826,310,026 | 8,418 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.245
gptcache==0.1.37
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import langchain
from langchain.llms import OpenAI
from gptcache import Cache
from gptcache.adapter.api import init_similar_cache
from langchain.cache import GPTCache
# In the first llm predict call the cache is not initialized and always returns None
def init_gptcache(cache_obj: Cache, llm_str: str):
init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm_str}")
langchain.llm_cache = GPTCache(init_gptcache)
llm = OpenAI(model_name="text-davinci-002", temperature=0.2)
llm.predict("tell me a joke")
```
### Expected behavior
It should first create the gptcache object and check the cache. This becomes a problem if I use external dbs for gptcache for my langchain app.
Please assign this issue to me. I am willing to contribute on this one. | GPTCache object should be created during or before the first lookup | https://api.github.com/repos/langchain-ai/langchain/issues/8415/comments | 1 | 2023-07-28T10:24:52Z | 2023-08-02T15:17:50Z | https://github.com/langchain-ai/langchain/issues/8415 | 1,826,173,560 | 8,415 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
My chatbot is fairly straightforward. It uses MosaicMLInstructorEmbeddings() as the embedding and MosaicML() as the model. I have a file that I want to retrieve from. When outputting an answer, for instance:
What is AliBi?
It will give a response like:
Answer: LiBi (Attention with Linear Biases) dispenses with position embeddings for tokens in transformer-based NLP models, instead encoding position information by biasing the query-key attention scores proportionally to each token pair’s distance. ALiBi yields excellent extrapolation to unseen sequence lengths compared to other position embedding schemes. We leverage this extrapolation capability by training with shorter sequence lengths, which reduces the memory and computation load.
Or when I ask it about sharded checkpointing:
Answer: arded checkpointing is a feature in distributed systems that allows for the checkpointing of the state of the system to be divided into multiple shards. This is useful for systems that have a large amount of data or compute to perform.
I have noticed that the chatbot fairly consistently drops the first one or two characters of the answer, and I'm wondering if this is a bug in LangChain, MosaicMLInstructorEmbeddings, or MosaicML.
Here is my chatbot file:
https://github.com/KuuCi/examples/blob/support-bot/examples/end-to-end-examples/support_chatbot/chatbot.py
### Suggestion:
_No response_ | Issue: LangChain is fairly consistently dropping the first one or two characters of the chain answer. | https://api.github.com/repos/langchain-ai/langchain/issues/8413/comments | 7 | 2023-07-28T08:53:41Z | 2023-07-31T22:00:04Z | https://github.com/langchain-ai/langchain/issues/8413 | 1,826,004,866 | 8,413 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have added the tools named A_tool, default_tool to a `ZeroShotAgent`.
The output from one of the executions looks like:
```
Thought: We need to use the A_tool.
Action: Use A_tool
Observation: Use A_tool is not a valid tool, try another one.
Thought:We need to use the default_tool since A_tool is not a valid tool.
Action: Use default_tool
Action Input: None
Observation: Use default_tool is not a valid tool, try another one.
```
I am not sure why the prefix `Use ` is getting added to the Action; ideally the action should be just the name of the tool, one of [`A_tool`, `default_tool`].
This is causing InvalidTool to be invoked again and again.
What can I do to fix this issue?
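In case it's useful, one mitigation sketch (assuming the model is paraphrasing the tool name) is to tighten the format instructions so `Action:` must be the bare tool name, e.g. via `agent_kwargs` when building the agent:
```python
from langchain.agents import initialize_agent, AgentType

# {tool_names} is filled in by the agent with the registered tool names
FORMAT_INSTRUCTIONS = """Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, exactly one of [{tool_names}] (the bare tool name, no extra words like "Use")
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question"""

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    agent_kwargs={"format_instructions": FORMAT_INSTRUCTIONS},
    verbose=True,
)
```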
### Suggestion:
_No response_ | Issue: prefix `Use ` being added to agent action causing InvalidTool to be invoked again and again | https://api.github.com/repos/langchain-ai/langchain/issues/8407/comments | 7 | 2023-07-28T06:15:22Z | 2024-03-30T16:04:56Z | https://github.com/langchain-ai/langchain/issues/8407 | 1,825,780,100 | 8,407 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Currently, I'm using the SequentialChain class to combine two steps in my workflow.
Step 1: I use the LLM, through a prompt, to identify the intent of the question posed by the user.
Step 2: I use the CSV-based agent to answer the question based on the csv file, but my aim is to answer the question only if the intent of the question is a textual response.
Below are the code snippets I have used to create the SequentialChain:
```python
model = AzureOpenAI(temperature=0,deployment_name="",openai_api_key="",openai_api_version="",openai_api_base="",
)
```
`template = """
You will help me identify the intent with the following examples and instructions.
Give your response in this format {{"Intent":"<identified intent>",
"Question":"<Input question>"}}
### Instructions
# Different possible intents are textResponse, BarChart, LineChart.
# If the question doesn't come under any of the intents, identify it as a None intent.
####
### Examples
Question: What is the total count of stores in 2022?
Intent: textResponse
Question: What is the split of sale amount for each sale type?
Intent: BarChart
Question: What is the monthly trend of sales amount in 2022?
Intent: LineChart
Question: {input}
"""`
```python
prompt = PromptTemplate(
    input_variables=["input"],
    template=template,
)

chain_one = LLMChain(llm=model, prompt=prompt, output_key = "intent")
```
```python
agent = create_csv_agent(
    AzureOpenAI(temperature=0.5,top_p = 0.5,deployment_name="",openai_api_key="",openai_api_version="",openai_api_base="",),
    <csv file path>,
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True
)
```
```python
agent.agent.llm_chain.prompt.template = """
You are a friendly assistant who is warm and thoughtful and never rude or curt.
Possible Intents are: textResponse, LineChart, BarChart, None
You will act further only if the {intent} is textResponse, else your Final Answer will be I cannot respond to your query.
If {intent} is textResponse use the python_repl_ast to answer the question.
You should use the tools below to answer the question posed of you:
python_repl_ast: A Python shell. Use this to execute python commands.
You should use the python_repl_ast to answer the question posed of you. You are working with a pandas dataframe in Python.
The name of the dataframe is `df`.
Input to python_repl_ast should be a valid python command.
Give your Final Answer in this format {{"output":"Final Answer"}}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do.
Action: the action to take, should be one of [python_repl_ast]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question.
This is the result of `print(df.head())`:{df_head}
Begin!
Question: {input}
{agent_scratchpad}
"""

input_var = agent.agent.llm_chain.prompt.input_variables
input_var.append("intent")
```
This was done to append my input variable to the pre-defined ones for the csv agent.
```python
agent.agent.llm_chain.output_key = "FinalAnswer"
chain_two = agent

overall_chain = SequentialChain(chains=[chain_one, chain_two],input_variables=["input"],
output_variables=["intent","FinalAnswer"],verbose=True)

overall_chain.run(input = "count of total stores in 2022")
```
Now, when I run the above code I get the following error:
**validation error for SequentialChain __root__ Expected output variables that were not found: {'FinalAnswer'}. (type=value_error)**
As far as I understood the langchain documentation (https://python.langchain.com/docs/modules/chains/foundational/sequential_chains),
the output_key must be defined for each LLM call so the model's response is tagged with that key; hence I have provided the output key to the agent through the llm_chain.output_key property. But the code still throws an error that the output variables were not found.
Is this a bug in langchain when binding csv agents to the SequentialChain class, or am I missing something? Can someone please help?
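For what it's worth, my reading (which may be wrong) is that the AgentExecutor exposes its result under the fixed key `output`: its `output_keys` come from the agent's `return_values`, not from the inner `llm_chain.output_key`, so SequentialChain never sees a `FinalAnswer` key. Declaring the executor's actual key should satisfy validation:
```python
overall_chain = SequentialChain(
    chains=[chain_one, chain_two],
    input_variables=["input"],
    output_variables=["intent", "output"],  # "output" is the AgentExecutor's key
    verbose=True,
)
```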
### Suggestion:
_No response_ | Issue: Unable to define "output_key" param of the LLM chain class for a csv agent while binding to SequentialChain class | https://api.github.com/repos/langchain-ai/langchain/issues/8406/comments | 2 | 2023-07-28T06:13:52Z | 2023-08-02T04:54:40Z | https://github.com/langchain-ai/langchain/issues/8406 | 1,825,778,483 | 8,406 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I'm interested in the code segmenters that were created. So far we have Python and JavaScript. I'm looking for a TypeScript one in particular, since the JS one marks my TypeScript as invalid. But in general, having full parsing support for every language in the text splitter's Language enum would be ideal.
### Motivation
This call is ok to start:
`RecursiveCharacterTextSplitter.get_separators_for_language(Language.JS)`
but when we're splitting by the word `function` we're cutting off `export async` or `default`, and especially JSDoc, which is probably the best thing to start a chunk of a function with.
Splitting by keywords like this is clunky as a long-term solution because of the above cases and others; something that splits files up more carefully would be better.
### Your contribution
Maybe instead of a parser like esprima, there's just a generic one that's powered by an LLM. | Add more CodeSegmenters | https://api.github.com/repos/langchain-ai/langchain/issues/8405/comments | 3 | 2023-07-28T06:06:12Z | 2024-02-13T16:45:52Z | https://github.com/langchain-ai/langchain/issues/8405 | 1,825,765,474 | 8,405 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello, I'm currently using the load_summarize_chain function to summarize extensive documents, specifically those about 250 pages or more. In this process, I'm employing the map_reduce chain type along with a custom prompt.
The process works as follows:
In the initial step, the chain sends parallel requests to LLM to summarize chunks of data.
Upon receiving all responses, the summary chain collects them, combines them into a single prompt, and generates the final output.
I'm curious about two aspects of this process:
In the second step, the chain employs the StuffDocumentsChain to create the final summary, despite map_reduce being set as the chain type. Is it possible to use an alternative chain type for this step? If yes, could you recommend any?
Would it be feasible to utilize different Language Learning Models (LLMs) in the two steps? For instance, could I use GPT3.5 for the first step and GPT4 for generating the final summary?
I appreciate your help and look forward to your response.
Thanks in advance!
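On the second question: from my reading of `load_summarize_chain`'s map-reduce loader (worth double-checking against your LangChain version), it accepts a separate `reduce_llm` for the combine step, e.g.:
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains.summarize import load_summarize_chain

map_llm = ChatOpenAI(model_name="gpt-3.5-turbo")  # summarizes each chunk
reduce_llm = ChatOpenAI(model_name="gpt-4")       # writes the final summary

chain = load_summarize_chain(
    map_llm,
    chain_type="map_reduce",
    reduce_llm=reduce_llm,
)
```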
### Suggestion:
_No response_ | Questions Regarding load_summarize_chain Implementation and LLM Models Usage | https://api.github.com/repos/langchain-ai/langchain/issues/8399/comments | 2 | 2023-07-28T04:43:43Z | 2023-12-07T16:07:05Z | https://github.com/langchain-ai/langchain/issues/8399 | 1,825,692,044 | 8,399 |
[
"hwchase17",
"langchain"
]
| For example: we have two tools, one a search tool used to search transactions, the other a date tool.
For the search tool, we have three parameters. If the user provides too few, the search tool should ask the user for the missing information; the user then supplies it, and the agent needs to combine the dialogue history to route back to the search tool until the information is complete, at which point the search tool finally returns the query results. | can you provide agent examples for multiple rounds of dialogue? | https://api.github.com/repos/langchain-ai/langchain/issues/8396/comments | 1 | 2023-07-28T03:45:01Z | 2023-11-03T16:05:37Z | https://github.com/langchain-ai/langchain/issues/8396 | 1,825,651,360 | 8,396
[
"hwchase17",
"langchain"
]
| I just wrote a custom agent to classify intent and choose between different tools, but I don't know how to get the response from the agent. When I print the answer, it shows "Agent stopped due to iteration limit or time limit". The code is shown below, thanks.
```python
tools = [SearchTool(), DateTool()]
agent = IntentAgent(tools=tools, llm=llm)
agent_exec = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, max_iterations=1)
answer = agent_exec.run(prompt)
```
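For reference, the `max_iterations=1` in the snippet stops the agent after a single step, which is exactly what produces the "Agent stopped due to iteration limit or time limit" message. A hedged sketch of a likely fix:

```python
agent_exec = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=5,                  # give the agent room to finish
    early_stopping_method="generate",  # draft a final answer if it still runs out
)
answer = agent_exec.run(prompt)
print(answer)
```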
| how can i get response of agent | https://api.github.com/repos/langchain-ai/langchain/issues/8393/comments | 4 | 2023-07-28T02:39:03Z | 2023-11-03T16:06:06Z | https://github.com/langchain-ai/langchain/issues/8393 | 1,825,595,680 | 8,393 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I have tried to make a wrapper around my LLMs, but the class can't be instantiated.
I can't even get the example here to work:
https://python.langchain.com/docs/modules/model_io/models/llms/custom_llm
Can someone help out with this?
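For comparison, here is a minimal working version very close to the docs example (a sketch; the `n`-character echo is just a toy behavior):

```python
from typing import List, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM


class CustomLLM(LLM):
    n: int  # number of characters of the prompt to echo back

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return prompt[: self.n]


llm = CustomLLM(n=10)
print(llm("This is a foobar thing"))  # -> "This is a "
```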
### Idea or request for content:
_No response_ | DOC: Custom LLM Wrappers not functional | https://api.github.com/repos/langchain-ai/langchain/issues/8392/comments | 1 | 2023-07-28T02:06:13Z | 2023-08-06T18:32:16Z | https://github.com/langchain-ai/langchain/issues/8392 | 1,825,568,030 | 8,392 |
[
"hwchase17",
"langchain"
]
| ### System Info
(h2ogpt) jon@pseudotensor:~/h2ogpt$ pip freeze | grep langchain
langchain==0.0.235
langchainplus-sdk==0.0.20
Python 3.10
(h2ogpt) jon@pseudotensor:~/h2ogpt$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.6 LTS
Release: 20.04
Codename: focal
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [x] Async
### Reproduction
https://github.com/h2oai/h2ogpt/pull/551/commits/cc3331d897f4f7bab13c7f9644e7a7d7cd35031e
The above shows my introduction of async from before not having it.
The text generation inference server is configured with a large concurrency, but it shows requests coming in back-to-back (serially).
### Expected behavior
I expect the summarization part to be parallel, like stated here:
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/combine_documents/map_reduce.py#L210-L213
But perhaps I misunderstand something. Or perhaps it's not really parallel:
https://github.com/langchain-ai/langchain/issues/1145#issuecomment-1586234397
There's lots of discussion w.r.t. hitting rate limit with OpenAI:
https://github.com/langchain-ai/langchain/issues/2465
https://github.com/langchain-ai/langchain/issues/1643
So I presume this works, but I'm not seeing it. In the OpenAI case it seems to be done via batching, which is possible in the HF TGI server but not implemented there. But I would have thought that all of the map tasks could run in parallel with asyncio.
https://github.com/langchain-ai/langchain/pull/1463#issuecomment-1566391189 | chain.arun() for summarization no faster than chain() | https://api.github.com/repos/langchain-ai/langchain/issues/8391/comments | 8 | 2023-07-28T01:09:54Z | 2023-07-28T05:12:04Z | https://github.com/langchain-ai/langchain/issues/8391 | 1,825,511,510 | 8,391 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I would like to deploy my LangChain project on a shared host server, so I need to be able to expose APIs that receive requests from and return responses to end devices. I think this capability already exists; however, I do not know how to implement it in my project.
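A minimal sketch of one way to do this, assuming FastAPI (any web framework works; the endpoint path, prompt, and chain are illustrative assumptions):

```python
from fastapi import FastAPI
from pydantic import BaseModel

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

app = FastAPI()
chain = LLMChain(
    llm=OpenAI(),
    prompt=PromptTemplate.from_template("Answer briefly: {question}"),
)


class Query(BaseModel):
    question: str


@app.post("/ask")
def ask(query: Query) -> dict:
    # one request in, one model response out
    return {"answer": chain.run(question=query.question)}
```

This would then be served with something like `uvicorn main:app --host 0.0.0.0 --port 8000`.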
### Idea or request for content:
_No response_ | DOC: how to setup APIs in my project to receive requests and returns responses from and to end devices | https://api.github.com/repos/langchain-ai/langchain/issues/8390/comments | 4 | 2023-07-28T00:17:12Z | 2024-02-10T16:20:48Z | https://github.com/langchain-ai/langchain/issues/8390 | 1,825,424,441 | 8,390 |
[
"hwchase17",
"langchain"
]
| ### System Info
I wonder if the input_documents are combined with the relevant documents from the retriever? I have been comparing results, and although wiki_doc provides different context, it is not reflected in the response! I have tried both versions, and the information in the input documents is not reflected in result1. I looked through the examples and did not find one where both a retriever and "input_documents" are used together; it is either one or the other.
```python
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
qa_chain = RetrievalQA.from_chain_type(llm, retriever=ret_para, chain_type="stuff")
result1 = qa_chain({"input_documents": wiki_doc, "query": query})
result2 = qa_chain({"query": query})
```
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. Load any db
2. `ret_para = db.as_retriever()`
3. `llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)`
4. `qa_chain = RetrievalQA.from_chain_type(llm, retriever=ret_para, chain_type="stuff")`
5. Create an `additional_doc` list from some other source
6. `result1 = qa_chain({"input_documents": additional_doc, "query": query})`
7. `result2 = qa_chain({"query": query})`
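For reference, RetrievalQA builds its context solely from the retriever, so the extra input_documents are most likely ignored. A sketch of a workaround (names taken from the snippet above; `llm` assumed):

```python
from langchain.chains.question_answering import load_qa_chain

docs = ret_para.get_relevant_documents(query) + wiki_doc  # merge both sources
qa = load_qa_chain(llm, chain_type="stuff")
result = qa({"input_documents": docs, "question": query})
```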
### Expected behavior
It is expected that result1 and result2 have different outputs, since they have different contexts, if input_documents were properly combined with the relevant documents from the retriever. | qa_chain with retriever and input_documents. | https://api.github.com/repos/langchain-ai/langchain/issues/8386/comments | 1 | 2023-07-27T22:35:13Z | 2023-11-02T16:04:35Z | https://github.com/langchain-ai/langchain/issues/8386 | 1,823,313,643 | 8,386
[
"hwchase17",
"langchain"
]
| ### Feature request
An integration of exllama in LangChain, to be able to use 4-bit GPTQ weights; exllama is designed to be fast and memory-efficient on modern GPUs.
### Motivation
The benchmarks on the official repo speak for themselves:
https://github.com/turboderp/exllama#results-so-far
### Your contribution
There is a fork that uses exllama with langchain here:
https://github.com/CoffeeVampir3/exllama-langchain-example/tree/master | Exllama integration to run GPTQ models | https://api.github.com/repos/langchain-ai/langchain/issues/8385/comments | 12 | 2023-07-27T22:26:51Z | 2024-05-31T23:49:27Z | https://github.com/langchain-ai/langchain/issues/8385 | 1,825,307,485 | 8,385 |
[
"hwchase17",
"langchain"
]
| ### Feature request
This is related to [HyDE](https://python.langchain.com/docs/modules/chains/additional/hyde). But I feel my feature request puts the HyDE example in a more realistic context, and naturally extends the RetrievalQA chain.
My suggestion would be to allow the text passed to the retriever to differ from the query passed to the LLM. This would be useful as a fallback when the original query, as provided by the user, yields a response of "I don't know" from the LLM, or when it returns no documents or only poor-scoring ones during the retrieval step.
Step 1, Embed original query
Step 2, retrieve candidate documents based on embedded original query
Step 3, pass original query & retrieved documents to LLM for answer
If Step 2 yields 0/low score documents OR Step 3 yields no/low confidence answer...
Step 4, generate an alternate _hypothetical_ answer (i.e., as described in [HyDE](https://arxiv.org/abs/2212.10496))
Step 5, embed _hypothetical_ answer
Step 6, retrieve candidate documents based on embedded _hypothetical_ answer
Step 7, pass original query & _newly_ retrieved documents to LLM for answer
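A rough sketch of this flow (illustrative names; `vectorstore`, `llm`, and `qa_chain` are assumptions, and score semantics vary by store, with some returning distances where lower is better):

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

SCORE_THRESHOLD = 0.8  # assumed relevance cutoff

docs_and_scores = vectorstore.similarity_search_with_score(query, k=4)
if not docs_and_scores or docs_and_scores[0][1] < SCORE_THRESHOLD:
    # HyDE-style fallback: draft a hypothetical answer, retrieve with it instead
    hyde = LLMChain(
        llm=llm,
        prompt=PromptTemplate.from_template("Write a short passage answering: {q}"),
    )
    hypothetical_answer = hyde.run(q=query)
    docs_and_scores = vectorstore.similarity_search_with_score(hypothetical_answer, k=4)

docs = [doc for doc, _ in docs_and_scores]
answer = qa_chain({"input_documents": docs, "question": query})  # original query to the LLM
```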
### Motivation
Perhaps I have a query that does not yield good retrieval documents from my main knowledgebase corpus. It is poorly formed, it has spelling errors, it is vague, it is brief. Even though the corpus does contain the factual answer to the intent of the query. Instead, a different form of the original query would yield better retrieved documents, and therefore produce the correct & complete answer desired.
For example, very simple questions like "Who's Character X?" may yield a wide range of sub-optimal documents from a QA Retriever. As a result, the LLM's response may be muddled or impossible to generate.
### Your contribution
Happy to refine the suggestion and provide concrete examples. | RetrievalQA: Submit different queries to Retriever and LLM | https://api.github.com/repos/langchain-ai/langchain/issues/8380/comments | 3 | 2023-07-27T20:32:27Z | 2023-10-27T16:18:45Z | https://github.com/langchain-ai/langchain/issues/8380 | 1,825,171,219 | 8,380 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.244
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a new Matching Engine Index Endpoint that is public.
Follow the tutorial to make a similarity search:
```
vector_store = MatchingEngine.from_components(
project_id="",
region="us-central1",
gcs_bucket_name="",
index_id="",
endpoint_id="",
embedding=embeddings,
)
vector_store.similarity_search("what is a cat?", k=5)
```
Error:
```
File ~/code/gcp-langchain-retrieval-augmentation/embeddings/.venv/lib/python3.9/site-packages/grpc/_channel.py:1030, in _UnaryUnaryMultiCallable.__call__(self, request, timeout, metadata, credentials, wait_for_ready, compression)
1021 def __call__(self,
1022 request: Any,
1023 timeout: Optional[float] = None,
(...)
1026 wait_for_ready: Optional[bool] = None,
1027 compression: Optional[grpc.Compression] = None) -> Any:
1028 state, call, = self._blocking(request, timeout, metadata, credentials,
1029 wait_for_ready, compression)
-> 1030 return _end_unary_response_blocking(state, call, False, None)
File ~/code/gcp-langchain-retrieval-augmentation/embeddings/.venv/lib/python3.9/site-packages/grpc/_channel.py:910, in _end_unary_response_blocking(state, call, with_call, deadline)
908 return state.response
909 else:
--> 910 raise _InactiveRpcError(state)
_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "DNS resolution failed for :10000: unparseable host:port"
debug_error_string = "UNKNOWN:DNS resolution failed for :10000: unparseable host:port {created_time:"2023-07-27T20:12:23.727315699+00:00", grpc_status:14}"
>
```
### Expected behavior
It should be possible to do this. The VertexAI Python SDK supports it with the `endpoint.find_neighbors` function.
I think just changing [the wrapper](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/matching_engine.py#L178) from `.match` to `.find_neighbors` for when the endpoint is public should do it.
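A sketch of the suggested change inside the wrapper (hedged; argument names assumed from the aiplatform MatchingEngineIndexEndpoint API and the wrapper's existing `.match` call):

```python
# public endpoints expose find_neighbors instead of match
response = self.endpoint.find_neighbors(
    deployed_index_id=self._get_index_id(),
    queries=[embedding],
    num_neighbors=k,
)
```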
| GCP Matching Engine support for public index endpoints | https://api.github.com/repos/langchain-ai/langchain/issues/8378/comments | 7 | 2023-07-27T20:14:21Z | 2023-11-21T02:11:30Z | https://github.com/langchain-ai/langchain/issues/8378 | 1,825,144,169 | 8,378 |
[
"hwchase17",
"langchain"
]
| ### System Info
**RecursiveUrlLoader** is not working. Please refer to the code below; the "docs" size is always 0.
https://python.langchain.com/docs/integrations/document_loaders/recursive_url_loader
```python
from langchain.document_loaders.recursive_url_loader import RecursiveUrlLoader

url = "https://js.langchain.com/docs/modules/memory/examples/"
loader = RecursiveUrlLoader(url=url)
docs = loader.load()
print(len(docs))
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders.recursive_url_loader import RecursiveUrlLoader

url = "https://js.langchain.com/docs/modules/memory/examples/"
loader = RecursiveUrlLoader(url=url)
docs = loader.load()
print(len(docs))
```
Here the length is displayed as zero.
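One hedged thing to try (under the assumption that the installed version already has these parameters, which were added in newer releases): set an explicit depth and extractor so the crawl and extraction behavior is visible:

```python
from bs4 import BeautifulSoup
from langchain.document_loaders.recursive_url_loader import RecursiveUrlLoader

loader = RecursiveUrlLoader(
    url="https://js.langchain.com/docs/modules/memory/examples/",
    max_depth=2,  # assumed parameter
    extractor=lambda html: BeautifulSoup(html, "html.parser").text,  # assumed parameter
)
docs = loader.load()
print(len(docs))
```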
### Expected behavior
All the urls should be scraped | RecursiveUrlLoader is not working | https://api.github.com/repos/langchain-ai/langchain/issues/8367/comments | 2 | 2023-07-27T17:30:39Z | 2023-11-02T16:04:39Z | https://github.com/langchain-ai/langchain/issues/8367 | 1,824,862,056 | 8,367 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The `_id` field should be populated with these in order:
1. `ids: Optional[List[str]] = None,` -- already in place
2. Document metadata (filename + start_index) -- new feature
3. Leave `_id` empty (instead of uuid) -- new feature
https://github.com/langchain-ai/langchain/blob/cf608f876b0ada9ac965fe5b25b5ca6e5e47feeb/libs/langchain/langchain/vectorstores/opensearch_vector_search.py#L125
### Motivation
A deterministic `_id` is very important for keeping a robust data pipeline. A duplicate load then results in a version increment instead of creating a new document with the same content, as sketched below.
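A minimal sketch of option 2 (hedged; `docsearch` and the metadata field names are illustrative assumptions, and the `ids` kwarg is the one already mentioned in point 1):

```python
import hashlib

# derive stable ids from metadata (filename + start_index)
ids = [
    hashlib.sha1(
        f"{doc.metadata['source']}:{doc.metadata['start_index']}".encode()
    ).hexdigest()
    for doc in docs
]
docsearch.add_documents(docs, ids=ids)
```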
### Your contribution
I can do a PR if the idea is approved. | OpenSearch bulk ingest with deterministic _id if available | https://api.github.com/repos/langchain-ai/langchain/issues/8366/comments | 2 | 2023-07-27T17:23:26Z | 2023-10-26T16:40:38Z | https://github.com/langchain-ai/langchain/issues/8366 | 1,824,852,567 | 8,366 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.244
Numexpr version: 2.8.4
Python version: 3.10.11
### Who can help?
@hwchase17 @vowelparrot
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Numexpr's evaluate function that Langchain uses [here](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/llm_math/base.py#L80) in the LLMMathChain is susceptible to arbitrary code execution with eval in the latest released version. See this [issue](https://github.com/pydata/numexpr/issues/442) where PoC for numexpr's evaluate is also provided.
This vulnerability allows arbitrary code execution, that is, running code and commands on the target machine, via LLMMathChain's run method with the right prompt. I'd like to ask the LangChain maintainers to confirm whether they want a full PoC with LangChain posted here publicly.
### Expected behavior
Numerical expressions should be evaluated securely so as to not allow code execution. | Arbitrary code execution in LLMMathChain | https://api.github.com/repos/langchain-ai/langchain/issues/8363/comments | 33 | 2023-07-27T16:00:56Z | 2024-03-13T16:12:31Z | https://github.com/langchain-ai/langchain/issues/8363 | 1,824,692,692 | 8,363 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Following this: https://github.com/levalencia/langchainLuisValencia/blob/master/.github/CONTRIBUTING.md#-common-tasks
On Windows.
Forked the repo, cloned it locally, created a conda environment (Poetry was already installed), then installed all dependencies.
Error received:
```
• Installing xmltodict (0.13.0)
• Installing zstandard (0.21.0)
ChefBuildError
Backend subprocess exited when trying to invoke get_requires_for_build_wheel
Traceback (most recent call last):
File "C:\Users\xx\AppData\Roaming\pypoetry\venv\Lib\site-packages\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\xx\AppData\Roaming\pypoetry\venv\Lib\site-packages\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "C:\Users\xx\AppData\Roaming\pypoetry\venv\Lib\site-packages\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "C:\Users\LUISVA~1\AppData\Local\Temp\tmpn9w63h_u\.venv\lib\site-packages\setuptools\build_meta.py", line 341, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "C:\Users\LUISVA~1\AppData\Local\Temp\tmpn9w63h_u\.venv\lib\site-packages\setuptools\build_meta.py", line 323, in _get_build_requires
self.run_setup()
File "C:\Users\LUISVA~1\AppData\Local\Temp\tmpn9w63h_u\.venv\lib\site-packages\setuptools\build_meta.py", line 487, in run_setup
super(_BuildMetaLegacyBackend,
File "C:\Users\LUISVA~1\AppData\Local\Temp\tmpn9w63h_u\.venv\lib\site-packages\setuptools\build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 8, in <module>
RuntimeError: uvloop does not support Windows at the moment
at ~\AppData\Roaming\pypoetry\venv\Lib\site-packages\poetry\installation\chef.py:147 in _prepare
143│
144│ error = ChefBuildError("\n\n".join(message_parts))
145│
146│ if error is not None:
→ 147│ raise error from None
148│
149│ return path
150│
151│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:
Note: This error originates from the build backend, and is likely not a problem with poetry but with uvloop (0.17.0) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 "uvloop (==0.17.0)"'.
```
How can I fix this?
### Idea or request for content:
Poetry version 1.5.1.
What do I need to do to be able to test locally before I start to contribute? | DOC: Contribution guidelines not working | https://api.github.com/repos/langchain-ai/langchain/issues/8362/comments | 1 | 2023-07-27T14:46:12Z | 2023-10-26T19:39:35Z | https://github.com/langchain-ai/langchain/issues/8362 | 1,824,539,164 | 8,362
[
"hwchase17",
"langchain"
]
| ### System Info
Mac, command line and jupyter notebook inside VSCode
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a new conda environment
Then
`conda install -c conda-forge 'langchain'`
`pip install openai`
Run sample code (can be in jupyter or terminal via a python file):
```
import os
openai_api_key = os.environ.get("OPENAI_API_KEY")
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
llm = OpenAI()
chat_model = ChatOpenAI()
llm.predict("hi!")
```
Get the following error:
`TypeError: Argument 'bases' has incorrect type (expected list, got tuple)`
Full error output:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[2], line 5
1 import os
3 openai_api_key = os.environ.get("OPENAI_API_KEY")
----> 5 from langchain.llms import OpenAI
6 from langchain.chat_models import ChatOpenAI
8 llm = OpenAI()
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/__init__.py:6
3 from importlib import metadata
4 from typing import Optional
----> 6 from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
7 from langchain.cache import BaseCache
8 from langchain.chains import (
9 ConversationChain,
10 LLMBashChain,
(...)
18 VectorDBQAWithSourcesChain,
19 )
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/agents/__init__.py:2
1 """Interface for agents."""
----> 2 from langchain.agents.agent import (
3 Agent,
4 AgentExecutor,
5 AgentOutputParser,
6 BaseMultiActionAgent,
7 BaseSingleActionAgent,
8 LLMSingleActionAgent,
9 )
10 from langchain.agents.agent_toolkits import (
11 create_csv_agent,
12 create_json_agent,
(...)
22 create_xorbits_agent,
23 )
24 from langchain.agents.agent_types import AgentType
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/agents/agent.py:16
13 from pydantic import BaseModel, root_validator
15 from langchain.agents.agent_types import AgentType
---> 16 from langchain.agents.tools import InvalidTool
17 from langchain.callbacks.base import BaseCallbackManager
18 from langchain.callbacks.manager import (
19 AsyncCallbackManagerForChainRun,
20 AsyncCallbackManagerForToolRun,
(...)
23 Callbacks,
24 )
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/agents/tools.py:4
1 """Interface for tools."""
2 from typing import Optional
----> 4 from langchain.callbacks.manager import (
5 AsyncCallbackManagerForToolRun,
6 CallbackManagerForToolRun,
7 )
8 from langchain.tools.base import BaseTool, Tool, tool
11 class InvalidTool(BaseTool):
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/callbacks/__init__.py:3
1 """Callback handlers that allow listening to events in LangChain."""
----> 3 from langchain.callbacks.aim_callback import AimCallbackHandler
4 from langchain.callbacks.argilla_callback import ArgillaCallbackHandler
5 from langchain.callbacks.arize_callback import ArizeCallbackHandler
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/callbacks/aim_callback.py:4
1 from copy import deepcopy
2 from typing import Any, Dict, List, Optional, Union
----> 4 from langchain.callbacks.base import BaseCallbackHandler
5 from langchain.schema import AgentAction, AgentFinish, LLMResult
8 def import_aim() -> Any:
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/callbacks/base.py:7
4 from typing import Any, Dict, List, Optional, Sequence, Union
5 from uuid import UUID
----> 7 from langchain.schema.agent import AgentAction, AgentFinish
8 from langchain.schema.document import Document
9 from langchain.schema.messages import BaseMessage
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/schema/__init__.py:2
1 from langchain.schema.agent import AgentAction, AgentFinish
----> 2 from langchain.schema.document import BaseDocumentTransformer, Document
3 from langchain.schema.language_model import BaseLanguageModel
4 from langchain.schema.memory import BaseChatMessageHistory, BaseMemory
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/schema/document.py:11
6 from pydantic import Field
8 from langchain.load.serializable import Serializable
---> 11 class Document(Serializable):
12 """Class for storing a piece of text and associated metadata."""
14 page_content: str
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/pydantic/main.py:186, in pydantic.main.ModelMetaclass.__new__()
TypeError: Argument 'bases' has incorrect type (expected list, got tuple)
```
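A hedged note: this `pydantic.main.ModelMetaclass.__new__` failure is commonly reported when a conda-installed package set pulls in an incompatible compiled pydantic build; reinstalling pydantic v1 from pip often clears it:

```
pip install --force-reinstall "pydantic<2"
```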
### Expected behavior
Expected sample code to run without errors. | Argument 'bases' has incorrect type | https://api.github.com/repos/langchain-ai/langchain/issues/8361/comments | 2 | 2023-07-27T14:44:05Z | 2023-11-03T16:07:03Z | https://github.com/langchain-ai/langchain/issues/8361 | 1,824,534,625 | 8,361 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version v0.0.5, Python 3.9.13
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This issue was reported before, see [issue 5428](https://github.com/langchain-ai/langchain/issues/5428).
It is still not solved.
The problem is the regular expression used in libs/langchain/langchain/output_parsers/json.py
```python
match = re.search(r"```(json)?(.*?)```", json_string, re.DOTALL)
```
`.*?` does non-greedy matching, so the parsing does not stop at the final triple backticks, but
at the first occurrence of triple backticks, which can be the start of an embedded code section (e.g. a fenced `python` block).
This leads to errors like:
`Error while running agent: Could not parse LLM output: json { "action": "Final Answer", "action_input": "Here is a simple Python script that includes a main function and prints 'Hello':\n\npython\ndef say_hello():\n print('Hello')\n\ndef main():\n say_hello()\n\nif name == 'main':\n main()\n```\nThis script defines a function say_hello that prints the string 'Hello'. The main function calls say_hello. The final lines check if this script is being run directly (as opposed to being imported as a module), and if so, calls the main function." }`
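A sketch of a minimal fix (hedged): greedy matching makes the regex engine backtrack to the *last* triple backtick, so fenced code inside the JSON payload survives:

```python
import re

# greedy `.*` instead of lazy `.*?` -- stops at the last triple backtick
match = re.search(r"```(json)?(.*)```", json_string, re.DOTALL)
```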
### Expected behavior
Correct LLM output parsing for answers including code sections.
See [issue 5428](https://github.com/langchain-ai/langchain/issues/5428) | LLM output parsing error for answers including code sections. | https://api.github.com/repos/langchain-ai/langchain/issues/8357/comments | 4 | 2023-07-27T12:40:32Z | 2023-08-03T09:05:28Z | https://github.com/langchain-ai/langchain/issues/8357 | 1,824,290,695 | 8,357 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
If we are passing chat history as an input to an agent:
```python
template = """You are a chatbot having a conversation with a human.
{chat_history}
{human_input}
Chatbot:"""

prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""
suffix = """Begin!"

{chat_history}
Question: {input}
{agent_scratchpad}
"""
```
So, will this exceed the agent's max token limit?
If yes, what alternatives can be used for this?
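Yes, long histories can exceed the context window, because the whole {chat_history} string is inlined into the prompt. One mitigation (a sketch; the token budget is an assumption) is a summarizing memory that keeps recent turns verbatim and summarizes older ones:

```python
from langchain.memory import ConversationSummaryBufferMemory

memory = ConversationSummaryBufferMemory(
    llm=llm,
    memory_key="chat_history",
    max_token_limit=1000,  # assumed budget
)
```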
### Suggestion:
_No response_ | Chat History Issue | https://api.github.com/repos/langchain-ai/langchain/issues/8355/comments | 3 | 2023-07-27T11:56:41Z | 2024-02-11T16:17:46Z | https://github.com/langchain-ai/langchain/issues/8355 | 1,824,215,752 | 8,355 |
[
"hwchase17",
"langchain"
]
| ### System Info
I'm trying to achieve the fewest possible calls to OpenAI's API, so I've tried to save my vector storage to disk and then reload it (as in the "if" branch below). But if I use OpenAIEmbeddings() as the embedding_function, it still calls OpenAI's embeddings API. (Maybe this should be a feature request.) Is this a bug? Am I doing something wrong?
```python
import os

from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.indexes import VectorstoreIndexCreator
from langchain.indexes.vectorstore import VectorStoreIndexWrapper
from langchain.vectorstores import Chroma

loaders = [TextLoader("text1.txt"), TextLoader("text2.txt")]
indexes = []
for loader in loaders:
    path = os.path.splitext(loader.file_path)[0]
    if os.path.exists(path):
        indexes.append(VectorStoreIndexWrapper(
            vectorstore=Chroma(
                persist_directory=f"./{path}", embedding_function=OpenAIEmbeddings())))
    else:
        indexes.append(VectorstoreIndexCreator(
            vectorstore_kwargs={"persist_directory": f"./{path}"}).from_loaders([loader]))
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The code I provided above creates folders when text1.txt and text2.txt are in the same folder (these files should contain something, like a poem or an article).
### Expected behavior
After the script runs, the folders ("text1" and "text2") should be created, each containing a {guid} folder and a chroma.sqlite3 file.
I checked my request usage on https://platform.openai.com/account/usage, and there were 2 requests to text-embedding-ada-002-v2 since there are 2 text files | Loading vectorstore from disk still calls to OpenAI's embeddings API | https://api.github.com/repos/langchain-ai/langchain/issues/8352/comments | 1 | 2023-07-27T11:41:07Z | 2023-11-02T16:17:17Z | https://github.com/langchain-ai/langchain/issues/8352 | 1,824,192,942 | 8,352 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain 0.0.235
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.schema import FunctionMessage
from langchain.memory import PostgresChatMessageHistory
history = PostgresChatMessageHistory(
connection_string="postgresql://postgres:mypassword@localhost/chat_history", # needs configuring
session_id="test_session_id",
)
function_message = FunctionMessage(
content='A Message for passing the result of executing a function back to a model',
name='name of function'
)
history.add_message(function_message)
history.messages
```
### Expected behavior
Output:
FunctionMessage(content='A Message for passing the result of executing a function back to a model', additional_kwargs={}, name='name of function') | No function message compatability in PostgresChatMessageHistory | https://api.github.com/repos/langchain-ai/langchain/issues/8349/comments | 4 | 2023-07-27T10:53:51Z | 2023-07-28T08:24:55Z | https://github.com/langchain-ai/langchain/issues/8349 | 1,824,121,928 | 8,349 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Some way to make a custom tool with `return_direct` not return directly when it hits an error such as a ToolException.
### Motivation
Currently, when a tool with return_direct faces a handled error, it returns the error directly. I would want a tool to return an output directly but only if there are no errors, since the errors aren't useful to be directly returned.
### Your contribution
I am not sure how this could be implemented, but would gladly help working on any ideas. Found this issue which would also enable this https://github.com/langchain-ai/langchain/issues/8306 | Custom tool return_direct except when a ToolException occurs. | https://api.github.com/repos/langchain-ai/langchain/issues/8348/comments | 2 | 2023-07-27T10:03:00Z | 2024-02-06T16:32:01Z | https://github.com/langchain-ai/langchain/issues/8348 | 1,824,041,459 | 8,348 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The Playwright toolkit is awesome, but it would be great to be able to see what the agent is doing. We could add a feature that takes screenshots of the current state of the web page.
### Motivation
It feels strange to use the toolkit while being blind: we cannot see what the agent is doing. The original idea is that by taking screenshots we could use agents to perform UI testing.
### Your contribution
I could help write a PR, but I'll need help from the Playwright community. I also want to know what the community thinks of this feature.
[
"hwchase17",
"langchain"
]
| ### Discussed in https://github.com/langchain-ai/langchain/discussions/5410
<div type='discussions-op-text'>
<sup>Originally posted by **luca-git** May 29, 2023</sup>
My goal is to extend the tools used by BabyAGI, more specifically to use at least the basic WriteFileTool() and ReadFileTool(). They take two inputs, though, so I cannot stick with the vanilla ZeroShotAgent. So I resorted to using AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION and replaced it in the original code in the link below. Agent initialization is the only modification I've made. Can anyone kindly help me understand what I'm missing in the code in order to be able to leverage multiple-input tools, or provide guidance/resources?
https://python.langchain.com/en/latest/use_cases/agents/baby_agi_with_agent.html?highlight=babyagi%20with%20tools
```python
from langchain.agents import AgentType
from langchain.agents import initialize_agent
@classmethod
def from_llm(
cls, llm: BaseLLM, vectorstore: VectorStore, verbose: bool = False, **kwargs
) -> "BabyAGI":
"""Initialize the BabyAGI Controller."""
task_creation_chain = TaskCreationChain.from_llm(llm, verbose=verbose)
task_prioritization_chain = TaskPrioritizationChain.from_llm(
llm, verbose=verbose
)
llm_chain = LLMChain(llm=llm, prompt=prompt)
tool_names = [tool.name for tool in tools]
agent = initialize_agent(agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,llm=llm_chain, tools=tool_names)
agent_executor = AgentExecutor.from_agent_and_tools(
agent=agent, tools=tools, verbose=True
)
return cls(
task_creation_chain=task_creation_chain,
task_prioritization_chain=task_prioritization_chain,
execution_chain=agent_executor,
vectorstore=vectorstore,
**kwargs,
)
```
I get the error
```python
File ~\anaconda3\envs\aagi\lib\site-packages\langchain\agents\structured_chat\base.py:83 in create_prompt args_schema = re.sub("}", "}}}}", re.sub("{", "{{{{", str(tool.args)))
AttributeError: 'str' object has no attribute 'args'
```
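The traceback points at the cause: `initialize_agent` iterates over tool objects (reading `tool.args`), but the snippet passes tool *names* (strings), and an `LLMChain` where a plain LLM is expected. A hedged sketch of the corrected call (`initialize_agent` already returns an AgentExecutor, so the separate `AgentExecutor.from_agent_and_tools` step can likely be dropped):

```python
agent_executor = initialize_agent(
    tools=tools,  # the BaseTool objects themselves, not their names
    llm=llm,      # the raw LLM, not an LLMChain
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
```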
</div> | BabyAGI implementation with tools with multiple inputs (requiring Structured Tool Chat Agent) | https://api.github.com/repos/langchain-ai/langchain/issues/8346/comments | 2 | 2023-07-27T08:14:36Z | 2023-11-02T16:16:00Z | https://github.com/langchain-ai/langchain/issues/8346 | 1,823,850,168 | 8,346 |
[
"hwchase17",
"langchain"
]
| ### System Info
gptcache==0.1.37
langchain==0.0.240
python 3.8
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from gptcache import Cache
from gptcache.adapter.api import init_similar_cache
from langchain.cache import GPTCache
# Every time llm updates the cache, it is creating a new cache object for the existing llm_string
def init_gptcache(cache_obj: Cache, llm_str: str):
init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm_str}")
langchain.llm_cache = GPTCache(init_gptcache)
llm = OpenAI(model_name="text-davinci-002", temperature=0.2)
llm.predict("tell me a joke")
```
### Expected behavior
It should not recreate new GPTCache object for the same llm_string. | GPTCache is getting created multiple times for the same llm_string | https://api.github.com/repos/langchain-ai/langchain/issues/8343/comments | 1 | 2023-07-27T06:04:49Z | 2023-07-27T14:47:28Z | https://github.com/langchain-ai/langchain/issues/8343 | 1,823,677,777 | 8,343 |
[
"hwchase17",
"langchain"
]
| ### System Info
While using either the Playwright URL loader or the PlayWright Browser Toolkit, I am getting this error.
The solutions I tried:
1. asyncio.create_task
2. nest_asyncio.apply
But the error is the same. Please help if someone is also getting this error and has found a solution.

### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction


### Expected behavior
This code needs to return the documents of the html content, but I am getting error. | Using URL-PlayWriter/PlayWright Browser Toolkit in Jupyter Notebook | https://api.github.com/repos/langchain-ai/langchain/issues/8342/comments | 5 | 2023-07-27T05:59:45Z | 2023-11-06T16:06:19Z | https://github.com/langchain-ai/langchain/issues/8342 | 1,823,673,129 | 8,342 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I encountered the issue while reproducing the section related to **[Structured tool chat](https://python.langchain.com/docs/modules/agents/agent_types/structured_chat)** in the documentation.
## This is the code (from docs)
```python
from langchain.tools.playwright.utils import create_async_playwright_browser
...
async_browser = create_async_playwright_browser() # raise error here
...
```
## And this is the source code
```python
# ...\site-packages\langchain\tools\playwright\utils.py
def create_async_playwright_browser(headless: bool = True) -> AsyncBrowser:
"""
Create an async playwright browser.
Args:
headless: Whether to run the browser in headless mode. Defaults to True.
Returns:
AsyncBrowser: The playwright browser.
"""
from playwright.async_api import async_playwright # raise error here
browser = run_async(async_playwright().start())
return run_async(browser.chromium.launch(headless=headless))
```
## The error info
```
Traceback (most recent call last):
File "...\08_structured_tool_chat.py", line 16, in <module>
async_browser = create_async_playwright_browser()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "...\site-packages\langchain\tools\playwright\utils.py", line 63, in create_async_playwright_browser
from playwright.async_api import async_playwright
ModuleNotFoundError: No module named 'playwright'
```
---
I'm confused.
I'm just a bit lost @_@
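A likely fix (hedged): `playwright` is an optional dependency that LangChain does not install by itself, so it has to be installed separately along with its browser binaries:

```
pip install playwright
playwright install
```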
### Suggestion:
_No response_ | Issue: ModuleNotFoundError: No module named 'playwright' | https://api.github.com/repos/langchain-ai/langchain/issues/8338/comments | 6 | 2023-07-27T03:55:02Z | 2024-07-21T10:37:18Z | https://github.com/langchain-ai/langchain/issues/8338 | 1,823,563,453 | 8,338 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
When I have a single PDF file it works very well; with several files, it does not find the answer to the same question that it answered when there was only one PDF file. I have the following code:
```python
store = PGVector(
    collection_name=COLLECTION_NAME,
    connection_string=CONNECTION_STRING,
    embedding_function=embeddings,
)

pdf_qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0.9, model_name="gpt-3.5-turbo"),
    store.as_retriever(search_kwargs={'k': 2}),
    return_source_documents=True,
    verbose=False,
)
```
When I increase the value of `k` in search_kwargs to 7 it works, I think because more chunks are retrieved and the answer is found.
Please help
### Suggestion:
_No response_ | Error finding in multiple pdfs | https://api.github.com/repos/langchain-ai/langchain/issues/8334/comments | 2 | 2023-07-27T02:05:37Z | 2023-11-02T16:05:14Z | https://github.com/langchain-ai/langchain/issues/8334 | 1,823,486,705 | 8,334 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have a CSV file I would like to interact with, but when I start using it, it says:
```
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised APIError: OpenAI API returned an empty embedding.
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised APIError: OpenAI API returned an empty embedding.
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised APIError: OpenAI API returned an empty embedding.
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 8.0 seconds as it raised APIError: OpenAI API returned an empty embedding.
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 10.0 seconds as it raised APIError: OpenAI API returned an empty embedding.
```
Do you know the max size for a CSV file, or for any file for that matter?
My file is 12 MB.
Thanks
### Suggestion:
_No response_ | Issue: Max CSV Size | https://api.github.com/repos/langchain-ai/langchain/issues/8332/comments | 2 | 2023-07-27T01:49:19Z | 2023-11-02T16:05:40Z | https://github.com/langchain-ai/langchain/issues/8332 | 1,823,470,170 | 8,332 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version=0.0.237
python version = 3.11.4
### Who can help?
@hwchase17
The RDFGraph class appears to throw an exception when an OWL ontology contains a Restriction.
For example, a Turtle ontology containing the following restriction causes the error "Unexpected IRI 'n16c4af590.........', contains neither '#' nor '/'":
```
:Topping rdf:type owl:Class ;
rdfs:subClassOf :Object ,
[ rdf:type owl:Restriction ;
owl:onProperty :hasTopping ;
owl:someValuesFrom :Topping
] .
```
This exception occurs with the code:
```
graph = RdfGraph(
source_file="example.ttl",
serialization="ttl",
standard="owl"
)
```
This appears to happen because the rdflib library inserts necessary blank nodes when it reads an ontology.
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The steps to reproduce are:
1. Create an OWL ontology called example.ttl with the following:
``` turtle
@prefix : <http://example.org/test-ontology#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xml: <http://www.w3.org/XML/1998/namespace> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
<http://example.org/test-ontology> rdf:type owl:Ontology .
:hasTopping rdf:type owl:ObjectProperty .
:Object rdf:type owl:Class .
:Pizza rdf:type owl:Class ;
rdfs:subClassOf :Object .
:Topping rdf:type owl:Class ;
rdfs:subClassOf :Object ,
[ rdf:type owl:Restriction ;
owl:onProperty :hasTopping ;
owl:someValuesFrom :Topping
] .
:pizza_001 rdf:type owl:NamedIndividual ,
:Pizza ;
:hasTopping :topping_001 ,
:topping_002 ,
:topping_003 ;
rdfs:label "margharita pizza" .
:pizza_002 rdf:type owl:NamedIndividual ,
:Pizza ;
:hasTopping :topping_001 ,
:topping_004 ;
rdfs:label "cheese and bacon pizza" .
:topping_001 rdf:type owl:NamedIndividual ,
:Topping ;
rdfs:label "cheese" .
:topping_002 rdf:type owl:NamedIndividual ,
:Topping ;
rdfs:label "tomato sauce" .
:topping_003 rdf:type owl:NamedIndividual ,
:Topping ;
rdfs:label "tomato" .
:topping_004 rdf:type owl:NamedIndividual ,
:Topping ;
:hasTopping :topping_002 ;
rdfs:label "bacon" .
```
2. Run the following code:
``` python
from langchain.chat_models import ChatOpenAI
from langchain.chains import GraphSparqlQAChain
from langchain.graphs import RdfGraph
graph = RdfGraph(
source_file="example.ttl",
serialization="ttl",
standard="owl"
)
```
### Expected behavior
The RDFGraph object is created successfully. | RDFGraph throws exception when ontology contains blank nodes | https://api.github.com/repos/langchain-ai/langchain/issues/8331/comments | 1 | 2023-07-27T01:25:11Z | 2023-08-09T07:44:41Z | https://github.com/langchain-ai/langchain/issues/8331 | 1,823,441,492 | 8,331 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I propose adding BaseMemoryAsync, BaseChatMessageHistoryAsync, SummarizerMixinAsync, and BaseChatMemoryAsync classes, and adding support for an async save_context to the main Chain class. In web development everything is asynchronous, and this is critical.
### Motivation
I've faced a problem when writing a chatbot using the asynchronous aiogram library: I can't use Memory asynchronously, so I want to add classes that support asynchronous memory operations. I am using asynchronous SQLAlchemy and cannot work with memory. Besides, I think this is a really critical feature.
### Your contribution
I tried to implement this, but it turned out to be too difficult, so I'm kindly asking you to give me and other people the opportunity to work with memory asynchronously. | Async Memory | https://api.github.com/repos/langchain-ai/langchain/issues/8329/comments | 1 | 2023-07-27T00:31:00Z | 2023-11-02T16:05:19Z | https://github.com/langchain-ai/langchain/issues/8329 | 1,823,401,369 | 8,329
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
This might be a general issue with LLM tools but I wanted to pose it here since this community is very active.
Suppose you feed in a huge CSV file with thousands of comments: you are trying to summarize them but also extract some quotes.
When feeding this file into LangChain, it will summarize just fine. However, the quotes get lost during the reduce step: they end up being summarized themselves and lose their integrity.
Is there a way around this issue? Can this be solved by tweaking the system prompt?
### Suggestion:
_No response_ | Issue: How to extract quotes from a large file | https://api.github.com/repos/langchain-ai/langchain/issues/8328/comments | 2 | 2023-07-26T23:19:52Z | 2023-11-01T16:04:50Z | https://github.com/langchain-ai/langchain/issues/8328 | 1,823,346,330 | 8,328 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add support for passing CustomAttributes to the `SagemakerEndpoint` LLM wrapper around `client.invoke_endpoint()`.
With the release of Llama-2 on SageMaker JumpStart, we are required to accept the end-user license agreement by passing `accept_eula=true` as a custom attribute.
End state:
```python
class SagemakerEndpoint(LLM):
    ...
    ...
    custom_attributes: str = "accept_eula=true"  # e.g. the Llama-2 EULA flag

        # send request
        try:
            response = self.client.invoke_endpoint(
                EndpointName=self.endpoint_name,
                Body=body,
                ContentType=content_type,
                Accept=accepts,
                CustomAttributes=self.custom_attributes,  # boto3 parameter name
                **_endpoint_kwargs,
            )
        except Exception as e:
            raise ValueError(f"Error raised by inference endpoint: {e}")
```
### Motivation
LangChain is incompatible with the Llama-2 model when using the SagemakerEndpoint LLM.
Without the fix, LangChain throws an error:
`Unknown parameter in input: "accept_eula", must be one of: EndpointName, Body, ContentType, Accept, CustomAttributes, TargetModel, TargetVariant, TargetContainerHostname, InferenceId, EnableExplanations`
### Your contribution
Happy to create a PR | SagemakerEndpoint LLM does not support InvokeEndpoint CustomAttributes needed for Llama-V2 | https://api.github.com/repos/langchain-ai/langchain/issues/8326/comments | 9 | 2023-07-26T22:59:57Z | 2024-05-22T16:06:57Z | https://github.com/langchain-ai/langchain/issues/8326 | 1,823,323,149 | 8,326 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I am converting from CONVERSATIONAL_REACT agent to Functions and having a tough time figuring out how to customize the prompt. Memory was not obvious, but worked as expected. However, adding additional inputs does not:
``` Python
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=False)
template = """
The Human is named {user_name}. Always answer questions by directing it at the Human by name.
For example, if the Human asks "What is the weather like?", you should answer "It is sunny, {user_name}." instead of just "It is sunny.".
History of the conversation:
{chat_history}
"""
extra_prompt = SystemMessagePromptTemplate.from_template(
template=template,
input_variables=["chat_history", "user_name"],
)
agent_args = {
"extra_prompt_messages": [extra_prompt],
}
mrkl_chat = initialize_agent(
tools,
llm,
agent=AgentType.OPENAI_MULTI_FUNCTIONS,
memory=memory,
agent_kwargs=agent_args,
verbose=True,
)
```
This works, but trying to run using
``` Python
mrkl_chat.run(input="What is fity six times twenty seven?", user_name="John")
```
executes the function call correctly, but then errors:
```
ValueError: One input key expected got ['input', 'user_name']
```
How do I pass in the extra inputs?
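A likely workaround (hedged, not verified here): the ValueError comes from the memory not knowing which input key to save when several are passed, so declare it explicitly and call the executor with a dict instead of `run`:

```python
memory = ConversationBufferMemory(
    memory_key="chat_history",
    input_key="input",  # so memory ignores the extra `user_name` key
    return_messages=False,
)
result = mrkl_chat({"input": "What is fifty six times twenty seven?", "user_name": "John"})
```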
### Idea or request for content:
Please extend the examples for the FUNCTIONS agent to include customizing the prompt. | DOC: How to use OPENAI_MULTI_FUNCTIONS with custom prompt? | https://api.github.com/repos/langchain-ai/langchain/issues/8325/comments | 2 | 2023-07-26T22:58:19Z | 2023-09-08T17:18:24Z | https://github.com/langchain-ai/langchain/issues/8325 | 1,823,322,018 | 8,325 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hi,
This repo on Hugging Face implements word-level timestamps for locally run Whisper models: https://huggingface.co/spaces/Matthijs/whisper_word_timestamps/tree/main
I have created a simple modification of this for generating simple CSVs with words and timestamps, a functionality I'd love to see in LangChain too: https://github.com/SimonB97/whisper-word-ts-csv (currently only running with CUDA support, but it should be modifiable to dynamically support both CPU and GPU).
### Motivation
It would be amazing to be able to generate word-level timestamps together with transcriptions, for example to be able to pinpoint when some piece of information in a YouTube video was given.
### Your contribution
I'm unsure how to implement this in langchain, would be willing to help though! | Whisper with word-level timestamps | https://api.github.com/repos/langchain-ai/langchain/issues/8319/comments | 1 | 2023-07-26T20:28:13Z | 2023-11-01T16:04:55Z | https://github.com/langchain-ai/langchain/issues/8319 | 1,823,113,323 | 8,319 |
[
"hwchase17",
"langchain"
]
| ### Feature request
According to the [Amazon Kendra documentation](https://docs.aws.amazon.com/kendra/latest/APIReference/API_Query.html), a query response contains three types of results:
- **Relevant suggested answers.** The answers can be either a text excerpt or table excerpt. The answer can be highlighted in the excerpt.
- **Matching FAQs** or question-answer pairs from your FAQ file.
- **Relevant documents.** This result type includes an excerpt of the document with the document title. The searched terms can be highlighted in the excerpt.
Therefore, the AmazonKendraRetriever should be flexible enough to allow a retrieval pattern that either combines the query API with the retrieve API or uses the retrieve API alone.
For instance, the application could:
- use the query API to check for relevant answers or matching FAQs before calling another retriever with the retrieve API;
- explore the [Merger Retriever](https://python.langchain.com/docs/integrations/retrievers/merger_retriever) to combine the results of the retrieve and query APIs.
### Motivation
Currently, the AmazonKendraRetriever uses the retrieve API by default and falls back to the query API when the retrieve API results are empty.
### Your contribution
I could submit a PR. | In the AmazonKendraRetriever, allow selection of which API to use: query or retrieve (default) | https://api.github.com/repos/langchain-ai/langchain/issues/8315/comments | 2 | 2023-07-26T19:22:38Z | 2023-11-01T16:05:00Z | https://github.com/langchain-ai/langchain/issues/8315 | 1,823,016,792 | 8,315 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.243
```
@property
def embeddings(self) -> Embeddings:
return self.embeddings
```
is referencing itself instead of `self.embedding`, resulting in a RecursionError:
```
return self.embeddings
^^^^^^^^^^^^^^^
[Previous line repeated 993 more times]
RecursionError: maximum recursion depth exceeded
```
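The one-line fix implied by the description above (return the stored `embedding` attribute rather than the property itself):

```python
@property
def embeddings(self) -> Embeddings:
    return self.embedding  # the stored attribute, not the property itself
```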
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
vectordb = ElasticVectorSearch(
    elasticsearch_url=elasticsearch_url,
    embedding=OpenAIEmbeddings(),
    index_name="some-index",
)
retriever = vectordb.as_retriever(search_kwargs={"k": 3})
```
### Expected behavior
Should not get RecursionError | ElasticVectorSearch().as_retriever() throwingRecursionError: maximum recursion depth exceeded | https://api.github.com/repos/langchain-ai/langchain/issues/8310/comments | 1 | 2023-07-26T18:35:13Z | 2023-07-26T19:15:41Z | https://github.com/langchain-ai/langchain/issues/8310 | 1,822,953,914 | 8,310 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi All,
I tried to run the Apify tutorial and ran into the issue ImportError: cannot import name 'ApifyWrapper' from 'langchain.utilities'. I checked the utilities library under utilities/__init__.py and couldn't find anything under the generic integrations with third-party systems and packages.
Any thoughts or support?
### Who can help?
@hwchase17, @agola
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import os

import openai
from langchain.docstore.document import Document
from langchain.indexes import VectorstoreIndexCreator
from langchain.utilities import ApifyWrapper  # this import raises the ImportError

openai.api_key = os.environ["OPENAI_API_KEY"]
os.environ["APIFY_API_TOKEN"] = "<your-apify-api-token>"

apify = ApifyWrapper()
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",
    run_input={"startUrls": [{"url": "https://python.langchain.com/en/latest/"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "", metadata={"source": item["url"]}
    ),
)
index = VectorstoreIndexCreator().from_loaders([loader])
query = "What is LangChain?"
result = index.query_with_sources(query)
print(result["answer"])
print(result["sources"])
```
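As a possible workaround, importing from the submodule directly may work, assuming your installed version ships the module at this path:
```
from langchain.utilities.apify import ApifyWrapper
```
Otherwise, upgrading to a release whose `langchain.utilities.__init__` re-exports `ApifyWrapper` should resolve the ImportError.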
### Expected behavior
LangChain is a standard interface through which you can interact with a variety of large language models (LLMs). It provides modules that can be used to build language model applications, and it also provides chains and agents with memory capabilities.
https://python.langchain.com/en/latest/modules/models/llms.html, https://python.langchain.com/en/latest/getting_started/getting_started.html | ImportError: cannot import name 'ApifyWrapper' from 'langchain.utilities' | https://api.github.com/repos/langchain-ai/langchain/issues/8307/comments | 11 | 2023-07-26T18:18:22Z | 2023-12-08T16:05:50Z | https://github.com/langchain-ai/langchain/issues/8307 | 1,822,932,353 | 8,307 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Possibility to make a direct return of a tool's output on certain cases.
### Motivation
Under specific circumstances, it would be extremely beneficial to be able to return a tool's output directly to the user. For instance, this feature would be helpful when using a search tool. If a search made no matches, a response can be directly sent to the user. Without being able to make a direct return, the tool's output is sent to the LLM, making the LLM prone to hallucinate the final answer.
### Your contribution
I am not sure how this could be implemented, but would gladly help working on any ideas. | Make a return_direct of a tool's output under certain conditions | https://api.github.com/repos/langchain-ai/langchain/issues/8306/comments | 8 | 2023-07-26T18:13:52Z | 2024-07-30T16:05:49Z | https://github.com/langchain-ai/langchain/issues/8306 | 1,822,926,870 | 8,306 |
[
"hwchase17",
"langchain"
]
| ### System Info
Pop!_OS 22.04 LTS
Python 3.11.4
Langchain v0.0.242
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`code-bison` works fine, but `text-bison` returns no output. I have the full `Vertex AI User` role, and every other model, including the chat models, works fine in my configuration.
```
>>> from langchain.llms.vertexai import VertexAI
>>> llm = VertexAI(model_name="code-bison")
>>> llm("say hello")
'Hello! How are you doing today?'
>>> llm = VertexAI(model_name="text-bison")
>>> llm("say hello")
''
>>>
```
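A hedged diagnostic sketch: empty completions from `text-bison` are sometimes caused by default generation settings or safety filtering, so being explicit about the generation parameters may help. The parameters below exist on LangChain's `VertexAI` wrapper, but that they fix this particular case is an assumption:
```
from langchain.llms.vertexai import VertexAI

# Assumption: explicit generation parameters avoid the empty output.
llm = VertexAI(
    model_name="text-bison",
    temperature=0.2,
    max_output_tokens=256,
    top_p=0.8,
    top_k=40,
)
print(llm("say hello"))
```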
### Expected behavior
`text-bison` should return content. | VertexAI using `text-bison` returns no output | https://api.github.com/repos/langchain-ai/langchain/issues/8304/comments | 3 | 2023-07-26T17:33:31Z | 2023-12-20T22:59:35Z | https://github.com/langchain-ai/langchain/issues/8304 | 1,822,865,052 | 8,304 |
[
"hwchase17",
"langchain"
]
| ### System Info
When I install for the first time using this command:
```
pip install 'langchain[all]'
```
It installs the older version `langchain 0.0.39`
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. pip install 'langchain[all]'
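A common cause is an outdated `pip` resolving against the last release whose metadata it can satisfy. As a hedged workaround, upgrading `pip` first and, if needed, pinning the release may help (the pin below is illustrative, not a recommendation for a specific version):
```
pip install --upgrade pip
pip install 'langchain[all]==0.0.242'  # illustrative pin; use the latest release
```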
### Expected behavior
It should install the latest version of langchain | installing pip install langchain[all] installs the old version of langchain 0.0.39 | https://api.github.com/repos/langchain-ai/langchain/issues/8298/comments | 3 | 2023-07-26T16:32:22Z | 2023-08-07T18:29:02Z | https://github.com/langchain-ai/langchain/issues/8298 | 1,822,776,944 | 8,298 |
[
"hwchase17",
"langchain"
]
| ### System Info
python = 3.9
es = 8.9
langchain = 0.0.237
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
url = 'https://elastic:xxx@xxxx:8447'
docsearch = ElasticVectorSearch.from_documents(
split_docs,
embeddings,
elasticsearch_url=url,
index_name="my_index",
ssl_verify={
"verify_certs": True,
"basic_auth": ("elastic", "xxx"),
"ca_certs": "./data/xxxx"
},
)
```
Then I get this error:
`__init__() got multiple values for argument 'elasticsearch_url'`

in: https://github.com/langchain-ai/langchain/blob/5c6dcb1960b717aaf70413ed0b467bffc4fc0be8/libs/langchain/langchain/vectorstores/elastic_vector_search.py#L298
Adding the following makes it work:
```
del kwargs['elasticsearch_url']
```
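A minimal sketch of how `from_documents`/`from_texts` could avoid passing the URL twice, assuming the duplicate arrives through `**kwargs` as in the traceback above. `pop()` is a slightly safer variant of the `del` workaround because it does not raise when the key is absent:
```
# Sketch of the fix inside ElasticVectorSearch.from_texts (assumption):
kwargs.pop("elasticsearch_url", None)  # avoid passing the URL twice to __init__
vectorsearch = cls(elasticsearch_url, index_name, embedding, **kwargs)
```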
### Expected behavior
No error. | ElasticVectorSearch.from_documents failed | https://api.github.com/repos/langchain-ai/langchain/issues/8293/comments | 1 | 2023-07-26T14:08:01Z | 2023-07-27T02:20:53Z | https://github.com/langchain-ai/langchain/issues/8293 | 1,822,500,823 | 8,293 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version - 0.0.208
openAI - 3.5-turbo
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have a chat built on top of OpenAI, with a conversation memory window of 1.
The problem is that sometimes, when the prompt is very long, I hit OpenAI's token limit.
I tried to clear the conversation memory, so that the next question could succeed, with:
`agent.memory.clear()`
but it doesn't seem to work, and all subsequent queries keep failing with openai.error.InvalidRequestError.
This is my agent creation code:
```
from langchain.agents import AgentExecutor, LLMSingleActionAgent
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferWindowMemory
from langchain.tools.python.tool import PythonAstREPLTool

# Inside a factory function that builds the executor:
chatgpt_chain = LLMChain(
    llm=ChatOpenAI(temperature=0, model_name=settings.model_name),  # type: ignore
    prompt=prompt,
    verbose=False,
    # memory=ConversationBufferWindowMemory(k=1),
)
tools = [PythonAstREPLTool()]
custom_agent = LLMSingleActionAgent(
    llm_chain=chatgpt_chain,
    output_parser=my_output_parser,
    stop=["\nObservation:"],
)
agent_memory = ConversationBufferWindowMemory(k=1)
return AgentExecutor.from_agent_and_tools(
    agent=custom_agent,
    tools=tools,
    verbose=True,
    memory=agent_memory,
)
```
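A hedged sketch of resetting the window memory between runs: `ConversationBufferWindowMemory.clear()` delegates to its `chat_memory`, so clearing either should empty the stored history. Note that this alone cannot help if the new prompt itself already exceeds the model's context window; that it does not is an assumption worth checking:
```
# After a token-limit failure, reset the history before the next run:
agent_memory.clear()               # clears the underlying chat history
# equivalently: agent_memory.chat_memory.clear()

# Verify the window is actually empty before retrying:
print(agent_memory.load_memory_variables({}))
```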
### Expected behavior
I expected the memory to be cleared and subsequent queries to work with the agent. | cannot clear memory when token limit exceeded | https://api.github.com/repos/langchain-ai/langchain/issues/8291/comments | 1 | 2023-07-26T12:59:35Z | 2023-07-27T11:05:00Z | https://github.com/langchain-ai/langchain/issues/8291 | 1,822,369,188 | 8,291 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have defined several chains (LLMChain, RetrievalQA, and ConversationalRetrievalChain) and used an agent to route between them after creating a respective Tool for each.
I realised that when each chain is used on its own, its performance is excellent. But once the chains are added as routes in the agent, their performance drops significantly.
The exact same questions that they answer correctly outside of the routing system (the agent) are answered incorrectly when used as routes in the agent.
Is anyone else facing this issue, or is it only me? Is it fixable?
### Suggestion:
_No response_ | Issue: Performance of Agents or Chains dropping when used as routers in an Agent. | https://api.github.com/repos/langchain-ai/langchain/issues/8287/comments | 2 | 2023-07-26T12:20:31Z | 2023-11-01T16:05:04Z | https://github.com/langchain-ai/langchain/issues/8287 | 1,822,298,506 | 8,287 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.144
python3 == 3.11.4
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am running the following code:
```
from langchain.document_loaders import PyMuPDFLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
# Load the text document
loader = PyMuPDFLoader("path/to/file.pdf")
documents = loader.load()
# Split the document into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
# Generate the embeddings
embeddings = OpenAIEmbeddings()
embedded_docs = embeddings.embed_documents(docs)
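# NOTE: this is the call that raises -- embed_documents() expects a list of
# strings, not Document objects (see the corrected sketch after the traceback).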
# Initialize FAISS
db = FAISS.from_documents(embedded_docs, embeddings)
# Perform a similarity search between the embedding of the query and the embeddings of the documents
query = "Your query here"
similar_docs = db.similarity_search(query)
# Print the content of the most similar document
print(similar_docs[0].page_content)
```
The error occurs as follows:
```
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /var/folders/q3/9988ccxx13q_x9yfdkv36x2r0000gn/T/ipykernel_22317/1552469458.py:1 in <module> │
│ │
│ [Errno 2] No such file or directory: │
│ '/var/folders/q3/9988ccxx13q_x9yfdkv36x2r0000gn/T/ipykernel_22317/1552469458.py' │
│ │
│ /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/embedd │
│ ings/openai.py:275 in embed_documents │
│ │
│ 272 │ │ """ │
│ 273 │ │ # handle batches of large input text │
│ 274 │ │ if self.embedding_ctx_length > 0: │
│ ❱ 275 │ │ │ return self._get_len_safe_embeddings(texts, engine=self.document_model_name) │
│ 276 │ │ else: │
│ 277 │ │ │ results = [] │
│ 278 │ │ │ _chunk_size = chunk_size or self.chunk_size │
│ │
│ /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/embedd │
│ ings/openai.py:209 in _get_len_safe_embeddings │
│ │
│ 206 │ │ │ encoding = tiktoken.model.encoding_for_model(self.document_model_name) │
│ 207 │ │ │ for i, text in enumerate(texts): │
│ 208 │ │ │ │ # replace newlines, which can negatively affect performance. │
│ ❱ 209 │ │ │ │ text = text.replace("\n", " ") │
│ 210 │ │ │ │ token = encoding.encode( │
│ 211 │ │ │ │ │ text, │
│ 212 │ │ │ │ │ allowed_special=self.allowed_special, │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'Document' object has no attribute 'replace'
```
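For reference, the traceback points at the calling code rather than a library bug: `embed_documents` expects a list of raw strings, while `split_documents` returns `Document` objects. A minimal sketch of the working flow; `FAISS.from_documents` computes the embeddings itself:
```
# Pass the Document objects straight to FAISS; it calls the embedding model internally.
db = FAISS.from_documents(docs, embeddings)

# Or, to embed manually, pass the raw text of each chunk:
vectors = embeddings.embed_documents([doc.page_content for doc in docs])
```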
### Expected behavior
The expected behaviour is that the variable `embedded_docs` contains a list of vector embeddings, one per document. | AttributeError: `Document` object has no attribute `replace` | https://api.github.com/repos/langchain-ai/langchain/issues/8286/comments | 5 | 2023-07-26T12:15:13Z | 2024-08-02T05:44:32Z | https://github.com/langchain-ai/langchain/issues/8286 | 1,822,290,214 | 8,286 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm using a SQL Agent connected to BigQuery to build a QA model, and I get an OutputParserException fairly often. The agent sometimes returns the correct answer, but I have never gotten an answer when the option view_support=True is set in SQLDatabase.from_uri().
For some questions, the agent generates and executes a SQL query and receives the query results as an Observation, but then runs into an OutputParserException. The last line of output before the exception is almost always 'Thought:'.
Here's the code that I'm running:
```
from google.cloud import bigquery
from sqlalchemy import *
from sqlalchemy.engine import create_engine
from sqlalchemy.schema import *
import os
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain import SQLDatabase
from langchain.agents import AgentExecutor
from langchain.agents.agent_types import AgentType
from langchain.llms import VertexAI
from vertexai.preview.language_models import TextGenerationModel
service_account_file = "masked.json"
project = "masked_project_name"
dataset = "SAP_REPORTING"
sqlalchemy_url = f'bigquery://{project}/{dataset}?credentials_path={service_account_file}'
llm = VertexAI(temperature=0)
db_2 = SQLDatabase.from_uri(sqlalchemy_url, view_support=True)
toolkit_2 = SQLDatabaseToolkit(db=db_2, llm=llm, use_query_checker=True)
PREFIX = '''You are a SQL expert. You have access to a BigQuery database.
Identify which tables can be used to answer the user's question and write and execute a SQL query accordingly.
'''
FORMAT_INSTRUCTIONS = """Please use the following format:
'''
Thought:
Action: the action to take should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
'''
Provide the output in the Final Answer."""
SUFFIX = '''
Begin!
Previous conversation history:
{chat_history}
Instructions: {input}
{agent_scratchpad}
'''
agent_executor_2 = create_sql_agent(llm=llm,
toolkit=toolkit_2,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
# top_k=1000,
handle_parsing_errors=True,
agent_kwargs={
'prefix': PREFIX,
'format_instructions': FORMAT_INSTRUCTIONS,
'suffix': SUFFIX
}
)
agent_executor_2.run("Which customers are located in Cleveland?")
---------------------------------------------------------------------------
> Entering new AgentExecutor chain...
Action: sql_db_list_tables
Action Input:
Observation: AccountingDocuments, AccountingDocumentsReceivable, AccountsPayable, AccountsPayableTurnover, AddressesMD, AugmentedDemandPlan, AugmentedWeeklySales, BatchesMD, BillOfMaterialsMD, Billing, BillingBlockingReasonsMD, BusinessPartnersMD, CashDiscountUtilization, CompaniesMD, CostCenterAmountsHierarchy_SAMPLE, CostCentersMD, CountriesMD, CurrenciesMD, CurrencyConvUtil, CurrencyConversion, CustomersMD, DaysPayableOutstanding, Deliveries, DeliveriesStatus_PerSalesOrg, DeliveryBlockingReasonsMD, DemandForecast, DemandPlan, DistributionChannelsMD, DivisionsMD, GLAccountsMD, HolidayCalendar, InventoryByPlant, InventoryKeyMetrics, InvoiceDocuments_Flow, Languages_T002, LedgersMD, MaterialGroupsMD, MaterialLedger, MaterialMovementTypesMD, MaterialPlantsMD, MaterialTypesMD, MaterialsBatchMD, MaterialsMD, MaterialsMovement, MaterialsValuation, OneTouchOrder, OrderToCash, POFulfillment, POOrderHistory, POSchedule, POScheduleLine, POVendorConfirmation, PlantsMD, PricingConditions, ProductHierarchiesMD, ProductHierarchy_Flatten, ProductionOrders, ProfitCenterAmountsHierarchy_SAMPLE, ProfitCentersMD, PromotionCalendar, PurchaseDocumentTypesMD, PurchaseDocuments, PurchaseDocumentsHistory, PurchaseDocuments_Flow, PurchasingGroupsMD, PurchasingOrganizationsMD, ReasonForMovementTypesMD, Regions_T005S, SDDocumentFlow, SDStatus_Items, SalesFulfillment, SalesFulfillment_perOrder, SalesOrderDetails_SAMPLE, SalesOrderHeaderStatus, SalesOrderPartnerFunction, SalesOrderPricing, SalesOrderScheduleLine, SalesOrderStatus, SalesOrders, SalesOrders_V2, SalesOrganizationsMD, SalesStatus_Items, SlowMovingThreshold, SpecialStocksMD, StockCharacteristicsConfig, StockInHand, StockMonthlySnapshots, StockWeeklySnapshots, Stock_NonValuated, Stock_PerPlant, Stock_Unrestricted_vs_Sales, StorageLocationsMD, TelephoneCodes_T005K, Trends, UoMConversionUtil, UoMMD, UoMUsage_SAMPLE, ValuationAreasMD, VendorConfig, VendorPerformance, VendorsMD, Weather, WeatherDaily
Thought:I should look at the schema of the CustomersMD table to see what columns I can query.
Action: sql_db_schema
Action Input: CustomersMD
Observation:
CREATE TABLE `CustomersMD` (
`Client_MANDT` STRING,
`CustomerNumber_KUNNR` STRING,
`CountryKey_LAND1` STRING,
`Name1_NAME1` STRING,
`Name2_NAME2` STRING,
`City_ORT01` STRING,
`PostalCode_PSTLZ` STRING,
`CustomerRegion_REGIO` STRING,
`SortField_SORTL` STRING,
`StreetAndNumber_STRAS` STRING,
`FirstTelephoneNumber_TELF1` STRING,
`FaxNumber_TELFX` STRING,
`OneTimeAccount_XCPDK` STRING,
`Address_ADRNR` STRING,
`MatchcodeSearch_MCOD1` STRING,
`MatchcodeSearch_MCOD2` STRING,
`MatchcodeSearch_MCOD3` STRING,
`Title_ANRED` STRING,
`CentralOrderBlockForCustomer_AUFSD` STRING,
`ExpressTrainStation_BAHNE` STRING,
`TrainStation_BAHNS` STRING,
`InternationalLocationNumber_BBBNR` STRING,
`InternationalLocationNumber_BBSNR` STRING,
`AuthorizationGroup_BEGRU` STRING,
`IndustryKey_BRSCH` STRING,
`CheckDigitForTheInternationalLocationNumber_BUBKZ` STRING,
`DataCommunicationLineNo_DATLT` STRING,
`DateCreated_ERDAT` DATE,
`CreatedBy_ERNAM` STRING,
`UnloadingPointsExist_EXABL` STRING,
`CentralBillingBlock_FAKSD` STRING,
`AccountNumberFiscalAddress_FISKN` STRING,
`WorkingTimeCalendar_KNAZK` STRING,
`AlternativePayer_KNRZA` STRING,
`GroupKey_KONZS` STRING,
`CustomerAccountGroup_KTOKD` STRING,
`CustomerClassification_KUKLA` STRING,
`AccountNumberOfVendorOrCreditor_LIFNR` STRING,
`CentralDeliveryBlockForTheCustomer_LIFSD` STRING,
`CityCoordinates_LOCCO` STRING,
`CentralDeletionFlagForMasterRecord_LOEVM` STRING,
`Name3_NAME3` STRING,
`Name4_NAME4` STRING,
`NielsenId_NIELS` STRING,
`District_ORT02` STRING,
`PoBox_PFACH` STRING,
`POBoxPostalCode_PSTL2` STRING,
`CountyCode_COUNC` STRING,
`CityCode_CITYC` STRING,
`RegionalMarket_RPMKR` STRING,
`CentralPostingBlock_SPERR` STRING,
`LanguageKey_SPRAS` STRING,
`TaxNumber1_STCD1` STRING,
`TaxNumber2_STCD2` STRING,
`SubjectToEqualizationTax_STKZA` STRING,
`LiableForVat_STKZU` STRING,
`TeleboxNumber_TELBX` STRING,
`SecondTelephoneNumber_TELF2` STRING,
`TeletexNumber_TELTX` STRING,
`TelexNumber_TELX1` STRING,
`TransportationZone_LZONE` STRING,
`AlternativePayerAllowed_XZEMP` STRING,
`CompanyIdOfTradingPartner_VBUND` STRING,
`VatRegistrationNumber_STCEG` STRING,
`Competitor_DEAR1` STRING,
`SalesPartner_DEAR2` STRING,
`SalesProspect_DEAR3` STRING,
`CustomerType4_DEAR4` STRING,
`IdForDefaultSoldToParty_DEAR5` STRING,
`LegalStatus_GFORM` STRING,
`IndustryCode1_BRAN1` STRING,
`IndustryCode2_BRAN2` STRING,
`IndustryCode3_BRAN3` STRING,
`IndustryCode4_BRAN4` STRING,
`IndustryCode5_BRAN5` STRING,
`InitialContact_EKONT` STRING,
`AnnualSales_UMSAT` NUMERIC,
`YearForWhichSalesAreGiven_UMJAH` STRING,
`CurrencyOfSalesFigure_UWAER` STRING,
`YearlyNumberOfEmployees_JMZAH` STRING,
`YearForWhichTheNumberOfEmployeesIsGiven_JMJAH` STRING,
`Attribute1_KATR1` STRING,
`Attribute2_KATR2` STRING,
`Attribute3_KATR3` STRING,
`Attribute4_KATR4` STRING,
`Attribute5_KATR5` STRING,
`Attribute6_KATR6` STRING,
`Attribute7_KATR7` STRING,
`Attribute8_KATR8` STRING,
`Attribute9_KATR9` STRING,
`Attribute10_KATR10` STRING,
`NaturalPerson_STKZN` STRING,
`AnnualSales_UMSA1` NUMERIC,
`TaxJurisdiction_TXJCD` STRING,
`FiscalYearVariant_PERIV` STRING,
`UsageIndicator_ABRVW` STRING,
`InspectionCarriedOutByCustomer_INSPBYDEBI` STRING,
`InspectionForADeliveryNote_INSPATDEBI` STRING,
`ReferenceAccountGroup_KTOCD` STRING,
`PoBoxCity_PFORT` STRING,
`Plant_WERKS` STRING,
`ReportKeyForDataMediumExchange_DTAMS` STRING,
`InstructionKeyForDataMediumExchange_DTAWS` STRING,
`StatusOfDataTransferIntoSubsequentRelease_DUEFL` STRING,
`AssignmentToHierarchy_HZUOR` STRING,
`PaymentBlock_SPERZ` STRING,
`RLabeling_CustomerplantGroup_ETIKG` STRING,
`IdNonMilitaryUse_CIVVE` STRING,
`IdForMilitaryUse_MILVE` STRING,
`ConditionGroup1_KDKG1` STRING,
`ConditionGroup2_KDKG2` STRING,
`ConditionGroup3_KDKG3` STRING,
`ConditionGroup4_KDKG4` STRING,
`ConditionGroup5_KDKG5` STRING,
`AlternativePayerUsingAccountNumber_XKNZA` STRING,
`TaxType_FITYP` STRING,
`TaxNumberType_STCDT` STRING,
`TaxNumber3_STCD3` STRING,
`TaxNumber4_STCD4` STRING,
`TaxNumber5_STCD5` STRING,
`CustomerIsIcmsExempt_XICMS` STRING,
`CustomerIsIpiExempt_XXIPI` STRING,
`CustomerGroupForSubstituicaoTributariaCalculation_XSUBT` STRING,
`CustomerCfopCategory_CFOPC` STRING,
`TaxLaw_Icms_TXLW1` STRING,
`TaxLaw_Ipi_TXLW2` STRING,
`IndicatorForBiochemicalWarfare_CCC01` STRING,
`IndicatorForNuclearNonproliferation_CCC02` STRING,
`IndicatorForNationalSecurity_CCC03` STRING,
`IndicatorForMissileTechnology_CCC04` STRING,
`CentralSalesBlock_CASSD` STRING,
`UniformResourceLocator_KNURL` STRING,
`NameOfRepresentative_J_1KFREPRE` STRING,
`TypeOfBusiness_J_1KFTBUS` STRING,
`TypeOfIndustry_J_1KFTIND` STRING,
`StatusOfChangeAuthorization_CONFS` STRING,
`DateOnWhichTheChangesWereConfirmed_UPDAT` DATE,
`TimeOfLastChangeConfirmation_UPTIM` TIME,
`CentralDeletionBlockForMasterRecord_NODEL` STRING,
`Indicator_Consumer_DEAR6` STRING,
`BusinessPurposeCompletedFlag_CVP_XBLCK` STRING,
`SuframaCode_SUFRAMA` STRING,
`RgNumber_RG` STRING,
`IssuedBy_EXP` STRING,
`State_UF` STRING,
`RgIssuingDate_RGDATE` DATE,
`RicNumber_RIC` STRING,
`ForeignNationalRegistration_RNE` STRING,
`RneIssuingDate_RNEDATE` DATE,
`Cnae_CNAE` STRING,
`LegalNature_LEGALNAT` STRING,
`CrtNumber_CRTN` STRING,
`IcmsTaxpayer_ICMSTAXPAY` STRING,
`IndustryMainType_INDTYP` STRING,
`TaxDeclarationType_TDT` STRING,
`CompanySize_COMSIZE` STRING,
`DeclarationRegimenForPiscofins_DECREGPC` STRING,
`AgencyLocationCode_ALC` STRING,
`PaymentOffice_PMT_OFFICE` STRING,
`FeeSchedule_FEE_SCHEDULE` STRING,
`DunsNumber_DUNS` STRING,
`Duns4_DUNS4` STRING,
`ProcessorGroup_PSOFG` STRING,
`SubledgerAcctPreprocessingProcedure_PSOIS` STRING,
`Name1_PSON1` STRING,
`Name2_PSON2` STRING,
`Name3_PSON3` STRING,
`FirstName_PSOVN` STRING,
`Title_PSOTL` STRING,
`Description_PSOO1` STRING,
`Description_PSOO2` STRING,
`Description_PSOO3` STRING,
`Description_PSOO4` STRING,
`Description_PSOO5` STRING,
`ValidFromDate_DATE_FROM` DATE,
`VersionIdForInternationalAddresses_NATION` STRING,
`ValidToDate_DATE_TO` DATE,
`FormOfAddressKey_TITLE` STRING,
`Addr_NAME1` STRING,
`Addr_NAME2` STRING,
`Addr_NAME3` STRING,
`Addr_NAME4` STRING,
`City_CITY1` STRING,
`District_CITY2` STRING,
`CityCodeForCitystreetFile_CITY_CODE` STRING,
`DistrictCodeForCityAndStreetFile_CITYP_CODE` STRING,
`City_HOME_CITY` STRING,
`DifferentCityForCitystreetFile_CITYH_CODE` STRING,
`RegionalStructureGrouping_REGIOGROUP` STRING,
`CityPostalCode_POST_CODE1` STRING,
`PoBoxPostalCode_POST_CODE2` STRING,
`CompanyPostalCode_POST_CODE3` STRING,
`PoBox_PO_BOX` STRING,
`PoBoxAddressUndeliverableFlag_DONT_USE_P` STRING,
`Flag_PoBoxWithoutNumber_PO_BOX_NUM` STRING,
`PoBoxCity_PO_BOX_LOC` STRING,
`CityPoBoxCode_CityFile_CITY_CODE2` STRING,
`RegionForPoBox_PO_BOX_REG` STRING,
`PoBoxCountry_PO_BOX_CTY` STRING,
`TransportationZoneToOrFromWhichTheGoodsAreDelivered_TRANSPZONE` STRING,
`Street_STREET` STRING,
`StreetAddressUndeliverableFlag_DONT_USE_S` STRING,
`StreetNumberForCitystreetFile_STREETCODE` STRING,
`HouseNumber_HOUSE_NUM1` STRING,
`HouseNumberSupplement_HOUSE_NUM2` STRING,
`Street2_STR_SUPPL1` STRING,
`Street3_STR_SUPPL2` STRING,
`Street4_STR_SUPPL3` STRING,
`Street5_LOCATION` STRING,
`Building_NumberOrCode_BUILDING` STRING,
`FloorInBuilding_FLOOR` STRING,
`RoomOrAppartmentNumber_ROOMNUMBER` STRING,
`CountryKey_COUNTRY` STRING,
`Language_LANGU` STRING,
`Region__REGION` STRING,
`AddressGroup_Key_BusinessAddressServices_ADDR_GROUP` STRING,
`Flag_ThereAreMoreAddressGroupAssignments_FLAGGROUPS` STRING,
`Flag_ThisIsAPersonalAddress_PERS_ADDR` STRING,
`SearchTerm1_SORT1` STRING,
`SearchTerm2_SORT2` STRING,
`FirstTelephoneNo_DiallingCodenumber_TEL_NUMBER` STRING,
`FirstTelephoneNo_Extension_TEL_EXTENS` STRING,
`CountyCodeForCounty_COUNTY_CODE` STRING,
`County_COUNTY` STRING,
`TownshipCodeForTownship_TOWNSHIP_CODE` STRING,
`Township_TOWNSHIP` STRING,
`CountyNameInUpperCaseForSearchHelp_MC_COUNTY` STRING,
`TownshipNameInUpperCaseForSearchHelp_MC_TOWNSHIP` STRING,
`BusinessPurposeCompletedFlag_XPCPT` STRING
)
OPTIONS(description='Customer Master Data')
/*
3 rows from CustomersMD table:
Client_MANDT CustomerNumber_KUNNR CountryKey_LAND1 Name1_NAME1 Name2_NAME2 City_ORT01 PostalCode_PSTLZ CustomerRegion_REGIO SortField_SORTL StreetAndNumber_STRAS FirstTelephoneNumber_TELF1 FaxNumber_TELFX OneTimeAccount_XCPDK Address_ADRNR MatchcodeSearch_MCOD1 MatchcodeSearch_MCOD2 MatchcodeSearch_MCOD3 Title_ANRED CentralOrderBlockForCustomer_AUFSD ExpressTrainStation_BAHNE TrainStation_BAHNS InternationalLocationNumber_BBBNR InternationalLocationNumber_BBSNR AuthorizationGroup_BEGRU IndustryKey_BRSCH CheckDigitForTheInternationalLocationNumber_BUBKZ DataCommunicationLineNo_DATLT DateCreated_ERDAT CreatedBy_ERNAM UnloadingPointsExist_EXABL CentralBillingBlock_FAKSD AccountNumberFiscalAddress_FISKN WorkingTimeCalendar_KNAZK AlternativePayer_KNRZA GroupKey_KONZS CustomerAccountGroup_KTOKD CustomerClassification_KUKLA AccountNumberOfVendorOrCreditor_LIFNR CentralDeliveryBlockForTheCustomer_LIFSD CityCoordinates_LOCCO CentralDeletionFlagForMasterRecord_LOEVM Name3_NAME3 Name4_NAME4 NielsenId_NIELS District_ORT02 PoBox_PFACH POBoxPostalCode_PSTL2 CountyCode_COUNC CityCode_CITYC RegionalMarket_RPMKR CentralPostingBlock_SPERR LanguageKey_SPRAS TaxNumber1_STCD1 TaxNumber2_STCD2 SubjectToEqualizationTax_STKZA LiableForVat_STKZU TeleboxNumber_TELBX SecondTelephoneNumber_TELF2 TeletexNumber_TELTX TelexNumber_TELX1 TransportationZone_LZONE AlternativePayerAllowed_XZEMP CompanyIdOfTradingPartner_VBUND VatRegistrationNumber_STCEG Competitor_DEAR1 SalesPartner_DEAR2 SalesProspect_DEAR3 CustomerType4_DEAR4 IdForDefaultSoldToParty_DEAR5 LegalStatus_GFORM IndustryCode1_BRAN1 IndustryCode2_BRAN2 IndustryCode3_BRAN3 IndustryCode4_BRAN4 IndustryCode5_BRAN5 InitialContact_EKONT AnnualSales_UMSAT YearForWhichSalesAreGiven_UMJAH CurrencyOfSalesFigure_UWAER YearlyNumberOfEmployees_JMZAH YearForWhichTheNumberOfEmployeesIsGiven_JMJAH Attribute1_KATR1 Attribute2_KATR2 Attribute3_KATR3 Attribute4_KATR4 Attribute5_KATR5 Attribute6_KATR6 Attribute7_KATR7 Attribute8_KATR8 Attribute9_KATR9 Attribute10_KATR10 NaturalPerson_STKZN AnnualSales_UMSA1 TaxJurisdiction_TXJCD FiscalYearVariant_PERIV UsageIndicator_ABRVW InspectionCarriedOutByCustomer_INSPBYDEBI InspectionForADeliveryNote_INSPATDEBI ReferenceAccountGroup_KTOCD PoBoxCity_PFORT Plant_WERKS ReportKeyForDataMediumExchange_DTAMS InstructionKeyForDataMediumExchange_DTAWS StatusOfDataTransferIntoSubsequentRelease_DUEFL AssignmentToHierarchy_HZUOR PaymentBlock_SPERZ RLabeling_CustomerplantGroup_ETIKG IdNonMilitaryUse_CIVVE IdForMilitaryUse_MILVE ConditionGroup1_KDKG1 ConditionGroup2_KDKG2 ConditionGroup3_KDKG3 ConditionGroup4_KDKG4 ConditionGroup5_KDKG5 AlternativePayerUsingAccountNumber_XKNZA TaxType_FITYP TaxNumberType_STCDT TaxNumber3_STCD3 TaxNumber4_STCD4 TaxNumber5_STCD5 CustomerIsIcmsExempt_XICMS CustomerIsIpiExempt_XXIPI CustomerGroupForSubstituicaoTributariaCalculation_XSUBT CustomerCfopCategory_CFOPC TaxLaw_Icms_TXLW1 TaxLaw_Ipi_TXLW2 IndicatorForBiochemicalWarfare_CCC01 IndicatorForNuclearNonproliferation_CCC02 IndicatorForNationalSecurity_CCC03 IndicatorForMissileTechnology_CCC04 CentralSalesBlock_CASSD UniformResourceLocator_KNURL NameOfRepresentative_J_1KFREPRE TypeOfBusiness_J_1KFTBUS TypeOfIndustry_J_1KFTIND StatusOfChangeAuthorization_CONFS DateOnWhichTheChangesWereConfirmed_UPDAT TimeOfLastChangeConfirmation_UPTIM CentralDeletionBlockForMasterRecord_NODEL Indicator_Consumer_DEAR6 BusinessPurposeCompletedFlag_CVP_XBLCK SuframaCode_SUFRAMA RgNumber_RG IssuedBy_EXP State_UF RgIssuingDate_RGDATE RicNumber_RIC ForeignNationalRegistration_RNE 
RneIssuingDate_RNEDATE Cnae_CNAE LegalNature_LEGALNAT CrtNumber_CRTN IcmsTaxpayer_ICMSTAXPAY IndustryMainType_INDTYP TaxDeclarationType_TDT CompanySize_COMSIZE DeclarationRegimenForPiscofins_DECREGPC AgencyLocationCode_ALC PaymentOffice_PMT_OFFICE FeeSchedule_FEE_SCHEDULE DunsNumber_DUNS Duns4_DUNS4 ProcessorGroup_PSOFG SubledgerAcctPreprocessingProcedure_PSOIS Name1_PSON1 Name2_PSON2 Name3_PSON3 FirstName_PSOVN Title_PSOTL Description_PSOO1 Description_PSOO2 Description_PSOO3 Description_PSOO4 Description_PSOO5 ValidFromDate_DATE_FROM VersionIdForInternationalAddresses_NATION ValidToDate_DATE_TO FormOfAddressKey_TITLE Addr_NAME1 Addr_NAME2 Addr_NAME3 Addr_NAME4 City_CITY1 District_CITY2 CityCodeForCitystreetFile_CITY_CODE DistrictCodeForCityAndStreetFile_CITYP_CODE City_HOME_CITY DifferentCityForCitystreetFile_CITYH_CODE RegionalStructureGrouping_REGIOGROUP CityPostalCode_POST_CODE1 PoBoxPostalCode_POST_CODE2 CompanyPostalCode_POST_CODE3 PoBox_PO_BOX PoBoxAddressUndeliverableFlag_DONT_USE_P Flag_PoBoxWithoutNumber_PO_BOX_NUM PoBoxCity_PO_BOX_LOC CityPoBoxCode_CityFile_CITY_CODE2 RegionForPoBox_PO_BOX_REG PoBoxCountry_PO_BOX_CTY TransportationZoneToOrFromWhichTheGoodsAreDelivered_TRANSPZONE Street_STREET StreetAddressUndeliverableFlag_DONT_USE_S StreetNumberForCitystreetFile_STREETCODE HouseNumber_HOUSE_NUM1 HouseNumberSupplement_HOUSE_NUM2 Street2_STR_SUPPL1 Street3_STR_SUPPL2 Street4_STR_SUPPL3 Street5_LOCATION Building_NumberOrCode_BUILDING FloorInBuilding_FLOOR RoomOrAppartmentNumber_ROOMNUMBER CountryKey_COUNTRY Language_LANGU Region__REGION AddressGroup_Key_BusinessAddressServices_ADDR_GROUP Flag_ThereAreMoreAddressGroupAssignments_FLAGGROUPS Flag_ThisIsAPersonalAddress_PERS_ADDR SearchTerm1_SORT1 SearchTerm2_SORT2 FirstTelephoneNo_DiallingCodenumber_TEL_NUMBER FirstTelephoneNo_Extension_TEL_EXTENS CountyCodeForCounty_COUNTY_CODE County_COUNTY TownshipCodeForTownship_TOWNSHIP_CODE Township_TOWNSHIP CountyNameInUpperCaseForSearchHelp_MC_COUNTY TownshipNameInUpperCaseForSearchHelp_MC_TOWNSHIP BusinessPurposeCompletedFlag_XPCPT
050 0001000053 US Customer-Oregon None Portland OR OREGON None None None None 0000023139 CUSTOMER-OREGON None PORTLAND Mr. None None None None None None None None None 2022-03-10 DEEPAKV None None None None None None 1000 None None None None None None None None None None None None None None None E None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None X None None None X None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None 0001-01-01 9999-12-31 0002 Customer-Oregon Portland US E OR BP OREGON
050 0001000060 US Customer- Seattle-Tacoma None Seattle-Tacoma WA SEATTLE-TA None None None None 0000023162 CUSTOMER- SEATTLE-TACOMA None SEATTLE-TACOMA Company None None None None None None None None None 2022-03-20 DEEPAKV None None None None None None 1000 None None None None None None None None None None None None None None None E None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None X None None None X None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None 0001-01-01 9999-12-31 0003 Customer- Seattle-Tacoma Seattle-Tacoma US E WA BP SEATTLE-TACOMA
050 0000000001 AU Test None Territory 2612 ACT TEST None None None None 0000023110 TEST None TERRITORY Mr. and Mrs. None None None None None None None None None 2022-02-28 MAHITHA None None None None None None 0004 None None None None None None None None None None None None None None None E None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None X None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None None 0001-01-01 9999-12-31 0004 Test Territory 2612 AU E ACT BP TEST ING
*/
Thought:
---------------------------------------------------------------------------
OutputParserException Traceback (most recent call last)
Cell In[20], line 1
----> 1 agent_executor_2.run("Which customers are located in Cleveland?")
File /opt/conda/envs/env_preview_lc/lib/python3.11/site-packages/langchain/chains/base.py:436, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
434 if len(args) != 1:
435 raise ValueError("`run` supports only one positional argument.")
--> 436 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
437 _output_key
438 ]
440 if kwargs and not args:
441 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
442 _output_key
443 ]
File /opt/conda/envs/env_preview_lc/lib/python3.11/site-packages/langchain/chains/base.py:243, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
241 except (KeyboardInterrupt, Exception) as e:
242 run_manager.on_chain_error(e)
--> 243 raise e
244 run_manager.on_chain_end(outputs)
245 final_outputs: Dict[str, Any] = self.prep_outputs(
246 inputs, outputs, return_only_outputs
247 )
File /opt/conda/envs/env_preview_lc/lib/python3.11/site-packages/langchain/chains/base.py:237, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
231 run_manager = callback_manager.on_chain_start(
232 dumpd(self),
233 inputs,
234 )
235 try:
236 outputs = (
--> 237 self._call(inputs, run_manager=run_manager)
238 if new_arg_supported
239 else self._call(inputs)
240 )
241 except (KeyboardInterrupt, Exception) as e:
242 run_manager.on_chain_error(e)
File /opt/conda/envs/env_preview_lc/lib/python3.11/site-packages/langchain/agents/agent.py:1029, in AgentExecutor._call(self, inputs, run_manager)
1027 # We now enter the agent loop (until it returns something).
1028 while self._should_continue(iterations, time_elapsed):
-> 1029 next_step_output = self._take_next_step(
1030 name_to_tool_map,
1031 color_mapping,
1032 inputs,
1033 intermediate_steps,
1034 run_manager=run_manager,
1035 )
1036 if isinstance(next_step_output, AgentFinish):
1037 return self._return(
1038 next_step_output, intermediate_steps, run_manager=run_manager
1039 )
File /opt/conda/envs/env_preview_lc/lib/python3.11/site-packages/langchain/agents/agent.py:843, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
841 raise_error = False
842 if raise_error:
--> 843 raise e
844 text = str(e)
845 if isinstance(self.handle_parsing_errors, bool):
File /opt/conda/envs/env_preview_lc/lib/python3.11/site-packages/langchain/agents/agent.py:832, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
829 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
831 # Call the LLM to see what to do.
--> 832 output = self.agent.plan(
833 intermediate_steps,
834 callbacks=run_manager.get_child() if run_manager else None,
835 **inputs,
836 )
837 except OutputParserException as e:
838 if isinstance(self.handle_parsing_errors, bool):
File /opt/conda/envs/env_preview_lc/lib/python3.11/site-packages/langchain/agents/agent.py:457, in Agent.plan(self, intermediate_steps, callbacks, **kwargs)
455 full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)
456 full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs)
--> 457 return self.output_parser.parse(full_output)
File /opt/conda/envs/env_preview_lc/lib/python3.11/site-packages/langchain/agents/mrkl/output_parser.py:44, in MRKLOutputParser.parse(self, text)
39 return AgentFinish(
40 {"output": text.split(FINAL_ANSWER_ACTION)[-1].strip()}, text
41 )
43 if not re.search(r"Action\s*\d*\s*:[\s]*(.*?)", text, re.DOTALL):
---> 44 raise OutputParserException(
45 f"Could not parse LLM output: `{text}`",
46 observation="Invalid Format: Missing 'Action:' after 'Thought:'",
47 llm_output=text,
48 send_to_llm=True,
49 )
50 elif not re.search(
51 r"[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)", text, re.DOTALL
52 ):
53 raise OutputParserException(
54 f"Could not parse LLM output: `{text}`",
55 observation="Invalid Format:"
(...)
58 send_to_llm=True,
59 )
OutputParserException: Could not parse LLM output: ````
CREATE TABLE CustomersMD (
Client_MANDT STRING,
CustomerNumber_KUNNR STRING,
CountryKey_LAND1 STRING,
Name1_NAME1 STRING,
Name2_NAME2 STRING,
City_ORT01 STRING,
PostalCode_PSTLZ STRING,
CustomerRegion_REGIO STRING,
SortField_SORTL STRING,
StreetAndNumber_STRAS STRING,
FirstTelephoneNumber_TELF1 STRING,
FaxNumber_TELFX STRING,
OneTimeAccount_XCPDK STRING`
```
Appreciate any help with this :)
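One hedged observation: the MRKL output parser raises exactly this exception when the model's text contains neither an `Action:` nor a `Final Answer:` line, and the custom `FORMAT_INSTRUCTIONS` above never shows the model a `Final Answer:` step. A sketch of format instructions that include it; whether this fully removes the failures is an assumption:
```
FORMAT_INSTRUCTIONS = """Please use the following format:

Thought: what to do next
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original question"""
```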
### Suggestion:
_No response_ | Issue: OutputParserException when using SQL Agent | https://api.github.com/repos/langchain-ai/langchain/issues/8282/comments | 1 | 2023-07-26T10:19:13Z | 2023-11-01T16:05:10Z | https://github.com/langchain-ai/langchain/issues/8282 | 1,822,077,707 | 8,282 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.242
python 3.11.4
### Who can help?
@hwchase17
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried using a local LLM from Hugging Face:

1. Using `from_model_id` works perfectly:

```
from langchain.llms import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="TheBloke/wizardLM-7B-HF",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 1200, "temperature": 0.3, "top_p": 0.95, "repetition_penalty": 1.15},
    device=0,
)
print(llm)
```

which prints the expected parameters:

```
Params: {'model_id': 'TheBloke/wizardLM-7B-HF', 'model_kwargs': {}, 'pipeline_kwargs': {'max_new_tokens': 1200, 'temperature': 0.3, 'top_p': 0.95, 'repetition_penalty': 1.15}}
```
2. The problem is when **passing the pipeline in directly**. I just followed the example given in the LangChain documentation:

```
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "TheBloke/wizardLM-7B-HF"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device_map="auto",
    max_new_tokens=1200,
    temperature=0.3,
    top_p=0.95,
    repetition_penalty=1.15,
)
hf = HuggingFacePipeline(pipeline=pipe)
print(hf)
```

Now I get this instead:

```
Params: {'model_id': 'gpt2', 'model_kwargs': None, 'pipeline_kwargs': None}
```
### Expected behavior
What is the matter? Why are the parameters that I set not being used, and why does it fall back to the defaults?
I have tried it over and over again, but I can't get it to work.
Does anybody have an idea why? I am a bit desperate, because I need to pass the pipeline in directly.
For reference, this is the official LangChain documentation's example of passing a pipeline in directly:

```
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
    "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
hf = HuggingFacePipeline(pipeline=pipe)
```
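A hedged explanation: when a pipeline is passed in directly, the wrapper's `model_id` pydantic field simply keeps its class default (`'gpt2'`), so the printed `Params` are misleading; the underlying pipeline still carries your model and generation kwargs. A small sketch to verify (`name_or_path` is a standard `transformers` model attribute; `_forward_params` is private and may change between versions):
```
print(hf.pipeline.model.name_or_path)  # -> TheBloke/wizardLM-7B-HF
print(hf.pipeline._forward_params)     # generation kwargs (private attribute)
print(hf("Say hello"))                 # generation runs through your pipeline
```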
| Problem using HuggingFacePipeline by passing pipeline in directly | https://api.github.com/repos/langchain-ai/langchain/issues/8280/comments | 7 | 2023-07-26T10:05:37Z | 2024-02-14T16:12:28Z | https://github.com/langchain-ai/langchain/issues/8280 | 1,822,048,802 | 8,280 |
[
"hwchase17",
"langchain"
]
| ### System Info
Macos 13.5, vscode 1.80.1
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
As mentioned in the readme if .devcontainer
1. Clone locally
2. In Vscode select Build and open in dev container
### Expected behavior
The Devcontainer is working well, Building and running

but the mounted folder is incorrect and empty, so the vscode file explorer is empty


Maybe due to the devcontainer.json file, referring to a FolderBaseName variable

| devcontainer in vscode is not working | https://api.github.com/repos/langchain-ai/langchain/issues/8277/comments | 3 | 2023-07-26T08:03:16Z | 2023-12-06T17:44:40Z | https://github.com/langchain-ai/langchain/issues/8277 | 1,821,812,670 | 8,277 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi, it's my first issue in github. And the first thing I want to say is that LangChain is awesome!
I'm not quite so sure is that a bug or just a little issue of config.So I list it as a other issue.
I built a agent and built some RetrievalQA chains as tools for agent.However,when I ask a question,agent answer my question in a sentence and make up some new QAs, which easily misdirected agent and made final answer quite irrelevant with my original question.
So is that my problem in config of agent or LLM(temperture/prompt), or do i need to adjust my agent?(But I have tried to add `early_stopping_method` and `max_execution_time` to limit the agent generate redundant content.And turns out it didn't work well)
### Suggestion:
Maybe add some solution in document? | Issue: LLM makes up new QAs after answering question in a QA chains | https://api.github.com/repos/langchain-ai/langchain/issues/8273/comments | 4 | 2023-07-26T07:34:35Z | 2023-11-02T16:16:04Z | https://github.com/langchain-ai/langchain/issues/8273 | 1,821,769,394 | 8,273 |
[
"hwchase17",
"langchain"
]
| ### System Info
platform = mac m2
python = 3.11
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.evaluation import LabeledPairwiseStringEvalChain
from langchain.prompts import PromptTemplate

prompt_template = PromptTemplate.from_template(
"""Given the input context and the reference, analyze and determine which prediction, A or B, aligns most closely with the reference label.
Consider the following factors while analyzing:
- Relevance to the input context
- Semantic similarity with the reference label
- Consistency with any specifics mentioned in the input
The DATA for this decision are as follows:
Input Context: {input}
Reference Label: {reference}
Option A: {prediction}
Option B: {prediction_b}
After analyzing, provide the reasoning for your selection, and finally, respond with either [[A]] or [[B]] on its own line. In the case that both options are equally similar, default to option [[A]].
---
Reasoning:
"""
)
evaluationChain = LabeledPairwiseStringEvalChain.from_llm(
    llm=llm, prompt=prompt_template
)
result = evaluationChain.evaluate_string_pairs(
input=self.currentQuery,
prediction=response1,
prediction_b=response2,
reference=self.formatSourcesStructure(sourcedocs),
)
```
Sometimes it gives an error like:
```
not enough values to unpack (expected 2, got 1)
```
Roughly one in every 3-4 requests fails with this error,
and when a request fails, the next request returns the response.
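The unpack error typically means the evaluator could not split a verdict out of the LLM's text. A hedged mitigation while the root cause is investigated: wrap the call and retry on parse failure (a generic sketch, not a LangChain API):
```
def evaluate_with_retry(chain, max_retries=3, **kwargs):
    """Retry the pairwise evaluation when the verdict fails to parse."""
    for attempt in range(max_retries):
        try:
            return chain.evaluate_string_pairs(**kwargs)
        except ValueError:  # "not enough values to unpack (expected 2, got 1)"
            if attempt == max_retries - 1:
                raise
```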
### Expected behavior
There should be no error, and a valid response should be returned. | not enough values to unpack (expected 2, got 1) in LabeledPairwiseStringEvalChain.evaluate_string_pairs | https://api.github.com/repos/langchain-ai/langchain/issues/8272/comments | 1 | 2023-07-26T07:20:57Z | 2023-07-26T13:15:05Z | https://github.com/langchain-ai/langchain/issues/8272 | 1,821,748,208 | 8,272 |
[
"hwchase17",
"langchain"
]
Hi, I am using an existing OpenSearch vector store and trying to perform the queries below using LangChain and Python, but I am unable to see any documents returned.
Queries:
```
docs = result.similarity_search(
    "show me results for rest API throwing 500 exceptions",
    search_type="painless_scripting",
    space_type="cosinesimil",
    vector_field="listOfMessages.vectorEmbeddings",
    text_field="listOfMessages.messageContent",
)
```
Notes:
1. Even if I use another search_type, or do not pass search_type at all, I still get no results.
2. My fields here are nested fields.
Could someone please help me resolve this issue?
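One hedged hypothesis: script-score and k-NN queries generally cannot address a vector stored inside a `nested` object via a dotted path, and LangChain's OpenSearch helpers assume a top-level `vector_field`. A sketch of the workaround under that assumption: re-index so the embedding and text live at the top level of each document:
```
# Assumption: the index maps "vectorEmbeddings" as a top-level knn_vector field
# and "messageContent" as a top-level text field.
docs = result.similarity_search(
    "show me results for rest API throwing 500 exceptions",
    vector_field="vectorEmbeddings",
    text_field="messageContent",
)
```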
| similarity_search on nested fields not returning results | https://api.github.com/repos/langchain-ai/langchain/issues/8269/comments | 2 | 2023-07-26T05:38:54Z | 2023-11-08T16:07:34Z | https://github.com/langchain-ai/langchain/issues/8269 | 1,821,620,134 | 8,269 |