issue_owner_repo (list) | issue_body (string) | issue_title (string) | issue_comments_url (string) | issue_comments_count (int64) | issue_created_at (string) | issue_updated_at (string) | issue_html_url (string) | issue_github_id (int64) | issue_number (int64) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### Feature request
GitLab is currently trying to adopt LangChain in https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/merge_requests/475 for Anthropic usage. It simply passes a raw prompt to the Anthropic client; however, LangChain does not seem to support this because:
- `langchain.chat_models.ChatAnthropic` class can't take a raw prompt. It expects [`List[BaseMessage]`](https://github.com/dosuken123/langchain/blob/master/libs/langchain/langchain/chat_models/anthropic.py#L45C5-L45C41) and constructs the new messages.
- `langchain.llms.Anthropic` class was deprecated by @hwchase17 in https://github.com/langchain-ai/langchain/commit/52d95ec47dbb06a1bcc3f0ff30cadc50135351db. We don't want to use a deprecated class.
It sounds like we should either add an option to `ChatAnthropic` that allows a raw prompt, or bring `langchain.llms.Anthropic` back from its deprecated state.
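As a purely illustrative sketch of the first option (hypothetical — no such capability exists today):
```
# Hypothetical sketch of the requested capability, not current behavior.
from langchain.chat_models import ChatAnthropic

llm = ChatAnthropic(model="claude-2")
# Today ChatAnthropic only accepts List[BaseMessage] and rebuilds the Anthropic prompt itself;
# the request is for a way to pass a pre-formatted prompt string through unchanged, e.g.:
# llm.invoke("\n\nHuman: Why is the sky blue?\n\nAssistant:")  # without re-wrapping into messages
```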
### Motivation
Increasing the adoption of LangChain in GitLab
### Your contribution
I can contribute to this issue as LangChain contributor. | Allow ChatAnthropic to receive Raw prompt or don't deprecate llms.Anthropic | https://api.github.com/repos/langchain-ai/langchain/issues/14382/comments | 2 | 2023-12-07T06:57:30Z | 2024-03-17T16:09:26Z | https://github.com/langchain-ai/langchain/issues/14382 | 2,030,036,527 | 14,382 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am using ConversationChain with a custom prompt, and now I am looking to integrate tools into it. How can we do that?
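For reference, the direction I am considering is a conversational agent with memory instead of ConversationChain, since chains do not take tools directly — a minimal sketch, assuming `llm` and `tools` are already defined:
```
from langchain.agents import AgentType, initialize_agent
from langchain.memory import ConversationBufferMemory

# Conversation memory plays the role that ConversationChain's memory did.
memory = ConversationBufferMemory(memory_key="chat_history")

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=True,
    # agent_kwargs={"prefix": MY_CUSTOM_PREFIX},  # where custom prompt text could go (assumption)
)
agent.run("Hi, can you look that up for me?")
```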
### Suggestion:
_No response_ | How can we integrate tools with custom prompt? | https://api.github.com/repos/langchain-ai/langchain/issues/14381/comments | 1 | 2023-12-07T06:45:10Z | 2024-03-16T16:12:36Z | https://github.com/langchain-ai/langchain/issues/14381 | 2,030,022,877 | 14,381 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
How can I remove citations from LangChain results, and can we get the image out of it?
### Suggestion:
How to remove citation from langchain results and can we get the image out of it? | Issue: How to remove citation from langchain results | https://api.github.com/repos/langchain-ai/langchain/issues/14380/comments | 1 | 2023-12-07T06:03:28Z | 2024-03-16T16:12:31Z | https://github.com/langchain-ai/langchain/issues/14380 | 2,029,974,064 | 14,380 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain version = 0.0.344
Python version = 3.11.5
@agola11 @hwchase17
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is my code, and it gives me the below error for create_sql_agent when I use the suffix variable.
A single string input was passed in, but this chain expects multiple inputs (set()). When a chain expects multiple inputs, please call it by passing in a dictionary, eg `chain({'foo': 1, 'bar': 2})
```
agent_inputs = {
    'prefix': MSSQL_AGENT_PREFIX,
    'format_instructions': MSSQL_AGENT_FORMAT_INSTRUCTIONS,
    'suffix': MSSQL_AGENT_SUFFIX,
    'llm': llm,
    'toolkit': toolkit,
    'top_k': 30,
    'early_stopping_method': 'generate',
    'handle_parsing_errors': True,
    'input_variables': ['question']
}
agent_executor_sql = create_sql_agent(**agent_inputs)
```
I also tried `'suffix': [MSSQL_AGENT_SUFFIX]` and `'suffix': str(MSSQL_AGENT_SUFFIX)`, yet the error persists. Kindly help.
### Expected behavior
It should take suffix and work. | create_sql_agent Suffix error | https://api.github.com/repos/langchain-ai/langchain/issues/14379/comments | 2 | 2023-12-07T05:39:25Z | 2024-03-29T16:07:00Z | https://github.com/langchain-ai/langchain/issues/14379 | 2,029,945,805 | 14,379 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.347
langchain-core==0.0.11
### Who can help?
@JeanBaptiste-dlb @hwchase17 @kacperlukawski
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The code I'm trying is based on: https://python.langchain.com/docs/integrations/vectorstores/qdrant
```
import os
directory_path = 'data/'
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Qdrant
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(openai_api_key="sk-XXX")
loader = TextLoader(os.path.join(directory_path, "concept-note.md"))
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
qdrant = Qdrant.from_documents(
docs,
embeddings,
location=":memory:", # Local mode with in-memory storage only
collection_name="my_documents",
)
```
The error:
```
Created a chunk of size 1893, which is longer than the specified 1000
Created a chunk of size 1728, which is longer than the specified 1000
Created a chunk of size 1317, which is longer than the specified 1000
Created a chunk of size 1464, which is longer than the specified 1000
Created a chunk of size 2119, which is longer than the specified 1000
Created a chunk of size 1106, which is longer than the specified 1000
Created a chunk of size 1822, which is longer than the specified 1000
Created a chunk of size 3658, which is longer than the specified 1000
Created a chunk of size 1233, which is longer than the specified 1000
Created a chunk of size 1522, which is longer than the specified 1000
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[3], line 20
16 docs = text_splitter.split_documents(documents)
17 docs
---> 20 qdrant = Qdrant.from_documents(
21 docs,
22 embeddings,
23 location=":memory:", # Local mode with in-memory storage only
24 collection_name="my_documents",
25 )
File /opt/conda/lib/python3.11/site-packages/langchain_core/vectorstores.py:510, in VectorStore.from_documents(cls, documents, embedding, **kwargs)
508 texts = [d.page_content for d in documents]
509 metadatas = [d.metadata for d in documents]
--> 510 return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File /opt/conda/lib/python3.11/site-packages/langchain/vectorstores/qdrant.py:1345, in Qdrant.from_texts(cls, texts, embedding, metadatas, ids, location, url, port, grpc_port, prefer_grpc, https, api_key, prefix, timeout, host, path, collection_name, distance_func, content_payload_key, metadata_payload_key, vector_name, batch_size, shard_number, replication_factor, write_consistency_factor, on_disk_payload, hnsw_config, optimizers_config, wal_config, quantization_config, init_from, on_disk, force_recreate, **kwargs)
1210 """Construct Qdrant wrapper from a list of texts.
1211
1212 Args:
(...)
1311 qdrant = Qdrant.from_texts(texts, embeddings, "localhost")
1312 """
1313 qdrant = cls.construct_instance(
1314 texts,
1315 embedding,
(...)
1343 **kwargs,
1344 )
-> 1345 qdrant.add_texts(texts, metadatas, ids, batch_size)
1346 return qdrant
File /opt/conda/lib/python3.11/site-packages/langchain/vectorstores/qdrant.py:190, in Qdrant.add_texts(self, texts, metadatas, ids, batch_size, **kwargs)
174 """Run more texts through the embeddings and add to the vectorstore.
175
176 Args:
(...)
187 List of ids from adding the texts into the vectorstore.
188 """
189 added_ids = []
--> 190 for batch_ids, points in self._generate_rest_batches(
191 texts, metadatas, ids, batch_size
192 ):
193 self.client.upsert(
194 collection_name=self.collection_name, points=points, **kwargs
195 )
196 added_ids.extend(batch_ids)
File /opt/conda/lib/python3.11/site-packages/langchain/vectorstores/qdrant.py:2136, in Qdrant._generate_rest_batches(self, texts, metadatas, ids, batch_size)
2122 # Generate the embeddings for all the texts in a batch
2123 batch_embeddings = self._embed_texts(batch_texts)
2125 points = [
2126 rest.PointStruct(
2127 id=point_id,
2128 vector=vector
2129 if self.vector_name is None
2130 else {self.vector_name: vector},
2131 payload=payload,
2132 )
2133 for point_id, vector, payload in zip(
2134 batch_ids,
2135 batch_embeddings,
-> 2136 self._build_payloads(
2137 batch_texts,
2138 batch_metadatas,
2139 self.content_payload_key,
2140 self.metadata_payload_key,
2141 ),
2142 )
2143 ]
2145 yield batch_ids, points
File /opt/conda/lib/python3.11/site-packages/langchain/vectorstores/qdrant.py:1918, in Qdrant._build_payloads(cls, texts, metadatas, content_payload_key, metadata_payload_key)
1912 raise ValueError(
1913 "At least one of the texts is None. Please remove it before "
1914 "calling .from_texts or .add_texts on Qdrant instance."
1915 )
1916 metadata = metadatas[i] if metadatas is not None else None
1917 payloads.append(
-> 1918 {
1919 content_payload_key: text,
1920 metadata_payload_key: metadata,
1921 }
1922 )
1924 return payloads
TypeError: unhashable type: 'list'
```
### Expected behavior
It should not raise an error. | Issue with Qdrant: TypeError: unhashable type: 'list' | https://api.github.com/repos/langchain-ai/langchain/issues/14378/comments | 8 | 2023-12-07T05:17:40Z | 2023-12-07T19:19:13Z | https://github.com/langchain-ai/langchain/issues/14378 | 2,029,922,881 | 14,378 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.344
langchain-experimental>=0.0.42
python==3.10.12
```
embeddings = OpenAIEmbeddings(model_name=model,
                              openai_api_key=get_model_path(model),
                              chunk_size=CHUNK_SIZE)
```
Error:
```
WARNING! model_name is not default parameter.
model_name was transferred to model_kwargs.
Please confirm that model_name is what you intended.
warnings.warn(
2023-12-07 12:36:12,645 - embeddings_api.py[line:39] - ERROR: Embeddings.create() got an unexpected keyword argument 'model_name'
AttributeError: 'NoneType' object has no attribute 'conjugate'
```
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(model_name="text-embedding-ada-002",
openai_api_key='xxxx',
chunk_size=512)
data = embeddings.embed_documents(texts)
```
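For comparison, the same call using the documented `model` parameter instead of `model_name` (a sketch; this seems to be what the warning is pointing at):
```
from langchain.embeddings.openai import OpenAIEmbeddings

# `model` is the expected parameter name; `model_name` gets moved into model_kwargs
# and is then forwarded to the OpenAI client, which rejects it.
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002",
                              openai_api_key='xxxx',
                              chunk_size=512)
data = embeddings.embed_documents(["some text"])
```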
### Expected behavior
Normal loading | OpenAIEmbeddings bug | https://api.github.com/repos/langchain-ai/langchain/issues/14377/comments | 3 | 2023-12-07T05:16:19Z | 2023-12-11T04:28:11Z | https://github.com/langchain-ai/langchain/issues/14377 | 2,029,920,632 | 14,377 |
[
"hwchase17",
"langchain"
] | ### Feature request
Add support for Private Service Connect endpoints in Vertex AI LLM.
### Motivation
Currently api_endpoint for VertexAIModelGarden LLM is hard coded to ```aiplatform.googleapis.com```
https://github.com/langchain-ai/langchain/blob/db6bf8b022c17353b46f97ab3b9f44ff9e88a488/libs/langchain/langchain/llms/vertexai.py#L380-L382
Google supports Private Service Connect or private endpoints to google services.
https://cloud.google.com/vpc/docs/configure-private-service-connect-apis#using-endpoints
Users can use private domains i.e. ```us-central1-aiplatform-xxxx.p.googleapis.com``` to call models in vertex ai.
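As a purely illustrative sketch of the proposed change (the keyword below is the one I intend to propose; it does not exist yet):
```
# Hypothetical usage once an endpoint override is supported.
from langchain.llms import VertexAIModelGarden

llm = VertexAIModelGarden(
    project="my-project",
    endpoint_id="1234567890",
    location="us-central1",
    # Proposed: override the default aiplatform.googleapis.com host,
    # e.g. for Private Service Connect endpoints.
    api_endpoint_base="us-central1-aiplatform-xxxx.p.googleapis.com",
)
```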
### Your contribution
I can create a PR to add api_endpoint_base to let users specify google api endpoint for VertexAIModelGarden.
Currently, pretrained models in ```vertexai.language_models``` don't support specifying a googleapis endpoint.
I'll also create an issue on vertex ai python sdk.
Changes to pretrained model can be made if necessary changes are made to vertex ai python sdk. | Add support for private endpoint(Private Service Connect) for Vertex AI LLM | https://api.github.com/repos/langchain-ai/langchain/issues/14374/comments | 1 | 2023-12-07T02:53:41Z | 2024-03-17T16:09:22Z | https://github.com/langchain-ai/langchain/issues/14374 | 2,029,776,848 | 14,374 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Can the `AgentType` parameter of the `initialize_agent` function only be of one type? How can I specify multiple types? I want to set the agent type to 'STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION' and 'ZERO_SHOT_REACT_DESCRIPTION' at the same time.
### Suggestion:
Set agent type to 'STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION' and 'ZERO_SHOT_REACT_DESCRIPTION' at the same time when create a agent? | Issue: How can we specify multiple types when initialize an agent | https://api.github.com/repos/langchain-ai/langchain/issues/14372/comments | 10 | 2023-12-07T02:10:55Z | 2023-12-07T05:41:04Z | https://github.com/langchain-ai/langchain/issues/14372 | 2,029,730,782 | 14,372 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Issue:
```
PS E:\CODE\research-agents> python app.py
Traceback (most recent call last):
File "E:\CODE\research-agents\app.py", line 3, in <module>
from langchain.text_splitter import RecursiveCharacterTextSplitter
ModuleNotFoundError: No module named 'langchain'
```
What I've tried:
- Upgrading to langchain newest version
- Downgrading to langchain version==0.0.340
- Adding python path to environment variables
- Fresh environment with Python 3.10.x
- Fresh environment with Python 3.11.5
System:
Python ver: 3.11.5 (Anaconda)
Langchain ver: 0.0.340
OS: Windows 10
Thoughts, suggestions, and tips are greatly appreciated. Thanks in advance!
### Suggestion:
_No response_ | Issue: No module named 'langchain' | https://api.github.com/repos/langchain-ai/langchain/issues/14371/comments | 9 | 2023-12-07T02:09:09Z | 2024-07-21T15:47:04Z | https://github.com/langchain-ai/langchain/issues/14371 | 2,029,727,951 | 14,371 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi, I'm trying to build a Chinese agent using customized tools and a prompt in Chinese. However, the pre-defined output parser and streaming didn't work well. For example, the thought process was printed out even though I'm using the FinalStreamingStdOutCallbackHandler. I was wondering if you can help me (1) understand how the prompt, output parser, and streaming work in an agent, and (2) provide some suggestions for writing my own prompt, output parser, and streaming class for Chinese processing.
### Suggestion:
_No response_ | Issue: agent output parser | https://api.github.com/repos/langchain-ai/langchain/issues/14363/comments | 1 | 2023-12-06T21:40:20Z | 2024-03-16T16:12:16Z | https://github.com/langchain-ai/langchain/issues/14363 | 2,029,429,941 | 14,363 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/integrations/chat/ollama_functions
Ollama functions page shows how to start a conversation where the llm calls a function.
But it stops there.
How do we keep the conversation going? (i.e. I should give the LLM the answer to the function call, and then it should give a text reply with its explanation of the weather in Paris, right?)
Here's what I tried
```
>>> from langchain_experimental.llms.ollama_functions import OllamaFunctions
>>> from langchain.schema import HumanMessage, FunctionMessage
>>> import json
>>> model = OllamaFunctions(model="mistral")
>>> model = model.bind(
... functions=[
... {
... "name": "get_current_weather",
... "description": "Get the current weather in a given location",
... "parameters": {
... "type": "object",
... "properties": {
... "location": {
... "type": "string",
... "description": "The city and state, " "e.g. San Francisco, CA",
... },
... "unit": {
... "type": "string",
... "enum": ["celsius", "fahrenheit"],
... },
... },
... "required": ["location"],
... },
... }
... ],
... function_call={"name": "get_current_weather"},
... )
>>>
>>> messages = [HumanMessage(content="how is the weather in Paris?")]
>>> aim = model.invoke(messages)
>>> aim
AIMessage(content='', additional_kwargs={'function_call': {'name': 'get_current_weather', 'arguments': '{"location": "Paris, FR", "unit": "celsius"}'}})
>>> messages.append(aim)
>>> fm = FunctionMessage(name='get_current_weather', content=json.dumps({'temperature': '25 celsius'}))
>>> messages.append(fm)
>>> aim = model.invoke(messages)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2871, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 160, in invoke
self.generate_prompt(
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 491, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 378, in generate
raise e
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 368, in generate
self._generate_with_cache(
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 524, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_experimental/llms/ollama_functions.py", line 90, in _generate
response_message = self.llm.predict_messages(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 685, in predict_messages
return self(messages, stop=_stop, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 632, in __call__
generation = self.generate(
^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 378, in generate
raise e
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 368, in generate
self._generate_with_cache(
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 524, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain/chat_models/ollama.py", line 97, in _generate
prompt = self._format_messages_as_text(messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain/chat_models/ollama.py", line 70, in _format_messages_as_text
[self._format_message_as_text(message) for message in messages]
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain/chat_models/ollama.py", line 70, in <listcomp>
[self._format_message_as_text(message) for message in messages]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tony/.pyenv/versions/functionary/lib/python3.11/site-packages/langchain/chat_models/ollama.py", line 65, in _format_message_as_text
raise ValueError(f"Got unknown type {message}")
ValueError: Got unknown type content='{"temperature": "25 celsius"}' name='get_current_weather'
>>>
```
### Idea or request for content:
_No response_ | DOC: Explain how to continue the conversation with OllamaFunctions | https://api.github.com/repos/langchain-ai/langchain/issues/14360/comments | 12 | 2023-12-06T21:17:30Z | 2024-06-13T16:07:37Z | https://github.com/langchain-ai/langchain/issues/14360 | 2,029,399,684 | 14,360 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain version: 0.0.315
python3.9
### Who can help?
@hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
splitter = CharacterTextSplitter(
separator="\n",
chunk_size=1000,
chunk_overlap=i0
)
paragraphs = splitter.split_text("This is the first paragraph.\n\nThis is the second paragraph.")
print(paragraphs)
paragraphs = splitter.split_text("This is the first paragraph.\n \nThis is the second paragraph.")
print(paragraphs)
```
Returns
```
This is the first paragraph.\nThis is the second paragraph. # seems wrong, as it omits a newline character.
This is the first paragraph.\n \nThis is the second paragraph. #correct
```
### Expected behavior
```
This is the first paragraph.\n\nThis is the second paragraph.
This is the first paragraph.\n \nThis is the second paragraph.
```
| Unexpected behaviour: CharacterTextSpitter | https://api.github.com/repos/langchain-ai/langchain/issues/14348/comments | 3 | 2023-12-06T16:02:29Z | 2024-03-17T16:09:11Z | https://github.com/langchain-ai/langchain/issues/14348 | 2,028,892,316 | 14,348 |
[
"hwchase17",
"langchain"
] | ### System Info
I want to use `with_retry` from the Runnable class with the Bedrock class to retry when Bedrock raises a ThrottlingException (too many requests). The problem is that the error catching in the `BedrockBase` class, in the `_prepare_input_and_invoke` method, is too broad (line 269):
```
except Exception as e:
raise ValueError(f"Error raised by bedrock service: {e}")
```
Is it possible to use something like :
```
except Exception as e:
raise ValueError(f"Error raised by bedrock service: {e}") from e
```
Or:
```
except Exception as e:
raise ValueError(f"Error raised by bedrock service: {e}")with_traceback(e.__traceback__)
```
To keep the initial exception type.
Am I missing something that would explain this broad error catching?
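For context, the retry pattern I am trying to use looks roughly like this (a sketch; it only makes sense if the original botocore exception type, or at least a distinguishable one, survives the wrapping):
```
from langchain.llms import Bedrock

llm = Bedrock(model_id="anthropic.claude-v2")

# Retry a few times with backoff when the call fails.
# Ideally retry_if_exception_type would target only the ThrottlingException,
# but the broad `raise ValueError(...)` hides the original type.
resilient_llm = llm.with_retry(
    retry_if_exception_type=(ValueError,),
    stop_after_attempt=3,
)
```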
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
No reproduction needed
### Expected behavior
Expected behavior explained previously | Error catching in BedrockBase | https://api.github.com/repos/langchain-ai/langchain/issues/14347/comments | 3 | 2023-12-06T14:54:05Z | 2024-01-03T01:25:50Z | https://github.com/langchain-ai/langchain/issues/14347 | 2,028,739,597 | 14,347 |
[
"hwchase17",
"langchain"
] | ### System Info
I have been using OpenAI embeddings, specifically text-embedding-ada-002, and noticed they are very sensitive even to punctuation. I have around 1000 chunks and each time need to extract the 15 most similar chunks to my query. I have been testing my query without punctuation, and when I add a dot '.' at the end of the query it changes the initial set I got from the retriever for the query without punctuation (some chunks are the same, but new ones may appear or the initial order is different).
- Have you noticed anything similar?
- Is it the basic behaviour of this embedding to be that sensitive to punctuation?
- Is there a way to make it more robust to minor changes in the query?
FYI: I am using PGVector to store my chunk vectors.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
-
### Expected behavior
- | Embedding very sensitive to punctuation | https://api.github.com/repos/langchain-ai/langchain/issues/14346/comments | 1 | 2023-12-06T13:00:05Z | 2024-03-16T16:12:06Z | https://github.com/langchain-ai/langchain/issues/14346 | 2,028,506,571 | 14,346 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain version: 0.0.346
Python version: 3.9.16
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`PythonREPL`, which has been moved to `experimental`, still exists in the base library under the path:
`libs/langchain/langchain/utilities/python.py`
which triggers security-scan vulnerabilities (the `exec()` call) and doesn't allow us to use the package in the production environment.
Since
https://nvd.nist.gov/vuln/detail/CVE-2023-39631
should most likely be closed soon, this is the only vulnerability that would have to be addressed so we can freely use `langchain`.
### Expected behavior
`PythonREPL` should only exist in `experimental` version of `langchain` | `PythonREPL` removal from langchain library | https://api.github.com/repos/langchain-ai/langchain/issues/14345/comments | 5 | 2023-12-06T12:20:42Z | 2024-05-22T17:48:58Z | https://github.com/langchain-ai/langchain/issues/14345 | 2,028,418,418 | 14,345 |
[
"hwchase17",
"langchain"
] | ### Feature request
I have created a multi-model inference endpoint in the new version of SageMaker Studio (the original one is now called Studio Classic). I don't see a place where I can set the `inference component` in the `SagemakerEndpoint` class, so I end up getting the expected error from SageMaker.
`An error occurred (ValidationError) when calling the InvokeEndpointWithResponseStream operation: Inference Component Name header is required for endpoints to which you plan to deploy inference components. Please include Inference Component Name header or consider using SageMaker models.`
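For reference, this is roughly the call I would expect to work (a sketch, under the assumption that `endpoint_kwargs` is forwarded verbatim to `invoke_endpoint` and that the installed boto3 already accepts `InferenceComponentName`):
```
from langchain.llms import SagemakerEndpoint

llm = SagemakerEndpoint(
    endpoint_name="my-multi-model-endpoint",
    region_name="us-east-1",
    content_handler=content_handler,  # assumed to be defined elsewhere
    # Assumption: this reaches sagemaker-runtime invoke_endpoint unchanged.
    endpoint_kwargs={"InferenceComponentName": "my-inference-component"},
)
```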
### Motivation
To support multi-model endpoints in SageMaker, which are a cost-efficient way to run models.
### Your contribution
I can test and verify. | Support SageMaker Inference Component of multi model endpoint | https://api.github.com/repos/langchain-ai/langchain/issues/14344/comments | 3 | 2023-12-06T11:51:12Z | 2024-01-12T02:36:39Z | https://github.com/langchain-ai/langchain/issues/14344 | 2,028,370,080 | 14,344 |
[
"hwchase17",
"langchain"
] | ### System Info
python3.10.13
langchain==0.0.346
langchain-core==0.0.10
### Who can help?
@agola11, @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.chat_models import ChatOpenAI
from langchain.agents.format_scratchpad import format_to_openai_function_messages
text_with_call = """You are a helpful assistant. Here is a function call that you should not imitate: <functioncall> {"name":"generate_anagram", "arguments": {"word": "listen"}}
"""
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
text_with_call,
),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_to_openai_function_messages(
x["intermediate_steps"]
),
}
| prompt
| llm
| OpenAIFunctionsAgentOutputParser()
)
output = agent.invoke(
{
"input": "Hello",
"intermediate_steps": [],
}
)
```
### Expected behavior
I would expect not to get an error.
I get the error: `KeyError: 'Input to ChatPromptTemplate is missing variable \'"name"\'. Expected: [\'"name"\', \'agent_scratchpad\', \'input\'] Received: [\'input\', \'agent_scratchpad\']'`
> I think this error is caused by the f-string prompt template recognising the braces inside the prompt as additional variables that need an input.
I tried using a Jinja2 template for my prompt instead, but I cannot set up this option in `ChatPromptTemplate`.
I understand this could be for security reasons, as mentioned in [this issue](https://github.com/langchain-ai/langchain/issues/4394)
So the possible solutions I see:
- Use PromptTemplate like:
```
prompt = PromptTemplate.from_template(
text_with_call, template_format="jinja2"
)
```
But I would like to make use of the `agent_scratchpad`, so the ChatPromptTemplate is needed in my case.
- Change ChatPromptTemplate class to support jinja2 template, which I understand could not be done
- Re-implement my custom ChatPromptTemplate
- Find another way to accept this prompt without falsely flagging prompt elements as input variables (one possible sketch below).
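One possible sketch for that last option: keep `ChatPromptTemplate` and escape the literal braces so the f-string templating treats them as plain text (doubled braces are the f-string escape; the JSON is only illustrative):
```
text_with_call = """You are a helpful assistant. Here is a function call that you should not imitate: <functioncall> {{"name":"generate_anagram", "arguments": {{"word": "listen"}}}}
"""

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", text_with_call),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)
# Now only `input` and `agent_scratchpad` are detected as input variables.
```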
Do you have any ideas? Thanks for your help 😃 | Specific prompt adds false input variables | https://api.github.com/repos/langchain-ai/langchain/issues/14343/comments | 2 | 2023-12-06T11:35:34Z | 2023-12-06T11:48:20Z | https://github.com/langchain-ai/langchain/issues/14343 | 2,028,344,953 | 14,343 |
[
"hwchase17",
"langchain"
] | ### System Info
I tried this example code:
```
from langchain.retrievers import ParentDocumentRetriever
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.storage import InMemoryStore
# This text splitter is used to create the parent documents
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
# This text splitter is used to create the child documents
# It should create documents smaller than the parent
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
# The vectorstore to use to index the child chunks
vectorstore = Chroma(embedding_function=OpenAIEmbeddings())
# The storage layer for the parent documents
store = InMemoryStore()
vectorstore = Chroma(collection_name="test", embedding_function=OpenAIEmbeddings())
```
```
# Initialize the retriever
parent_document_retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
)
```
but I encountered an error:
```
1 # Initialize the retriever
----> 2 parent_document_retriever = ParentDocumentRetriever(
3 vectorstore=vectorstore,
4 docstore=store,
5 child_splitter=child_splitter,
TypeError: MultiVectorRetriever.__init__() got an unexpected keyword argument 'child_splitter'
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.retrievers import ParentDocumentRetriever
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.storage import InMemoryStore
# This text splitter is used to create the parent documents
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)
# This text splitter is used to create the child documents
# It should create documents smaller than the parent
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
# The vectorstore to use to index the child chunks
vectorstore = Chroma(embedding_function=OpenAIEmbeddings())
# The storage layer for the parent documents
store = InMemoryStore()
vectorstore = Chroma(collection_name="test", embedding_function=OpenAIEmbeddings())
# Initialize the retriever
parent_document_retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=store,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
)
```
### Expected behavior
I can run. | Error: | https://api.github.com/repos/langchain-ai/langchain/issues/14342/comments | 5 | 2023-12-06T11:09:11Z | 2023-12-06T19:12:51Z | https://github.com/langchain-ai/langchain/issues/14342 | 2,028,301,021 | 14,342 |
[
"hwchase17",
"langchain"
] | ### System Info
Windows 10 Pro - 22H2 - Build 9045.3693 - Windows Feature Experience Pack 1000.19053.1000.0
Python 3.11.5
langchain-cli 0.0.19
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. creating a virtual environments with python - m venv .venv and activate it;
2. installing langchain-cli with pip install -U "langchain-cli[serve]"
3. launch langchain app new qdrant-app --package self-query-qdrant
### Expected behavior
As with other templates that I have installed, I expected to find the .py files of the app in the packages directory, but it's empty.
[log.txt](https://github.com/langchain-ai/langchain/files/13580197/log.txt)
| When I install the template "self-query-qdrant" I get this error: UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 132: character maps to <undefined> | https://api.github.com/repos/langchain-ai/langchain/issues/14341/comments | 5 | 2023-12-06T10:42:16Z | 2024-03-18T16:07:29Z | https://github.com/langchain-ai/langchain/issues/14341 | 2,028,253,492 | 14,341 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.299, Python 3.8, gpt-4
When AzureChatOpenAI uses gpt-4 and I use agents to deal with problems, a RateLimitError is raised; with gpt-3.5 this issue does not appear.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
llm = AzureChatOpenAI(
    openai_api_base=api_base,
    openai_api_version=api_version,
    deployment_name=deployment_name,
    openai_api_key=api_token,
    openai_api_type="azure",
    max_tokens=max_tokens,
    model_name=model_name,
)
memory = ConversationBufferMemory(memory_key="chat_history")
agent = create_pandas_dataframe_agent(
    llm,
    df,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory
)
question = "what are the top 2 most polluted counties"
res = agent.run(question)
```
### Expected behavior
Tell me what caused this error and how to avoid it
| Requests to the Creates a completion for the chat message Operation under Azure OpenAI API version 2023-03-15-preview have exceeded token rate limit of your current OpenAI S0 pricing tier. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit. | https://api.github.com/repos/langchain-ai/langchain/issues/14339/comments | 1 | 2023-12-06T10:29:14Z | 2024-03-16T16:11:56Z | https://github.com/langchain-ai/langchain/issues/14339 | 2,028,230,943 | 14,339 |
[
"hwchase17",
"langchain"
] | ### System Info
- LangChain version: 0.0.346
- Platform: Mac mini M1 16GB - macOS Sonoma 14.0
- Python version: 3.11
- LiteLLM version: 1.10.6
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chat_models import ChatLiteLLM
# Code for initializing the ChatLiteLLM instance
chat_model = ChatLiteLLM(api_base="https://custom.endpoints.huggingface.cloud", model="huggingface/Intel/neural-chat-7b-v3-1")
# Make a call to LiteLLM
text = "What would be a good company name for a company that makes colorful socks?"
messages = [HumanMessage(content=text)]
print(chat_model(messages).content)
```
Error:
```
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/litellm/utils.py", line 4919, in handle_huggingface_chunk
raise ValueError(chunk)
ValueError: {"error":"The model Intel/neural-chat-7b-v3-1 is too large to be loaded automatically (14GB > 10GB). Please use Spaces (https://huggingface.co/spaces) or Inference Endpoints (https://huggingface.co/inference-endpoints)."}
```
Is the same error if:
`chat_model = ChatLiteLLM(model="huggingface/Intel/neural-chat-7b-v3-1")`
So api_base parameter not properly propagated in client calls in ChatLiteLLM.
### Expected behavior
I would expect the ChatLiteLLM instance to correctly utilize the api_base parameter when making requests to the LiteLLM client. This should enable using models larger than the default size limit without encountering the error message about model size limits.
Notably, if I explicitly add the api_base argument in chat_models/litellm.py on line 239 (e.g., `return self.client.completion(api_base=self.api_base, **kwargs)`), the problem is resolved. This suggests that the api_base argument is not being correctly passed through **kwargs. | api_base parameter not properly propagated in client calls in ChatLiteLLM | https://api.github.com/repos/langchain-ai/langchain/issues/14338/comments | 7 | 2023-12-06T09:13:46Z | 2023-12-07T11:20:13Z | https://github.com/langchain-ai/langchain/issues/14338 | 2,028,088,277 | 14,338 |
[
"hwchase17",
"langchain"
] | ### System Info
Trying to execute the chatbot script with sagemaker endpoint of LLAMA2 llm model getting dict validation error for RetrievalQA
Request:
```
def retreiveFromLL(userQuery: str) -> QueryResponse:
    pre_prompt = """[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Answer exactly in detail from the context
<</SYS>>
Answer the question below from context below :
"""
    prompt = pre_prompt + "CONTEXT:\n\n{context}\n" + "Question : {question}" + "[\INST]"
    llama_prompt = PromptTemplate(template=prompt, input_variables=["context", "question"])
    chain_type_kwargs = {"prompt": llama_prompt}

    embeddings = SentenceTransformerEmbeddings(model_name=EMBEDDING_MODEL)

    # Initialize PGVector index
    vector_db = PGVector(
        embedding_function=embeddings,
        collection_name='CSE_runbooks',
        connection_string=CONNECTION_STRING,
    )
    print("**Invoking PGVector")

    # Custom ContentHandler to handle input and output to the SageMaker Endpoint
    class LlamaChatContentHandler(LLMContentHandler):
        content_type = "application/json"
        accepts = "application/json"

        def transform_input(self, inputs: str, model_kwargs: Dict = {}) -> bytes:
            payload = {
                "inputs": pre_prompt,
                "parameters": {"max_new_tokens": 2000, "top_p": 0.9, "temperature": 0.1}}
            input_str = ' '.join(inputs)
            input_str = json.dumps(payload)
            print(payload)
            return input_str.encode("utf-8")

        def transform_output(self, output: bytes) -> str:
            response_json = json.loads(output.read().decode("utf-8"))
            content = response_json[0]["generated_text"]
            return content

    # Initialize SagemakerEndpoint
    print("Invoking LLM SageMaker Endpoint")
    llm = SagemakerEndpoint(
        endpoint_name=LLAMA2_ENDPOINT,
        region_name=AWS_REGION,
        content_handler=LlamaChatContentHandler(),
        callbacks=[StreamingStdOutCallbackHandler()],
        endpoint_kwargs={"CustomAttributes": "accept_eula=true"},
    )
    print(llm)

    # Create a RetrievalQA instance with Pinecone as the retriever
    query = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=vector_db, return_source_documents=True, chain_type_kwargs=chain_type_kwargs)
    print("**Invoking query")
    result = query({"query": userQuery})
    response = result["result"]
```
Error:
```
Traceback (most recent call last):
  File "/home/ec2-user/.local/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
    exec(code, module.__dict__)
  File "/home/ec2-user/milvus/qa_UI.py", line 26, in <module>
    userResponse = getLLMResponse(user_input)
  File "/home/ec2-user/milvus/getLLMResponse1.py", line 37, in getLLMResponse
    userResponse = retreiveFromLL(userQuery)
  File "/home/ec2-user/milvus/getLLMResponse1.py", line 97, in retreiveFromLL
    query = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=vector_db, return_source_documents=True, chain_type_kwargs=chain_type_kwargs)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain/chains/retrieval_qa/base.py", line 103, in from_chain_type
    return cls(combine_documents_chain=combine_documents_chain, **kwargs)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain/load/serializable.py", line 97, in __init__
    super().__init__(**kwargs)
  File "/home/ec2-user/.local/lib/python3.9/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for RetrievalQA
retriever
  value is not a valid dict (type=type_error.dict)
```
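For reference, `RetrievalQA` expects a `BaseRetriever` in the `retriever` field rather than a raw vector store; a sketch of the same call going through the store's retriever wrapper (assuming this is the mismatch being flagged):
```
query = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vector_db.as_retriever(),  # wrap the PGVector store in a retriever
    return_source_documents=True,
    chain_type_kwargs=chain_type_kwargs,
)
```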
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Execute the code
### Expected behavior
response from LLM | Dict validation error | https://api.github.com/repos/langchain-ai/langchain/issues/14337/comments | 28 | 2023-12-06T08:43:25Z | 2024-06-29T16:16:49Z | https://github.com/langchain-ai/langchain/issues/14337 | 2,028,038,022 | 14,337 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I'm exploring the use of open-source models via Ollama's excellent application, and I'm facing quite some challenges in adapting the instructions that were written for OpenAI-type models (they are perfectly interchangeable, I must say).
Are the format_instructions generated by the PydanticOutputParser supposed to work for non-OpenAI models as well?
With Zephyr the model keeps returning the answer AND the JSON schema, even when I try to stop it from doing that...
### Idea or request for content:
_No response_ | Get JSON output from non-OpenAI models | https://api.github.com/repos/langchain-ai/langchain/issues/14335/comments | 2 | 2023-12-06T07:50:44Z | 2024-03-17T16:09:02Z | https://github.com/langchain-ai/langchain/issues/14335 | 2,027,899,154 | 14,335 |
[
"hwchase17",
"langchain"
] | ### System Info
Deployed on Cloud Run
```
Python 3.10
langchain 0.0.345
langchain-cli 0.0.19
langchain-core 0.0.9
langchain-experimental 0.0.43
langdetect 1.0.9
langserve 0.0.32
langsmith 0.0.69
```
```
google-ai-generativelanguage 0.3.3
google-api-core 2.14.0
google-api-python-client 2.109.0
google-auth 2.24.0
google-auth-httplib2 0.1.1
google-auth-oauthlib 1.1.0
google-cloud-aiplatform 1.36.4
google-cloud-bigquery 3.13.0
google-cloud-core 2.3.3
google-cloud-discoveryengine 0.11.3
google-cloud-pubsub 2.18.4
google-cloud-resource-manager 1.10.4
google-cloud-storage 2.13.0
google-crc32c 1.5.0
google-generativeai 0.2.2
google-resumable-media 2.6.0
googleapis-common-protos 1.61.0
```
Deployed via the Langchain template here:
https://github.com/langchain-ai/langchain/tree/master/templates/rag-google-cloud-vertexai-search
### Who can help?
I think @holtskinner has been fixing Google-related stuff or knows someone who can.
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Add https://github.com/langchain-ai/langchain/tree/master/templates/rag-google-cloud-vertexai-search to a LangServe server.
Run it in the playground with any input and you get the error:
AttributeError("'ProtoType' object has no attribute 'DESCRIPTOR'")
Langsmith chain:
```
{
"id": [
"langchain_core",
"runnables",
"RunnableSequence"
],
"lc": 1,
"type": "constructor",
"kwargs": {
"last": {
"id": [
"langchain_core",
"output_parsers",
"string",
"StrOutputParser"
],
"lc": 1,
"type": "constructor",
"kwargs": {}
},
"first": {
"id": [
"langchain_core",
"runnables",
"RunnableParallel"
],
"lc": 1,
"type": "constructor",
"kwargs": {
"steps": {
"context": {
"id": [
"langchain",
"retrievers",
"google_vertex_ai_search",
"GoogleVertexAISearchRetriever"
],
"lc": 1,
"repr": "GoogleVertexAISearchRetriever(project_id='project-id', data_store_id='datastore-id')",
"type": "not_implemented"
},
"question": {
"id": [
"langchain_core",
"runnables",
"RunnablePassthrough"
],
"lc": 1,
"type": "constructor",
"kwargs": {
"func": null,
"afunc": null,
"input_type": null
}
}
}
}
},
"middle": [
{
"id": [
"langchain_core",
"prompts",
"chat",
"ChatPromptTemplate"
],
"lc": 1,
"type": "constructor",
"kwargs": {
"messages": [
{
"id": [
"langchain_core",
"prompts",
"chat",
"HumanMessagePromptTemplate"
],
"lc": 1,
"type": "constructor",
"kwargs": {
"prompt": {
"id": [
"langchain_core",
"prompts",
"prompt",
"PromptTemplate"
],
"lc": 1,
"type": "constructor",
"kwargs": {
"template": "Answer the question based only on the following context:\n{context}\nQuestion: {question}\n",
"input_variables": [
"context",
"question"
],
"template_format": "f-string",
"partial_variables": {}
}
}
}
}
],
"input_variables": [
"context",
"question"
]
}
},
{
"id": [
"langchain",
"chat_models",
"vertexai",
"ChatVertexAI"
],
"lc": 1,
"type": "constructor",
"kwargs": {
"model_name": "chat-bison",
"temperature": 0
}
}
]
}
}
```
The traceback comes back as:
```
File "/usr/local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 157, in _aget_relevant_documents
return await asyncio.get_running_loop().run_in_executor(
File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.10/site-packages/langchain/retrievers/google_vertex_ai_search.py", line 343, in _get_relevant_documents
search_request = self._create_search_request(query)
File "/usr/local/lib/python3.10/site-packages/langchain/retrievers/google_vertex_ai_search.py", line 327, in _create_search_request
return SearchRequest(
File "/usr/local/lib/python3.10/site-packages/proto/message.py", line 570, in __init__
pb_value = marshal.to_proto(pb_type, value)
File "/usr/local/lib/python3.10/site-packages/proto/marshal/marshal.py", line 222, in to_proto
proto_type.DESCRIPTOR.has_options
AttributeError: 'ProtoType' object has no attribute 'DESCRIPTOR'
```
### Expected behavior
The call to https://github.com/langchain-ai/langchain/blob/e1ea1912377ca7c013e89fac4c1d26c0cb836009/libs/langchain/langchain/retrievers/google_vertex_ai_search.py#L327
returns correctly with some documents. | GoogleVertexAISearchRetriever - AttributeError: 'ProtoType' object has no attribute 'DESCRIPTOR' | https://api.github.com/repos/langchain-ai/langchain/issues/14333/comments | 1 | 2023-12-06T07:13:51Z | 2024-03-17T16:08:58Z | https://github.com/langchain-ai/langchain/issues/14333 | 2,027,823,831 | 14,333 |
[
"hwchase17",
"langchain"
] | ### System Info
I already created a venv and re-installed langchain (`pip install langchain`), but I get this error:
> from langserve import add_routes
> File "/usr/local/lib/python3.10/dist-packages/langserve/__init__.py", line 7, in <module>
> from langserve.client import RemoteRunnable
> File "/usr/local/lib/python3.10/dist-packages/langserve/client.py", line 29, in <module>
> from langchain.callbacks.tracers.log_stream import RunLogPatch
> ModuleNotFoundError: No module named 'langchain.callbacks.tracers.log_stream'
>
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install langchain
$ pip install google-generativeai
$ pip install fastapi
$ pip install uvicorn
$ pip install langserve
$ `uvicorn main:app --host 0.0.0.0 --port 8000`
### Expected behavior
App run with langserve without error | No module named 'langchain.callbacks.tracers.log_stream' | https://api.github.com/repos/langchain-ai/langchain/issues/14330/comments | 4 | 2023-12-06T04:00:41Z | 2024-03-17T16:08:51Z | https://github.com/langchain-ai/langchain/issues/14330 | 2,027,603,413 | 14,330 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I cannot figure out why validate_tools_single_input is necessary. I've tried to adopt multi-input tools in CHAT_CONVERSATIONAL_REACT_DESCRIPTION agents by just commenting out the implementation of that function, and everything seemed fine.
So what is the purpose of this design, and is it possible to use multi-input tools in a more flexible setup?
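For context, the kind of multi-input tool I am experimenting with looks roughly like this (a sketch):
```
from langchain.tools import StructuredTool

def lookup_order(customer_id: str, order_id: str) -> str:
    """Look up an order for a customer."""
    return f"order {order_id} for customer {customer_id}"

# A tool whose schema has two arguments; validate_tools_single_input rejects this
# for CHAT_CONVERSATIONAL_REACT_DESCRIPTION even though the tool itself works.
multi_input_tool = StructuredTool.from_function(lookup_order)
```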
### Suggestion:
_No response_ | Issue: The purpose to validate_tools_single_input in CHAT_CONVERSATIONAL_REACT_DESCRIPTION agent | https://api.github.com/repos/langchain-ai/langchain/issues/14329/comments | 2 | 2023-12-06T03:49:44Z | 2024-03-17T16:08:47Z | https://github.com/langchain-ai/langchain/issues/14329 | 2,027,594,343 | 14,329 |
[
"hwchase17",
"langchain"
] | ### Feature request
It seems the langchain library only supports anonymizing. The native Microsoft library can redact data:
```
analyzer_results = analyzer.analyze(text=text_to_anonymize,
entities=["PERSON", "PHONE_NUMBER", "EMAIL_ADDRESS", "URL", "LOCATION"],
ad_hoc_recognizers=ad_hoc_recognizers,
language='en')
anonymized_results = anonymizer.anonymize(
text=text_to_anonymize,
analyzer_results=analyzer_results,
operators={"DEFAULT": OperatorConfig("redact", {})}
)
```
I don't see a way to do this via langchain_experimental.data_anonymizer
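As a sketch of what I am after (assuming the LangChain wrapper exposes, or could expose, Presidio's operator configuration — I could not find this documented):
```
from presidio_anonymizer.entities import OperatorConfig
from langchain_experimental.data_anonymizer import PresidioAnonymizer

# Assumption: an `operators` argument forwarded to the underlying Presidio
# AnonymizerEngine, so "redact" can be used instead of "replace".
anonymizer = PresidioAnonymizer(
    analyzed_fields=["PERSON", "PHONE_NUMBER", "EMAIL_ADDRESS"],
    operators={"DEFAULT": OperatorConfig("redact", {})},
)
print(anonymizer.anonymize("Call John Doe at 555-010-0000"))
```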
### Motivation
Redaction is cleaner than anonymizing. It's better than replacing first names with gibberish for my use case.
### Your contribution
I can help provide use-cases, like i did above. | Allow PresidioAnonymizer() to redact data instead of anonymizing. | https://api.github.com/repos/langchain-ai/langchain/issues/14328/comments | 3 | 2023-12-06T03:35:45Z | 2023-12-27T11:33:34Z | https://github.com/langchain-ai/langchain/issues/14328 | 2,027,583,278 | 14,328 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I tried to run the notebook [Semi_structured_multi_modal_RAG_LLaMA2.ipynb](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb). I installed and imported the required libraries, but an error message was returned as a result of running the below code:
```
from typing import Any
from pydantic import BaseModel
from unstructured.partition.pdf import partition_pdf
path = "/home/nickjtay/LLaVA/"
raw_pdf_elements = partition_pdf(
filename=path + "LLaVA.pdf",
# Using pdf format to find embedded image blocks
extract_images_in_pdf=True,
# Use layout model (YOLOX) to get bounding boxes (for tables) and find titles
# Titles are any sub-section of the document
infer_table_structure=True,
# Post processing to aggregate text once we have the title
chunking_strategy="by_title",
# Chunking params to aggregate text blocks
# Attempt to create a new chunk 3800 chars
# Attempt to keep chunks > 2000 chars
# Hard max on chunks
max_characters=4000,
new_after_n_chars=3800,
combine_text_under_n_chars=2000,
image_output_dir_path=path,
)
```
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[5], line 4
1 from typing import Any
3 from pydantic import BaseModel
----> 4 from unstructured.partition.pdf import partition_pdf
5 import pdfminer
6 print(pdfminer.utils.__version__)
File ~/Projects/langtest1/lib/python3.10/site-packages/unstructured/partition/pdf.py:40
38 from pdfminer.pdfparser import PSSyntaxError
39 from pdfminer.pdftypes import PDFObjRef
---> 40 from pdfminer.utils import open_filename
41 from PIL import Image as PILImage
43 from unstructured.chunking.title import add_chunking_strategy
ImportError: cannot import name 'open_filename' from 'pdfminer.utils' (/home/nickjtay/Projects/langtest1/lib/python3.10/site-packages/pdfminer/utils.py)
```
### Suggestion:
_No response_ | Issue: <Please write a coImportError: cannot import name 'open_filename' from 'pdfminer.utils' (/home/nickjtay/Projects/langtest1/lib/python3.10/site-packages/pdfminer/utils.py)mprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/14326/comments | 6 | 2023-12-06T02:23:04Z | 2024-06-28T16:05:53Z | https://github.com/langchain-ai/langchain/issues/14326 | 2,027,520,337 | 14,326 |
[
"hwchase17",
"langchain"
] | ### Feature request
Currently when getting started developing, there is:
- `extras`: `extended_testing`
- `poetry`: `dev`, `test`, `test_integration`
There is also an annoying `rapidfuzz` error one can hit: https://github.com/langchain-ai/langchain/issues/12237
Can we have the poetry groups reusing `extras`?
### Motivation
Less moving parts, and a more clear installation workflow
### Your contribution
I propose:
- Removing `dev` group
- Having the `test` group install the `extended_testing` extra | Better synergy with `poetry` groups and extras | https://api.github.com/repos/langchain-ai/langchain/issues/14321/comments | 1 | 2023-12-05T22:51:16Z | 2024-03-16T16:11:31Z | https://github.com/langchain-ai/langchain/issues/14321 | 2,027,269,893 | 14,321 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I believe the Oobabooga Text Generation Web UI API was rewritten, causing the code on the TextGen page of the Langchain docs to stop working.
e.g.: the way the code handles talking to a ws: causes a 403. I can execute API calls that work well, e.g.:
```
curl http://127.0.0.1:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "Hello! Who are you?"
      }
    ],
    "mode": "chat",
    "character": "Example"
  }'
```
works, while `llm_chain.run(question)` returns a 403 (failed handshake).
### Idea or request for content:
It would be awesome if this would be fixed. If not, please pull the page. | DOC: TextGen (Text Generation Web UI) - the code no longer works. | https://api.github.com/repos/langchain-ai/langchain/issues/14318/comments | 10 | 2023-12-05T22:22:39Z | 2024-05-16T11:09:38Z | https://github.com/langchain-ai/langchain/issues/14318 | 2,027,229,878 | 14,318 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi,
Currently, I want to build a RAG chatbot for production.
I already have my LLM API, and I want to create a custom LLM and then use it in the RetrievalQA.from_chain_type function.
```
curl --location 'https://myhost:10001/llama/api' -k \
--header 'Content-Type: application/json' \
--data-raw '{
"inputs": "[INST] Question: Who is Albert Einstein? \n Answer: [/INST]",
"parameters": {"max_new_tokens":100},
"token": "abcdfejkwehr"
}'
```
I don't know whether LangChain supports this in my case.
I read about this topic on reddit: https://www.reddit.com/r/LangChain/comments/17v1rhv/integrating_llm_rest_api_into_a_langchain/
And in langchain document: https://python.langchain.com/docs/modules/model_io/llms/custom_llm
But this still does not work when I apply the custom LLM to qa_chain.
Below is my code; I hope you can help. Sorry for my language, English is not my mother tongue.
```
from pydantic import Extra
import requests
from typing import Any, List, Mapping, Optional
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
class LlamaLLM(LLM):
llm_url = 'https:/myhost/llama/api'
class Config:
extra = Extra.forbid
@property
def _llm_type(self) -> str:
return "Llama2 7B"
def _call(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> str:
if stop is not None:
raise ValueError("stop kwargs are not permitted.")
payload = {
"inputs": prompt,
"parameters": {"max_new_tokens": 100},
"token": "abcdfejkwehr"
}
headers = {"Content-Type": "application/json"}
response = requests.post(self.llm_url, json=payload, headers=headers, verify=False)
response.raise_for_status()
# print("API Response:", response.json())
return response.json()['generated_text'] # get the response from the API
@property
def _identifying_params(self) -> Mapping[str, Any]:
"""Get the identifying parameters."""
return {"llmUrl": self.llm_url}
```
```
llm = LlamaLLM()
```
```
#Testing
prompt = "[INST] Question: Who is Albert Einstein? \n Answer: [/INST]"
result = llm._call(prompt)
print(result)
Albert Einstein (1879-1955) was a German-born theoretical physicist who is widely regarded as one of the most influential scientists of the 20th century. He is best known for his theory of relativity, which revolutionized our understanding of space and time, and his famous equation E=mc².
```
```
# Build prompt
from langchain.prompts import PromptTemplate
template = """[INST] <<SYS>>
Answer the question based on the context below.
<</SYS>>
Context: {context}
Question: {question}
Answer:
[/INST]"""
QA_CHAIN_PROMPT = PromptTemplate(input_variables=["context", "question"],template=template,)
# Run chain
from langchain.chains import RetrievalQA
qa_chain = RetrievalQA.from_chain_type(llm,
verbose=True,
# retriever=vectordb.as_retriever(),
retriever=custom_retriever,
return_source_documents=True,
chain_type_kwargs={"prompt": QA_CHAIN_PROMPT})
```
```
question = "Is probability a class topic?"
result = qa_chain({"query": question})
result["result"]
Encountered some errors. Please recheck your request!
```
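One thing I still plan to try is a standalone version of the call with extra logging, so I can see what the API actually returns when the chain fails (just a debugging sketch of my own `_call` logic, not LangChain code):

```python
import requests

def debug_llama_call(llm_url: str, prompt: str, token: str) -> str:
    """Same request as LlamaLLM._call, but prints the raw response before raising."""
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": 100}, "token": token}
    response = requests.post(
        llm_url,
        json=payload,
        headers={"Content-Type": "application/json"},
        verify=False,
    )
    # Show the raw body so chain failures are easier to diagnose.
    print("status:", response.status_code, "body:", response.text)
    response.raise_for_status()
    return response.json()["generated_text"]
```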
### Suggestion:
_No response_ | Custom LLM from API for QA chain | https://api.github.com/repos/langchain-ai/langchain/issues/14302/comments | 24 | 2023-12-05T17:36:18Z | 2023-12-26T11:37:08Z | https://github.com/langchain-ai/langchain/issues/14302 | 2,026,785,573 | 14,302 |
[
"hwchase17",
"langchain"
] | ### System Info
azure-search-documents==11.4.0b9
langchain 0.0.342
langchain-core 0.0.7
### Who can help?
@hwc
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My local.settings.json has the custom field names for Azure Cognitive Search:
```
{
"IsEncrypted": false,
"Values": {
"FUNCTIONS_WORKER_RUNTIME": "python",
"AZURESEARCH_FIELDS_ID" :"chunk_id",
"AZURESEARCH_FIELDS_CONTENT" :"chunk",
"AZURESEARCH_FIELDS_CONTENT_VECTOR " :"vector",
"AZURESEARCH_FIELDS_TAG" :"metadata",
"FIELDS_ID" : "chunk_id",
"FIELDS_CONTENT" : "chunk",
"FIELDS_CONTENT_VECTOR" : "vector",
"FIELDS_METADATA" : "metadata",
"AzureWebJobsStorage": "UseDevelopmentStorage=true",
"AzureWebJobsFeatureFlags": "EnableWorkerIndexing"
}
}
```
I also tried to create a Fields array and pass it into the AzureSearch constructor like this:
```
os.environ["AZURE_OPENAI_API_KEY"] = "xx"
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://xx.openai.azure.com/"
embeddings = AzureOpenAIEmbeddings(
azure_deployment="text-embedding-ada-002",
openai_api_version="2023-05-15",
)
fields = [
SimpleField(
name="chunk_id",
type=SearchFieldDataType.String,
key=True,
filterable=True,
),
SearchableField(
name="chunk",
type=SearchFieldDataType.String,
searchable=True,
),
SearchField(
name="vector",
type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True,
vector_search_dimensions=1536,
vector_search_configuration="default",
)
]
FIELDS_ID = get_from_env(
key="AZURESEARCH_FIELDS_ID", env_key="AZURESEARCH_FIELDS_ID", default="id"
)
FIELDS_CONTENT = get_from_env(
key="AZURESEARCH_FIELDS_CONTENT",
env_key="AZURESEARCH_FIELDS_CONTENT",
default="content",
)
FIELDS_CONTENT_VECTOR = get_from_env(
key="AZURESEARCH_FIELDS_CONTENT_VECTOR",
env_key="AZURESEARCH_FIELDS_CONTENT_VECTOR",
default="content_vector",
)
FIELDS_METADATA = get_from_env(
key="AZURESEARCH_FIELDS_TAG", env_key="AZURESEARCH_FIELDS_TAG", default="metadata"
)
vector_store_address: str = "https://xx.search.windows.net"
vector_store_password: str = "xx"
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name="vector-1701341754619",
fiekds=fields,
embedding_function=embeddings.embed_query
)
llm = AzureChatOpenAI(
azure_deployment="chat",
openai_api_version="2023-05-15",
)
chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=Element61Retriever(vectorstore=vector_store),
return_source_documents=True)
result = chain({"query": 'Whats out of scope?'})
return result
```
However I am always getting:
```
Executed 'Functions.TestCustomRetriever' (Failed, Id=2f243ed8-24bd-414b-af51-6cf1419633a5, Duration=6900ms)
[2023-12-05T15:08:53.252Z] System.Private.CoreLib: Exception while executing function: Functions.TestCustomRetriever. System.Private.CoreLib: Result: Failure
Exception: HttpResponseError: (InvalidRequestParameter) Unknown field 'content_vector' in vector field list.
Code: InvalidRequestParameter
Message: Unknown field 'content_vector' in vector field list.
Exception Details: (UnknownField) Unknown field 'content_vector' in vector field list.
Code: UnknownField
```
Please note this is being executed in an Azure Function locally
### Expected behavior
The custom field names should be taken into account | Using AzureSearch with custom vector field names | https://api.github.com/repos/langchain-ai/langchain/issues/14298/comments | 9 | 2023-12-05T15:09:36Z | 2024-08-06T20:18:36Z | https://github.com/langchain-ai/langchain/issues/14298 | 2,026,436,998 | 14,298 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.344
langchain-core==0.0.8
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [x] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The code incorrectly calls __add with kwargs=kwargs
```python
return self.__add(
texts,
embeddings,
metadatas=metadatas,
ids=ids,
bulk_size=bulk_size,
kwargs=kwargs,
)
```
As a result, the `__add` method receives a kwargs dict that contains a 'kwargs' key, so the provided parameters (e.g. `engine="faiss"`) are not picked up...
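A tiny standalone sketch of the effect (plain Python, not the OpenSearch code itself):

```python
def inner(**kwargs):
    # With the buggy call, kwargs == {"kwargs": {"engine": "faiss"}},
    # so "engine" never arrives as a top-level keyword argument.
    return kwargs.get("engine", "default")

caller_kwargs = {"engine": "faiss"}
print(inner(kwargs=caller_kwargs))  # -> "default": the parameter is silently lost
print(inner(**caller_kwargs))       # -> "faiss": what add_embeddings correctly does
```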
### Expected behavior
The code should not mistakenly wrap the kwargs; `add_embeddings` already does it correctly:
```python
return self.__add(
list(texts),
list(embeddings),
metadatas=metadatas,
ids=ids,
bulk_size=bulk_size,
**kwargs,
)
``` | OpenSearchVectorSearch add_texts does wraps kwargs | https://api.github.com/repos/langchain-ai/langchain/issues/14295/comments | 1 | 2023-12-05T14:48:10Z | 2023-12-08T06:58:43Z | https://github.com/langchain-ai/langchain/issues/14295 | 2,026,387,711 | 14,295 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Hey there,
the `_format_chat_history` function [in the RAG cookbook entry](https://python.langchain.com/docs/expression_language/cookbook/retrieval) contains some older syntax. As far as I understand, the cookbook is supposed to provide examples for the newest Langchain version, though.
```
for dialogue_turn in chat_history:
human = "Human: " + dialogue_turn[0]
ai = "Assistant: " + dialogue_turn[1]
```
should be:
```
for dialogue_turn in chat_history:
human = "Human: " + dialogue_turn[0].content
ai = "Assistant: " + dialogue_turn[1].content
```
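For context, a minimal sketch of why `.content` is needed (assuming each `dialogue_turn` is a `(HumanMessage, AIMessage)` pair, which is what current chat histories hold):

```python
from langchain.schema import AIMessage, HumanMessage

dialogue_turn = (HumanMessage(content="Hi there"), AIMessage(content="Hello!"))

# dialogue_turn[0] is a message object, not a plain string,
# so string concatenation only works on its .content attribute.
human = "Human: " + dialogue_turn[0].content
ai = "Assistant: " + dialogue_turn[1].content
print(human, ai)
```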
Thank you. :)
### Idea or request for content:
_No response_ | DOC: Cookbook entry for RAG contains older syntax | https://api.github.com/repos/langchain-ai/langchain/issues/14292/comments | 1 | 2023-12-05T12:40:54Z | 2024-03-16T16:11:26Z | https://github.com/langchain-ai/langchain/issues/14292 | 2,026,106,178 | 14,292 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
In the Announcements category (https://github.com/langchain-ai/langchain/discussions/categories/announcements), in [LangChain Core #13823](https://github.com/langchain-ai/langchain/discussions/13823), hwchase17 says:
> TL;DR: we are splitting our core functionality to langchain-core to make LangChain more stable and reliable. This should be invisible to the eye and will happen in the background for the next two weeks, and we’d recommend not using langchain-core until then, but we’re flagging for transparency.
In my installed version, however, `RunnablePassthrough` has to be imported from `langchain.schema.runnable` rather than `langchain_core.runnables`, and the same goes for the other imports below.
langchain version: 0.0.336
### Idea or request for content:
**change**
```
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.runnables import ConfigurableField
```
**To**
```
from langchain.schema.runnable import RunnablePassthrough
from langchain.schema import StrOutputParser
from langchain.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnablePassthrough
from langchain.schema.runnable.utils import ConfigurableField
```
| DOC: Why use LCEL ModuleNotFoundError: No module named 'langchain_core' | https://api.github.com/repos/langchain-ai/langchain/issues/14287/comments | 7 | 2023-12-05T10:38:45Z | 2024-03-30T16:05:46Z | https://github.com/langchain-ai/langchain/issues/14287 | 2,025,877,332 | 14,287 |
[
"hwchase17",
"langchain"
] | ### System Info
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[27], line 5
1 #### All together!
2 # Put it all together now
3 full_chain = {"topic": chain, "question": lambda x: x["question"]} | branch
----> 5 full_chain.invoke({"question":"what is the best city?"})
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain_core/runnables/base.py:1427, in RunnableSequence.invoke(self, input, config)
1425 try:
1426 for i, step in enumerate(self.steps):
-> 1427 input = step.invoke(
1428 input,
1429 # mark each step as a child run
1430 patch_config(
1431 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
1432 ),
1433 )
1434 # finish the root run
1435 except BaseException as e:
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain_core/runnables/branch.py:186, in RunnableBranch.invoke(self, input, config, **kwargs)
177 expression_value = condition.invoke(
178 input,
179 config=patch_config(
(...)
182 ),
183 )
185 if expression_value:
--> 186 output = runnable.invoke(
187 input,
188 config=patch_config(
189 config,
190 callbacks=run_manager.get_child(tag=f"branch:{idx + 1}"),
191 ),
192 **kwargs,
193 )
194 break
195 else:
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain/chains/base.py:89, in Chain.invoke(self, input, config, **kwargs)
82 def invoke(
83 self,
84 input: Dict[str, Any],
85 config: Optional[RunnableConfig] = None,
86 **kwargs: Any,
87 ) -> Dict[str, Any]:
88 config = config or {}
---> 89 return self(
90 input,
91 callbacks=config.get("callbacks"),
92 tags=config.get("tags"),
93 metadata=config.get("metadata"),
94 run_name=config.get("run_name"),
95 **kwargs,
96 )
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain/chains/base.py:312, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
310 except BaseException as e:
311 run_manager.on_chain_error(e)
--> 312 raise e
313 run_manager.on_chain_end(outputs)
314 final_outputs: Dict[str, Any] = self.prep_outputs(
315 inputs, outputs, return_only_outputs
316 )
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain/chains/base.py:306, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
299 run_manager = callback_manager.on_chain_start(
300 dumpd(self),
301 inputs,
302 name=run_name,
303 )
304 try:
305 outputs = (
--> 306 self._call(inputs, run_manager=run_manager)
307 if new_arg_supported
308 else self._call(inputs)
309 )
310 except BaseException as e:
311 run_manager.on_chain_error(e)
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain/agents/agent.py:1251, in AgentExecutor._call(self, inputs, run_manager)
1249 # We now enter the agent loop (until it returns something).
1250 while self._should_continue(iterations, time_elapsed):
-> 1251 next_step_output = self._take_next_step(
1252 name_to_tool_map,
1253 color_mapping,
1254 inputs,
1255 intermediate_steps,
1256 run_manager=run_manager,
1257 )
1258 if isinstance(next_step_output, AgentFinish):
1259 return self._return(
1260 next_step_output, intermediate_steps, run_manager=run_manager
1261 )
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain/agents/agent.py:1038, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1035 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
1037 # Call the LLM to see what to do.
-> 1038 output = self.agent.plan(
1039 intermediate_steps,
1040 callbacks=run_manager.get_child() if run_manager else None,
1041 **inputs,
1042 )
1043 except OutputParserException as e:
1044 if isinstance(self.handle_parsing_errors, bool):
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain/agents/agent.py:391, in RunnableAgent.plan(self, intermediate_steps, callbacks, **kwargs)
379 """Given input, decided what to do.
380
381 Args:
(...)
388 Action specifying what tool to use.
389 """
390 inputs = {**kwargs, **{"intermediate_steps": intermediate_steps}}
--> 391 output = self.runnable.invoke(inputs, config={"callbacks": callbacks})
392 return output
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain_core/runnables/base.py:1427, in RunnableSequence.invoke(self, input, config)
1425 try:
1426 for i, step in enumerate(self.steps):
-> 1427 input = step.invoke(
1428 input,
1429 # mark each step as a child run
1430 patch_config(
1431 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
1432 ),
1433 )
1434 # finish the root run
1435 except BaseException as e:
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain_core/output_parsers/base.py:170, in BaseOutputParser.invoke(self, input, config)
166 def invoke(
167 self, input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None
168 ) -> T:
169 if isinstance(input, BaseMessage):
--> 170 return self._call_with_config(
171 lambda inner_input: self.parse_result(
172 [ChatGeneration(message=inner_input)]
173 ),
174 input,
175 config,
176 run_type="parser",
177 )
178 else:
179 return self._call_with_config(
180 lambda inner_input: self.parse_result([Generation(text=inner_input)]),
181 input,
182 config,
183 run_type="parser",
184 )
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain_core/runnables/base.py:848, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
841 run_manager = callback_manager.on_chain_start(
842 dumpd(self),
843 input,
844 run_type=run_type,
845 name=config.get("run_name"),
846 )
847 try:
--> 848 output = call_func_with_variable_args(
849 func, input, config, run_manager, **kwargs
850 )
851 except BaseException as e:
852 run_manager.on_chain_error(e)
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain_core/runnables/config.py:308, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
306 if run_manager is not None and accepts_run_manager(func):
307 kwargs["run_manager"] = run_manager
--> 308 return func(input, **kwargs)
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain_core/output_parsers/base.py:171, in BaseOutputParser.invoke.<locals>.<lambda>(inner_input)
166 def invoke(
167 self, input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None
168 ) -> T:
169 if isinstance(input, BaseMessage):
170 return self._call_with_config(
--> 171 lambda inner_input: self.parse_result(
172 [ChatGeneration(message=inner_input)]
173 ),
174 input,
175 config,
176 run_type="parser",
177 )
178 else:
179 return self._call_with_config(
180 lambda inner_input: self.parse_result([Generation(text=inner_input)]),
181 input,
182 config,
183 run_type="parser",
184 )
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain_core/output_parsers/base.py:222, in BaseOutputParser.parse_result(self, result, partial)
209 def parse_result(self, result: List[Generation], *, partial: bool = False) -> T:
210 """Parse a list of candidate model Generations into a specific format.
211
212 The return value is parsed from only the first Generation in the result, which
(...)
220 Structured output.
221 """
--> 222 return self.parse(result[0].text)
File ~/miniconda3/envs/langchain_multiagent_env/lib/python3.8/site-packages/langchain/agents/output_parsers/xml.py:45, in XMLAgentOutputParser.parse(self, text)
43 return AgentFinish(return_values={"output": answer}, log=text)
44 else:
---> 45 raise ValueError(f"Could not parse output: {text}")
ValueError:
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.prompts import PromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.chat_models import AzureChatOpenAI
import openai
openai.api_key = "........................................."
llm = AzureChatOpenAI(
azure_endpoint=".......................................",
openai_api_version="....................................",
deployment_name=..............................................'',
openai_api_key=openai.api_key,
openai_api_type="azure",
temperature=0
)
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain.callbacks.stdout import StdOutCallbackHandler
memory = ConversationBufferMemory(return_messages=True)
chain = LLMChain(prompt = PromptTemplate.from_template(template="""
Given the user question below, classify it as either being about 'city', `weather` or `other`.
Do not respond with more than one word.
<question>
{question}
</question>
Classification:""",
output_parser = StrOutputParser(),
),
llm = llm,
memory = memory,
callbacks=[StdOutCallbackHandler()]
)
from langchain.agents import XMLAgent, tool, AgentExecutor
from langchain.chat_models import ChatAnthropic
model = llm
@tool
def search(query: str) -> str:
"""Search things about current wheather."""
return "32 degrees" # 32 degrees Dzień Dobry przyjacielu!
tool_list = [search]
@tool
def find_city(query: str) -> str:
"""Search the answer"""
return "Gdynia" # 32 degrees
city_tool_list = [find_city]
prompt = XMLAgent.get_default_prompt()
def convert_intermediate_steps(intermediate_steps):
log = ""
for action, observation in intermediate_steps:
print('\n')
print(action)
print(observation)
log += (
f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"
f"</tool_input><observation>{observation}</observation>"
)
return log
def convert_tools(tools):
return "\n".join([f"{tool.name}: {tool.description}" for tool in tools])
agent = (
{
"question": lambda x: x["question"],
"intermediate_steps": lambda x: convert_intermediate_steps(x["intermediate_steps"])
}
| prompt.partial(tools=convert_tools(tool_list))
| model.bind(stop=["</tool_input>", "</final_answer>"])
| XMLAgent.get_default_output_parser()
)
agent_executor = AgentExecutor(agent=agent, tools=tool_list, verbose=True)
city_prompt = XMLAgent.get_default_prompt()
new_template = """Use the tools to find the answer.
You have access to the following tools:
{tools}
In order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags. You will then get back a response in the form <observation></observation>
For example, if you have a tool called 'find_city' that could run and search best city:
<tool>find_city</tool><tool_input>what city?</tool_input>
<observation>Gdynia</observation>
When you are done, respond with a final answer between <final_answer></final_answer>. For example:
<final_answer>The city is Gdynia</final_answer>
Begin!
Question: {question}"""
city_prompt.messages[0].prompt.template = new_template
city_agent = (
{
"question": lambda x: x["question"],
"intermediate_steps": lambda x: convert_intermediate_steps(x["intermediate_steps"])
}
| city_prompt.partial(tools=convert_tools(city_tool_list))
| model.bind(stop=["</tool_input>", "</final_answer>"]) # .bind(memory=memory)
| XMLAgent.get_default_output_parser()
)
city_agent_executor = AgentExecutor(agent=city_agent, tools=city_tool_list, verbose=True)
general_chain = PromptTemplate.from_template("""Respond to the following question:
Question: {question}
Answer:""") | llm
from langchain.schema.runnable import RunnableBranch
branch = RunnableBranch(
(lambda x: "weather" in x["topic"]['text'].lower(), agent_executor),
(lambda x: "city" in x["topic"]['text'].lower(), city_agent_executor),
general_chain
)
full_chain = {"topic": chain, "question": lambda x: x["question"]} | branch
full_chain.invoke({"question":"what is the best city?"})
```
### Expected behavior
In `xml.py` (langchain==0.0.341), line 45 is currently:
`raise ValueError`
It would be better to have:
`raise ValueError(f"Could not parse output: {text}")` | xml.py Value Error --> insufficient error information | https://api.github.com/repos/langchain-ai/langchain/issues/14286/comments | 1 | 2023-12-05T10:12:15Z | 2024-03-16T16:11:21Z | https://github.com/langchain-ai/langchain/issues/14286 | 2,025,828,110 | 14,286
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
So I have a function that saves data into the vector database. What I want is to extract that vector ID so I can delete the entry later if I want to.
Is there a way to accomplish this?
### Suggestion:
_No response_ | extract vector id and delete one | https://api.github.com/repos/langchain-ai/langchain/issues/14284/comments | 2 | 2023-12-05T09:18:12Z | 2024-03-16T16:11:16Z | https://github.com/langchain-ai/langchain/issues/14284 | 2,025,725,528 | 14,284 |
[
"hwchase17",
"langchain"
] | ### System Info
MacOS 14.1.1
```
pip3 show langchain
Name: langchain
Version: 0.0.345
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages
Requires: aiohttp, anyio, dataclasses-json, jsonpatch, langchain-core, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by: langserve, permchain
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chat_models import AzureChatOpenAI, ChatOpenAI
from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser
from langchain.prompts import ChatPromptTemplate
from langchain.schema.runnable import Runnable
# OpenAISettings / AzureSettings below are my own config objects holding keys and endpoints.
testFunctions = [
{
"name": "set_animal_properties",
"description": "Set different properties for an animal.",
"parameters": {
"type": "object",
"properties": {
"animals": {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Name of the animal",
},
"appearance": {
"type": "string",
"description": "Summary of the appearance of the animal",
},
"joke": {
"type": "string",
"description": "A joke about the animal",
}
}
}
}
}
}
}
]
def get_chain_animals() -> Runnable:
"""Return a chain."""
prompt = ChatPromptTemplate.from_template("Pick 5 random animals for me. For each of them, give me a 300 word summary of their appearance, and tell me a joke about them. Please call a function with this information.")
# Uncomment to use ChatOpenAI model
#model = ChatOpenAI(model_name="gpt-3.5-turbo-1106",
# openai_api_key=OpenAISettings.OPENAI_API_KEY,
# ).bind(functions=testFunctions, function_call={"name": "set_animal_properties"})
model = AzureChatOpenAI(temperature=.7,
openai_api_base=AzureSettings.BASE_URL,
openai_api_version=AzureSettings.API_VERSION,
deployment_name=AzureSettings.DEPLOYMENT_NAME,
openai_api_key=AzureSettings.API_KEY,
openai_api_type=AzureSettings.API_TYPE,
).bind(functions=testFunctions, function_call={"name": "set_animal_properties"})
parser = JsonOutputFunctionsParser()
return prompt | model | parser
if __name__ == "__main__":
chain = get_chain_animals()
for chunk in chain.stream({}):
print(chunk)
```
### Expected behavior
LCEL Streaming doesn't seem to work properly when using an AzureChatOpenAI model, and the JsonOutputFunctionsParser parser.
I'm unable to stream an LCEL chain correctly when using an Azure-hosted OpenAI model (using the AzureChatOpenAI class).
I'm using a simple LCEL chain:
`chain = promptTemplate | model | parser`
The parser is of type `langchain.output_parsers.openai_functions.JsonOutputFunctionsParser`
When using a AzureChatOpenAI model, the text is not streamed as the tokens are generated. Instead, it appears that I receive all of the text at once after all the tokens are generated.
However, if I **replace** the AzureChatOpenAI model with a ChatOpenAI model (using the same prompt, function bindings, etc.), the stream **DOES** work as intended, returning text in real-time as the tokens are generated.
So I believe I've isolated the problem down to that particular AzureChatOpenAI model.
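For comparison, the variant that does stream correctly for me is essentially the commented-out block in the reproduction above (a sketch; it reuses `testFunctions`, `OpenAISettings` and the imports from that code):

```python
prompt = ChatPromptTemplate.from_template(
    "Pick 5 random animals for me. For each of them, give me a 300 word summary of their "
    "appearance, and tell me a joke about them. Please call a function with this information."
)
model = ChatOpenAI(
    model_name="gpt-3.5-turbo-1106",
    openai_api_key=OpenAISettings.OPENAI_API_KEY,
).bind(functions=testFunctions, function_call={"name": "set_animal_properties"})

chain = prompt | model | JsonOutputFunctionsParser()
for chunk in chain.stream({}):
    print(chunk)  # partial JSON arrives incrementally with ChatOpenAI
```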
Any insight or workarounds would be appreciated. | LCEL Streaming doesn't seem to work properly when using an AzureChatOpenAI model, and the JsonOutputFunctionsParser parser | https://api.github.com/repos/langchain-ai/langchain/issues/14280/comments | 2 | 2023-12-05T08:30:23Z | 2024-04-30T16:30:14Z | https://github.com/langchain-ai/langchain/issues/14280 | 2,025,593,782 | 14,280 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
It's not clear from the documentation whether, when calling Ollama, LangChain will take care of formatting the template correctly, or whether I have to supply the template myself.
For example, in https://ollama.ai/library/mistral:instruct
we have:
```
parameter
stop "[INST]"
stop "[/INST]"
stop "<<SYS>>"
stop "<</SYS>>"
template
[INST] {{ .System }} {{ .Prompt }} [/INST]
```
Do I have to take care of formatting my instructions using these parameters and this template, or will LangChain take care of it?
### Idea or request for content:
If this is not implemented, would be very useful to have definitely | Ollama: parameters and instruction templates | https://api.github.com/repos/langchain-ai/langchain/issues/14279/comments | 4 | 2023-12-05T07:49:56Z | 2024-03-29T21:36:33Z | https://github.com/langchain-ai/langchain/issues/14279 | 2,025,500,538 | 14,279 |
[
"hwchase17",
"langchain"
] | ### Feature request
Currently, when working with LangChain to generate JSON that will be rendered in a user-facing application, we haven't found a dedicated method to check whether the JSON adheres to the desired structure before executing subsequent operators.
This feature request suggests implementing a method or utility for asserting the structure of a JSON file/format, ideally using a defined schema (e.g. ResponseSchema). This would allow users to proactively identify any issues with the format.
### Motivation
In the current workflow, users have to rely on executing downstream operators (e.g., RetryWithErrorOutputParser) to discover issues with the JSON structure. This approach can be inefficient, especially when the error detection occurs after execution. Having a method to check the JSON structure beforehand would enable users to catch format issues early in the process and would avoid unnecessary further interactions with the LLM.
### Your contribution
I'm willing to contribute to the implementation of this feature. I'll carefully read the CONTRIBUTING.md and follow the guidelines to submit a Pull Request once the scope is clarified.
Some simple example use case code:
```python
import os
import json
from dotenv import load_dotenv, find_dotenv
import openai
from langchain.output_parsers import StructuredOutputParser
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from jsonschema import validate, exceptions
_ = load_dotenv(find_dotenv())
openai.api_key = os.environ["OPENAI_API_KEY"]
expected_json_file_path = "app/files/expected_schema.json"
# LCEL - INSTANCES & STRUCTURE
prompt = ChatPromptTemplate.from_template(template=template_string)
chat_llm = ChatOpenAI(temperature=0.0)
# Define output parser based on response schemas
output_parser = StructuredOutputParser.from_response_schemas(
asset_instance.response_schemas
)
# Define format instruction using get format instruction method
format_instructions = output_parser.get_format_instructions()
# LCEL DEFINE CHAIN
simple_chain_validator = prompt | chat_llm | output_parser
# LCEL INVOKE CHAIN
chain_to_parse = simple_chain_validator.invoke(
{
"content_to_format": asset_instance.raw_information,
"format_instructions": format_instructions,
}
)
# Define the expected JSON schema
with open(expected_json_file_path, "r") as schema_file:
expected_schema = json.load(schema_file)
# Validate against the schema
try:
validate(instance=chain_to_parse, schema=expected_schema)
print("JSON is valid.")
except exceptions.ValidationError as e:
print(f"Validation Error: {e}")
``` | OutputParser cheap json format Validation | https://api.github.com/repos/langchain-ai/langchain/issues/14276/comments | 1 | 2023-12-05T07:05:27Z | 2024-03-16T16:11:11Z | https://github.com/langchain-ai/langchain/issues/14276 | 2,025,423,592 | 14,276 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.330
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
class SummarizationTool(BaseTool):
name = "summarization_tool"
description = '''This tool must be used at the very end.
It is used to summarize the results from each of other tools.
It needs the entire text of the results from each of the previous tool.'''
llm: BaseLanguageModel
return_direct = True
def _run(self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:
print ('\n in summarization tool. query:', query, 'query type:', type(query), query[0], query[1])
```
I can see that this tool is called at the very end, as expected.
The output from the query is:
in summarization tool. query: [text from openai_search tool, text from wikipedia_search tool] query type: <class 'str'> [ t
Instead of " [text from openai_search tool, text from wikipedia_search tool]", I want the actual text.
How do I get it to pass the actual text?
### Expected behavior
Instead of " [text from openai_search tool, text from wikipedia_search tool]", I want the actual text.
How do I get it to pass the actual text? | Agent not calling tool with the right data. | https://api.github.com/repos/langchain-ai/langchain/issues/14274/comments | 1 | 2023-12-05T05:44:22Z | 2024-03-16T16:11:06Z | https://github.com/langchain-ai/langchain/issues/14274 | 2,025,331,957 | 14,274 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
The UnstructuredExcelLoader code comment describes 'elements' mode twice, but says nothing about 'single' mode.
The original code comment is the following:
Unstructured loaders, UnstructuredExcelLoader can be used in both "single" and "elements" mode. **_If you use the loader in "elements" mode_**, each sheet in the Excel file will be a an Unstructured Table element. **_If you use the loader in "elements" mode_**, an HTML representation of the table will be available in the "text_as_html" key in the document metadata.
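For reference, a minimal usage sketch of the two modes that the comment should be contrasting (the file path is a placeholder):

```python
from langchain.document_loaders import UnstructuredExcelLoader

# "single" mode: the workbook is returned as a single Document.
single_docs = UnstructuredExcelLoader("example.xlsx", mode="single").load()

# "elements" mode: each sheet becomes an Unstructured Table element,
# with an HTML rendering available under metadata["text_as_html"].
element_docs = UnstructuredExcelLoader("example.xlsx", mode="elements").load()
print(element_docs[0].metadata.get("text_as_html"))
```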
### Idea or request for content:
_No response_ | DOC: UnstructuredExcelLoader code comment‘s error | https://api.github.com/repos/langchain-ai/langchain/issues/14271/comments | 2 | 2023-12-05T05:20:47Z | 2024-03-16T16:11:01Z | https://github.com/langchain-ai/langchain/issues/14271 | 2,025,310,554 | 14,271 |
[
"hwchase17",
"langchain"
] | ### System Info
Every time I use Langchain, something is wrong with it. This is just the latest iteration. If you guys want people to use your library you seriously need to clean things up.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
To reproduce:
`from llama_index import SimpleDirectoryReader, LLMPredictor, ServiceContext, GPTVectorStoreIndex`
### Expected behavior
Run without error | ImportError: cannot import name 'BaseCache' from 'langchain' | https://api.github.com/repos/langchain-ai/langchain/issues/14268/comments | 6 | 2023-12-05T04:51:11Z | 2024-07-02T16:08:12Z | https://github.com/langchain-ai/langchain/issues/14268 | 2,025,281,757 | 14,268 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Hi, I'm not quite sure how to translate the `ParentDocumentRetriever` examples to ingest documents to OpenSearch in one phase, and then reconnect to it by instantiating a retriever at a later point.
The examples use an `InMemoryStore()` for the parent documents. Is the idea then that, if I wanted to use OpenSearch, it would be necessary to create two different OpenSearch clusters, one for the parent docs and one for the child docs? Or is there a simpler way to do this?
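For reference, this is the shape of the documented example I am trying to adapt (a sketch; the OpenSearch vector store and the splitter settings are placeholders for whatever the real setup uses):

```python
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Child chunks go to the vector store; full parent documents go to the docstore.
retriever = ParentDocumentRetriever(
    vectorstore=my_opensearch_vectorstore,  # placeholder: an OpenSearch vector store instance
    docstore=InMemoryStore(),               # this is the part I would like to persist
    child_splitter=RecursiveCharacterTextSplitter(chunk_size=400),
)
```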
| DOC: ParentDocumentRetriever without InMemoryStore | https://api.github.com/repos/langchain-ai/langchain/issues/14267/comments | 16 | 2023-12-05T04:26:07Z | 2024-07-24T11:29:39Z | https://github.com/langchain-ai/langchain/issues/14267 | 2,025,258,589 | 14,267 |
[
"hwchase17",
"langchain"
] | ### Feature request
Does LangChain support using local LLM models to query the Neo4j database, i.e. without going through OpenAI?
### Motivation
It is inconvenient to use local LLM for cypher generation
### Your contribution
No solution available at this time | Does langchain support using local LLM models to request the Neo4j database? | https://api.github.com/repos/langchain-ai/langchain/issues/14261/comments | 1 | 2023-12-05T02:18:01Z | 2024-03-16T16:10:56Z | https://github.com/langchain-ai/langchain/issues/14261 | 2,025,142,888 | 14,261 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
_No response_
### Suggestion:
_No response_ | how to add Qwen model in initialize_agent() | https://api.github.com/repos/langchain-ai/langchain/issues/14260/comments | 1 | 2023-12-05T02:10:39Z | 2024-03-16T16:10:51Z | https://github.com/langchain-ai/langchain/issues/14260 | 2,025,137,077 | 14,260 |
[
"hwchase17",
"langchain"
] | ### System Info
When running the `YahooFinanceNewsTool()` tool,
the following error message is displayed:
```
File "C:\Users\xxx\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\tools\yahoo_finance_news.py", line 66, in <listcomp>
if query in doc.metadata["description"] or query in doc.metadata["title"]
KeyError: 'description'
```
I have tried to reproduce the error. It looks like the docs element does not contain a field that can return a "description".
```
loader = WebBaseLoader(web_paths=links)
docs = loader.load()
print(docs) # only insert for test
```
Output of `print(docs)`:
....
tieren\nAlle ablehnen\nDatenschutzeinstellungen verwalten\n\n\n\n\n\nZum Ende\n \n\n\n\n\n\n\n\n\n\n\n\n\n', metadata={'source': 'https://finance.yahoo.com/m/280830e6-928c-3b1f-97f4-bd37147499cb/not-even-tesla-bulls-love-the.html', 'title': 'Yahooist Teil der Yahoo Markenfamilie', 'language': 'No language found.'}), Document(page_content='\n\n\nYahooist Teil der Yahoo Markenfamilie\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n guce guce \n\n\n\n Yahoo ist Teil der Yahoo MarkenfamilieDie Websites und Apps, die wir betreiben und verwalten, einschließlich Yahoo und AOL, sowie unser digitaler Werbedienst Yahoo Advertising.Yahoo Markenfamilie.\n\n Bei der Nutzung unserer Websites und Apps verwenden wir CookiesMithilfe von Cookies (einschließlich ähnlicher Technologien wie der Webspeicherung) können die Betreiber von Websites und Apps Informationen auf Ihrem Gerät speichern und ablesen. Weitere Informationen finden Sie in unserer Cookie-Richtlinie.Cookies, um:\n\n\nunsere Websites und Apps für Sie bereitzustellen\nNutzer zu authentifizieren, Sicherheitsmaßnahmen anzuwenden und Spam und Missbrauch zu verhindern, und\nIhre Nutzung unserer Websites und Apps zu messen\n\n\n\n Wenn Sie auf „Alle akzeptieren“ klicken, verwenden wir und unsere Partner (einschließlich der 239, die dem IAB Transparency & Consent Framework angehören) Cookies und Ihre personenbezogenen Daten, wie IP-Adresse, genauen Standort sowie Browsing- und Suchdaten, auch für folgende Zwecke:\n\n\npersonalisierte Werbung und Inhalte auf der Grundlage von Interessenprofilen anzuzeigen\ndie Effektivität von personalisierten Anzeigen und Inhalten zu messen, sowie\nunsere Produkte und Dienstleistungen zu entwickeln und zu verbessern\n\n Klicken Sie auf „Alle ablehnen“, wenn Sie nicht möchten, dass wir und unsere Partner Cookies und personenbezogene Daten für diese zusätzlichen Zwecke verwenden.\n\n Wenn Sie Ihre Auswahl anpassen möchten, klicken Sie auf „Datenschutzeinstellungen verwalten“.\n\n Sie können Ihre Einstellungen jederzeit ändern, indem Sie auf unseren Websites und Apps auf den Link „Datenschutz- und Cookie-Einstellungen“ oder „Datenschutz-Dashboard“ klicken. Weitere Informationen darüber, wie wir Ihre personenbezogenen Daten nutzen, finden Sie in unserer Datenschutzerklärung und unserer Cookie-Richtlinie.\n\n\n\n\n\n\n\n\nAlle akzeptieren\nAlle ablehnen\nDatenschutzeinstellungen verwalten\n\n\n\n\n\nZum Ende\n
\n\n\n\n\n\n\n\n\n\n\n\n\n', metadata={'source': 'https://finance.yahoo.com/news/minister-khera-participates-unesco-global-215400355.html', 'title': 'Yahooist Teil der Yahoo Markenfamilie', 'language': 'No language found.'})]
....
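A possible mitigation (my assumption about a fix, not the current implementation) would be to read the metadata defensively in that list comprehension, for example:

```python
# Instead of doc.metadata["description"] / doc.metadata["title"], which raise KeyError
# when a loaded page (e.g. a consent page) has no such metadata:
relevant_docs = [
    doc
    for doc in docs
    if query in doc.metadata.get("description", "") or query in doc.metadata.get("title", "")
]
```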
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The example in the LangChain docs.
### Expected behavior
The example in the LangChain docs. | Error YahooFinanceNewsTool() Tools | https://api.github.com/repos/langchain-ai/langchain/issues/14248/comments | 8 | 2023-12-04T22:17:30Z | 2024-08-02T16:06:53Z | https://github.com/langchain-ai/langchain/issues/14248 | 2,024,863,086 | 14,248 |
[
"hwchase17",
"langchain"
] | ### System Info
Name: langchain
Version: 0.0.344
Python 3.9.6
--(tags/v3.9.6:db3ff76, Jun 28 2021, 15:26:21) [MSC v.1929 64 bit (AMD64)] on win32
OS - Windows 11
Programming language - Python
Database - SQL Anywhere 17
### Who can help?
@hwchase17
@agola11
**I have an SAP SQL Anywhere 17 database server** which I want to talk to **using LangChain and OpenAI in Python.**
I am successfully able to do this with MS SQL **but failing to do so with SQL Anywhere.**
**Code below**
import os
import pyodbc
import tkinter as tk
import tkinter.ttk as ttk
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain.agents import AgentExecutor
DATABASE_SERVER = 'sqldemo'
DATABASE_NAME = 'demo'
DATABASE_USERNAME = 'dba'
DATABASE_PASSWORD = 'sql'
DRIVER = '{SQL Anywhere 17}'
# For Microsoft SQL DB - working
# conn_uri = f"mssql+pyodbc://{DATABASE_SERVER}/{DATABASE_NAME}?driver=ODBC+Driver+18+for+SQL+Server&TrustServerCertificate=yes&Trusted_Connection=yes"
# For SQL Anywhere 17 - Not working
conn_uri = f"sqlanywhere+pyodbc://{DATABASE_USERNAME}:{DATABASE_PASSWORD}@{DATABASE_SERVER}/{DATABASE_NAME}?driver=SQL+Anywhere+17"
db = SQLDatabase.from_uri(conn_uri) #Error line
llm = OpenAI(api_key=os.environ['OPENAI_API_KEY'], temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(
llm=OpenAI(temperature=0),
toolkit=toolkit,
verbose=True
)
return agent_executor
================================
**Error I am getting as below**
Exception has occurred: NoSuchModuleError
**Can't load plugin: sqlalchemy.dialects:sqlanywhere.pyodbc**

I can be reached at [email protected] if needed
### Information
- [] The official example notebooks/scripts
- [X ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code can be found here
Please download and check
https://github.com/developer20sujeet/Self_GenerativeAI/blob/main/Langchain_OpenAI_SQLAnywhere_Chat/sql_anywhere_error.py
### Expected behavior
able to connect and query SQL anyhwere 17 | Lang chain not able to connect to SQL Anywhere 17 | https://api.github.com/repos/langchain-ai/langchain/issues/14247/comments | 6 | 2023-12-04T21:47:32Z | 2024-03-17T16:08:42Z | https://github.com/langchain-ai/langchain/issues/14247 | 2,024,815,058 | 14,247 |
[
"hwchase17",
"langchain"
] | ### Feature request
With `LLMChain`, it was possible to instantiate with `callbacks`, and just pass around the `LLMChain`.
With LCEL, the only way to handle `callbacks` is to pass them to every `invoke` call. This requires one to pass around both the runnable LCEL object as well as the `callbacks`.
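For reference, this is roughly what has to happen on every call today (sketch):

```python
from langchain.callbacks.stdout import StdOutCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

handler = StdOutCallbackHandler()
chain = PromptTemplate.from_template("Tell me about {product}") | ChatOpenAI()

# The callbacks have to travel alongside the runnable and be re-attached per call.
chain.invoke({"product": "colorful socks"}, config={"callbacks": [handler]})
```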
### Motivation
It's preferable to bake the `callbacks` into the LCEL object at instantiation; then they get called on each `invoke`.
### Your contribution
I can contribute something if I can get confirmation that this is desirable. The callbacks would be inserted at the base of the LCEL chain:
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.schema.callbacks.stdout import StdOutCallbackHandler
from langchain.schema.runnable import RunnableConfig
config = RunnableConfig(callbacks=[StdOutCallbackHandler()])
prompt = PromptTemplate.from_template(
"What is a good name for a company that makes {product}?"
)
runnable = config | prompt | ChatOpenAI()
runnable.invoke(input={"product": "colorful socks"})
``` | Request: ability to set callbacks with LCEL at instantiation | https://api.github.com/repos/langchain-ai/langchain/issues/14241/comments | 5 | 2023-12-04T18:33:49Z | 2024-07-17T16:04:33Z | https://github.com/langchain-ai/langchain/issues/14241 | 2,024,477,858 | 14,241 |
[
"hwchase17",
"langchain"
] | ### System Info
I have just updated my LangChain packages. Until a few weeks ago, LangChain was working fine for me with my Azure OpenAI resource and deployment of the GPT-4-32K model. As I've gone to create more complex applications with it, I got stuck at one section where I kept getting the error: "InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again."
I have given up on this error personally, but I currently have multiple Microsoft employees trying to help me figure it out. However, all of sudden, the basic implementations of LangChain now seem to be creating the same issue. Even though the native/direct API call to Azure OpenAI services is functioning correctly with the same credentials.
I am trying to use the following code to get the basic implementation working again, following directly what is written at https://python.langchain.com/docs/integrations/chat/azure_chat_openai :
model = AzureChatOpenAI(
azure_deployment="Brian",
openai_api_version="2023-05-15"
)
message = HumanMessage(
content="Translate this sentence from English to French. I love programming."
)
print(model([message]))
This keeps producing the same error (as above, "the API deployment for this resource does not exist...") and I am completely stumped. How has LangChain gone from working to this error without me changing my credentials? At the same time, these credentials still work for the native Azure OpenAI API call. Any help would be massively appreciated.
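For completeness, one variant I can also try (a sketch, not something I have confirmed changes anything) is the older `deployment_name` parameter spelling instead of `azure_deployment`:

```python
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

model = AzureChatOpenAI(
    deployment_name="Brian",  # older-style parameter name for the Azure deployment
    openai_api_version="2023-05-15",
)
print(model([HumanMessage(content="Translate this sentence from English to French. I love programming.")]))
```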
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have put the code above that you can use, but it seems more of an issue about how LangChain uses my Azure OpenAI account.
### Expected behavior
WARNING! azure_deployment is not default parameter.
azure_deployment was transferred to model_kwargs.
Please confirm that azure_deployment is what you intended.
---------------------------------------------------------------------------
InvalidRequestError Traceback (most recent call last)
Cell In[16], line 9
1 model = AzureChatOpenAI(
2 azure_deployment="Brian",
3 openai_api_version="2023-05-15"
4 )
6 message = HumanMessage(
7 content="Translate this sentence from English to French. I love programming."
8 )
----> 9 print(model([message]))
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chat_models/base.py:606, in BaseChatModel.__call__(self, messages, stop, callbacks, **kwargs)
599 def __call__(
600 self,
601 messages: List[BaseMessage],
(...)
604 **kwargs: Any,
605 ) -> BaseMessage:
--> 606 generation = self.generate(
607 [messages], stop=stop, callbacks=callbacks, **kwargs
608 ).generations[0][0]
609 if isinstance(generation, ChatGeneration):
610 return generation.message
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chat_models/base.py:355, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
353 if run_managers:
354 run_managers[i].on_llm_error(e)
--> 355 raise e
356 flattened_outputs = [
357 LLMResult(generations=[res.generations], llm_output=res.llm_output)
358 for res in results
359 ]
360 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chat_models/base.py:345, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
342 for i, m in enumerate(messages):
343 try:
344 results.append(
--> 345 self._generate_with_cache(
346 m,
347 stop=stop,
348 run_manager=run_managers[i] if run_managers else None,
349 **kwargs,
350 )
351 )
352 except BaseException as e:
353 if run_managers:
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chat_models/base.py:498, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
494 raise ValueError(
495 "Asked to cache, but no cache found at `langchain.cache`."
496 )
497 if new_arg_supported:
--> 498 return self._generate(
499 messages, stop=stop, run_manager=run_manager, **kwargs
500 )
501 else:
502 return self._generate(messages, stop=stop, **kwargs)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chat_models/openai.py:360, in ChatOpenAI._generate(self, messages, stop, run_manager, stream, **kwargs)
358 message_dicts, params = self._create_message_dicts(messages, stop)
359 params = {**params, **kwargs}
--> 360 response = self.completion_with_retry(
361 messages=message_dicts, run_manager=run_manager, **params
362 )
363 return self._create_chat_result(response)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chat_models/openai.py:299, in ChatOpenAI.completion_with_retry(self, run_manager, **kwargs)
295 @retry_decorator
296 def _completion_with_retry(**kwargs: Any) -> Any:
297 return self.client.create(**kwargs)
--> 299 return _completion_with_retry(**kwargs)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/tenacity/__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
287 @functools.wraps(f)
288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
--> 289 return self(f, *args, **kw)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/tenacity/__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs)
377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
378 while True:
--> 379 do = self.iter(retry_state=retry_state)
380 if isinstance(do, DoAttempt):
381 try:
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/tenacity/__init__.py:314, in BaseRetrying.iter(self, retry_state)
312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
313 if not (is_explicit_retry or self.retry(retry_state)):
--> 314 return fut.result()
316 if self.after is not None:
317 self.after(retry_state)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/concurrent/futures/_base.py:451, in Future.result(self, timeout)
449 raise CancelledError()
450 elif self._state == FINISHED:
--> 451 return self.__get_result()
453 self._condition.wait(timeout)
455 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/concurrent/futures/_base.py:403, in Future.__get_result(self)
401 if self._exception:
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
406 self = None
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/tenacity/__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
380 if isinstance(do, DoAttempt):
381 try:
--> 382 result = fn(*args, **kwargs)
383 except BaseException: # noqa: B902
384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/langchain/chat_models/openai.py:297, in ChatOpenAI.completion_with_retry.<locals>._completion_with_retry(**kwargs)
295 @retry_decorator
296 def _completion_with_retry(**kwargs: Any) -> Any:
--> 297 return self.client.create(**kwargs)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/api_resources/chat_completion.py:25, in ChatCompletion.create(cls, *args, **kwargs)
23 while True:
24 try:
---> 25 return super().create(*args, **kwargs)
26 except TryAgain as e:
27 if timeout is not None and time.time() > start + timeout:
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py:155, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)
129 @classmethod
130 def create(
131 cls,
(...)
138 **params,
139 ):
140 (
141 deployment_id,
142 engine,
(...)
152 api_key, api_base, api_type, api_version, organization, **params
153 )
--> 155 response, _, api_key = requestor.request(
156 "post",
157 url,
158 params=params,
159 headers=headers,
160 stream=stream,
161 request_id=request_id,
162 request_timeout=request_timeout,
163 )
165 if stream:
166 # must be an iterator
167 assert not isinstance(response, OpenAIResponse)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/api_requestor.py:299, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
278 def request(
279 self,
280 method,
(...)
287 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
288 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
289 result = self.request_raw(
290 method.lower(),
291 url,
(...)
297 request_timeout=request_timeout,
298 )
--> 299 resp, got_stream = self._interpret_response(result, stream)
300 return resp, got_stream, self.api_key
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/api_requestor.py:710, in APIRequestor._interpret_response(self, result, stream)
702 return (
703 self._interpret_response_line(
704 line, result.status_code, result.headers, stream=True
705 )
706 for line in parse_stream(result.iter_lines())
707 ), True
708 else:
709 return (
--> 710 self._interpret_response_line(
711 result.content.decode("utf-8"),
712 result.status_code,
713 result.headers,
714 stream=False,
715 ),
716 False,
717 )
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/openai/api_requestor.py:775, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
773 stream_error = stream and "error" in resp.data
774 if stream_error or not 200 <= rcode < 300:
--> 775 raise self.handle_error_response(
776 rbody, rcode, resp.data, rheaders, stream_error=stream_error
777 )
778 return resp
InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again. | Using AzureChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/14238/comments | 5 | 2023-12-04T16:40:49Z | 2024-05-13T16:10:00Z | https://github.com/langchain-ai/langchain/issues/14238 | 2,024,280,548 | 14,238 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import DuckDuckGoSearchRun
llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-16k")
web_search = DuckDuckGoSearchRun()
tools = [
web_search,
# other tools
]
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("<Some question>")
VQDExtractionException: Could not extract vqd. keywords="blah blah blah"
---
The code above worked fine several weeks ago, but it is not working any more. I tried every version of LangChain released after September.
Maybe there is a change in DuckDuckGo's API. Please help me, thanks.
### Suggestion:
_No response_ | Issue: Recently, the DuckduckGo search tool seems not working. VQDExtractionException: Could not extract vqd. keywords="keywords" | https://api.github.com/repos/langchain-ai/langchain/issues/14233/comments | 9 | 2023-12-04T16:08:09Z | 2024-07-10T16:05:25Z | https://github.com/langchain-ai/langchain/issues/14233 | 2,024,212,346 | 14,233 |
[
"hwchase17",
"langchain"
] | ### System Info
python 3.11.6
langchain==0.0.345
langchain-core==0.0.9
jupyter_client==8.6.0
jupyter_core==5.5.0
ipykernel==6.27.0
ipython==8.17.2
on mac M2
### Who can help?
@baskaryan @tomasonjo @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. The setup is the same as in https://github.com/langchain-ai/langchain/issues/14231, although I don't think it matters.
2. Run `existing_graph.similarity_search_with_score("It is the end of the world. Take shelter!")`
3. It returns the following error
---------------------------------------------------------------------------
ClientError Traceback (most recent call last)
/Users/josselinperrus/Projects/streetpress/neo4j.ipynb Cell 17 line 1
----> 1 existing_graph.similarity_search_with_score("It is the end of the world. Take shelter !")

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/langchain/vectorstores/neo4j_vector.py:550, in Neo4jVector.similarity_search_with_score(self, query, k)
    540 """Return docs most similar to query.
    541
    542 Args:
    (...)
    547     List of Documents most similar to the query and score for each
    548 """
    549 embedding = self.embedding.embed_query(query)
--> 550 docs = self.similarity_search_with_score_by_vector(
    551     embedding=embedding, k=k, query=query
    552 )
    553 return docs

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/langchain/vectorstores/neo4j_vector.py:595, in Neo4jVector.similarity_search_with_score_by_vector(self, embedding, k, **kwargs)
    586 read_query = _get_search_index_query(self.search_type) + retrieval_query
    587 parameters = {
    588     "index": self.index_name,
    589     "k": k,
    (...)
    592     "query": kwargs["query"],
    593 }
--> 595 results = self.query(read_query, params=parameters)
    597 docs = [
    598     (
    599         Document(
    (...)
    607     for result in results
    608 ]
    609 return docs

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/langchain/vectorstores/neo4j_vector.py:242, in Neo4jVector.query(self, query, params)
    240 try:
    241     data = session.run(query, params)
--> 242     return [r.data() for r in data]
    243 except CypherSyntaxError as e:
    244     raise ValueError(f"Cypher Statement is not valid\n{e}")

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/langchain/vectorstores/neo4j_vector.py:242, in <listcomp>(.0)
    240 try:
    241     data = session.run(query, params)
--> 242     return [r.data() for r in data]
    243 except CypherSyntaxError as e:
    244     raise ValueError(f"Cypher Statement is not valid\n{e}")

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/neo4j/_sync/work/result.py:270, in Result.__iter__(self)
    268     yield self._record_buffer.popleft()
    269 elif self._streaming:
--> 270     self._connection.fetch_message()
    271 elif self._discarding:
    272     self._discard()

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/neo4j/_sync/io/_common.py:178, in ConnectionErrorHandler.__getattr__.<locals>.outer.<locals>.inner(*args, **kwargs)
    176 def inner(*args, **kwargs):
    177     try:
--> 178         func(*args, **kwargs)
    179     except (Neo4jError, ServiceUnavailable, SessionExpired) as exc:
    180         assert not asyncio.iscoroutinefunction(self.__on_error)

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/neo4j/_sync/io/_bolt.py:849, in Bolt.fetch_message(self)
    845 # Receive exactly one message
    846 tag, fields = self.inbox.pop(
    847     hydration_hooks=self.responses[0].hydration_hooks
    848 )
--> 849 res = self._process_message(tag, fields)
    850 self.idle_since = perf_counter()
    851 return res

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/neo4j/_sync/io/_bolt5.py:374, in Bolt5x0._process_message(self, tag, fields)
    372 self._server_state_manager.state = self.bolt_states.FAILED
    373 try:
--> 374     response.on_failure(summary_metadata or {})
    375 except (ServiceUnavailable, DatabaseUnavailable):
    376     if self.pool:

File ~/Projects/streetpress/venv/lib/python3.11/site-packages/neo4j/_sync/io/_common.py:245, in Response.on_failure(self, metadata)
    243 handler = self.handlers.get("on_summary")
    244 Util.callback(handler)
--> 245 raise Neo4jError.hydrate(**metadata)
ClientError: {code: Neo.ClientError.Procedure.ProcedureCallFailed} {message: Failed to invoke procedure `db.index.fulltext.queryNodes`: Caused by: org.apache.lucene.queryparser.classic.ParseException: Encountered "<EOF>" at line 1, column 42.
Was expecting one of:
<BAREOPER> ...
"(" ...
"*" ...
<QUOTED> ...
<TERM> ...
<PREFIXTERM> ...
<WILDTERM> ...
<REGEXPTERM> ...
"[" ...
"{" ...
<NUMBER> ...
<TERM> ...
"*" ...
}
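The query string appears to be passed straight into `db.index.fulltext.queryNodes`, where Lucene treats `!` as an operator, hence the parse error. A rough sketch of the kind of escaping I would have expected to happen (my own helper, not existing LangChain code):

```python
import re

# Escape Lucene's special characters before handing the text to the full-text index.
# The character list is taken from the Lucene query-parser docs; this is only an illustration.
LUCENE_SPECIAL = r'([+\-!(){}\[\]^"~*?:\\/]|&&|\|\|)'

def escape_lucene(query: str) -> str:
    return re.sub(LUCENE_SPECIAL, r"\\\1", query)

escape_lucene("It is the end of the world. Take shelter !")
# -> 'It is the end of the world. Take shelter \\!'
```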
### Expected behavior
No error | similarity_search_with_score does not accept "!" in the query | https://api.github.com/repos/langchain-ai/langchain/issues/14232/comments | 1 | 2023-12-04T16:07:02Z | 2023-12-13T17:09:52Z | https://github.com/langchain-ai/langchain/issues/14232 | 2,024,210,192 | 14,232 |
[
"hwchase17",
"langchain"
] | ### System Info
python 3.11.6
langchain==0.0.345
langchain-core==0.0.9
jupyter_client==8.6.0
jupyter_core==5.5.0
ipykernel==6.27.0
ipython==8.17.2
on mac M2
### Who can help?
@baskaryan @tomasonjo @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Add a first node to the graph
```python
first_node = Node(
    id="1",
    type="Sentence",
    properties={
        "description": "This is my first node"
    }
)
graph_document = GraphDocument(
    nodes=[first_node],
    relationships=[],
    source=Document(page_content="my first document")
)
graph.add_graph_documents([graph_document])
```
2. Add a second node to the graph
```python
second_node = Node(
    id="2",
    type="Sentence",
    properties={
        "description": "I love eating spinach"
    }
)
graph_document = GraphDocument(
    nodes=[second_node],
    relationships=[],
    source=Document(page_content="second doc")
)
graph.add_graph_documents([graph_document])
```
3. Create a hybrid index
```python
existing_graph = Neo4jVector.from_existing_graph(
    embedding=OpenAIEmbeddings(),
    url=url,
    username=username,
    password=password,
    index_name="sentence_index",
    keyword_index_name="sentence_kindex",
    node_label="Sentence",
    text_node_properties=["description"],
    embedding_node_property="embedding",
    search_type="hybrid"
)
```
4. Do a similarity search
`existing_graph.similarity_search_with_score("It is the end of the world. Take shelter")`
5. This yields a relevance score of 1 for the 1st result
```python
[(Document(page_content='\ndescription: This is my first node'), 1.0),
 (Document(page_content='\ndescription: I love eating spinach'),
  0.8576263189315796)]
```
6. Test the strategy in use:
`existing_graph._distance_strategy`
which returns `<DistanceStrategy.COSINE: 'COSINE'>`
### Expected behavior
The relevance score should return the cosine similarity score.
In this particular case the cosine similarity score is 0.747
```python
def get_embedding(text):
    response = openai.embeddings.create(input=text, model="text-embedding-ada-002")
    return response.data[0].embedding

def cosine_similarity(vec1, vec2):
    return dot(vec1, vec2) / (norm(vec1) * norm(vec2))

embedding1 = get_embedding("This is my first node")
embedding2 = get_embedding("It is the end of the world. Take shelter")
similarity = cosine_similarity(embedding1, embedding2)
print(f"Cosine Similarity: {similarity}")
```
returns
`Cosine Similarity: 0.7475260325549817` | similarity_search_with_relevance_scores returns incoherent relevance scores with Neo4jVector | https://api.github.com/repos/langchain-ai/langchain/issues/14231/comments | 3 | 2023-12-04T16:00:03Z | 2024-03-17T16:08:36Z | https://github.com/langchain-ai/langchain/issues/14231 | 2,024,195,747 | 14,231 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I can't find documentation about filtered retrievers; basically I want to fix this issue: https://github.com/langchain-ai/langchain/issues/14227
### Idea or request for content:
And according to other github issues, the way to fix it is with custom filtered retrievers.
So I tried the following:
```
class FilteredRetriever:
def __init__(self, retriever, title):
self.retriever = retriever
self.title = title
def retrieve(self, *args, **kwargs):
results = self.retriever.retrieve(*args, **kwargs)
return [doc for doc in results if doc['title'].startswith(self.title)]
filtered_retriever = FilteredRetriever(vector_store.as_retriever(), '25_1_0.pdf')
llm = AzureChatOpenAI(
azure_deployment="chat",
openai_api_version="2023-05-15",
)
retriever = vector_store.as_retriever(search_type="similarity", kwargs={"k": 3})
chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=filtered_retriever,
return_source_documents=True)
result = chain({"query": 'Can Colleagues contact their managers??'})
for res in result['source_documents']:
print(res.metadata['title'])
```
However I get this error:
```
ValidationError: 1 validation error for RetrievalQA
retriever
value is not a valid dict (type=type_error.dict)
```
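From the error, my guess is that the retriever has to subclass `BaseRetriever` (a plain class is not a valid value for `RetrievalQA`'s `retriever` field), roughly like this; untested, and the override name `_get_relevant_documents` is my assumption from reading the source:

```python
from typing import Any, List
from langchain.schema import BaseRetriever, Document

class FilteredRetriever(BaseRetriever):
    retriever: BaseRetriever
    title: str

    def _get_relevant_documents(self, query: str, *, run_manager: Any = None) -> List[Document]:
        docs = self.retriever.get_relevant_documents(query)
        # Keep only the chunks whose title metadata matches the requested document.
        return [doc for doc in docs if doc.metadata.get("title", "").startswith(self.title)]

filtered_retriever = FilteredRetriever(retriever=vector_store.as_retriever(), title="25_1_0.pdf")
```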
but as I cant find documentation about it, I am not sure how to solve it | DOC: How to create a custom filtered retriever | https://api.github.com/repos/langchain-ai/langchain/issues/14229/comments | 7 | 2023-12-04T15:07:02Z | 2024-05-15T16:06:53Z | https://github.com/langchain-ai/langchain/issues/14229 | 2,024,080,225 | 14,229 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.342
langchain-core 0.0.7
azure-search-documents 11.4.0b8
Python: 3.10
### Who can help?
@hw
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The following code works fine:
```
from langchain_core.vectorstores import VectorStore, VectorStoreRetriever
index_name: str = "langchain-vector-demo"
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name="vector-1701341754619",
embedding_function=embeddings.embed_query
)
res = vector_store.similarity_search(
query="Can Colleagues contact their managers?", k=20, search_type="hybrid", filters="title eq '25_1_0.pdf'")
```
The `res` object contains ONLY the chunks whose title is '25_1_0.pdf'.
However when using it with an LLM:
```
llm = AzureChatOpenAI(
azure_deployment="chat",
openai_api_version="2023-05-15",
)
retriever = vector_store.as_retriever(search_type="similarity", filters="title eq '25_1_0.pdf'", kwargs={"k": 3})
chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
result = chain({"query": 'Can Colleagues contact their managers??'})
for res in result['source_documents']:
print(res.metadata['title'])
```
My output has chunks which don't respect the filter:
142_2_0.pdf
99_9_0.docx
99_9_0.docx
142_2_0.pdf
### Expected behavior
The generated answer and the returned `source_documents` should contain only chunks that respect the given filters. | Filters dont work with Azure Search Vector Store retriever | https://api.github.com/repos/langchain-ai/langchain/issues/14227/comments | 5 | 2023-12-04T14:58:48Z | 2024-05-27T16:06:08Z | https://github.com/langchain-ai/langchain/issues/14227 | 2,024,062,849 | 14,227
[
"hwchase17",
"langchain"
] | ### System Info
ml.g5.48xlarge EC2 instance on AWS with:
- Langchain 0.0.305
- Python 3.10
### Who can help?
@hwchase17 @agola11
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I run into the above error when importing `HuggingFaceEmbeddings`.
```py
from langchain.embeddings import HuggingFaceEmbeddings
```
I believe this is due to the fact that we have a file named `requests.py` in the root folder of the project, which conflicts with the `requests` package.
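A quick way to see the shadowing (paths taken from the traceback below):

```python
import requests
print(requests.__file__)
# What I get:   ~/SageMaker/langchain/libs/langchain/langchain/requests.py   (the local file)
# What I want:  .../site-packages/requests/__init__.py                       (the real package)
```

The full traceback: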
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[19], line 1
----> 1 from langchain.embeddings import HuggingFaceEmbeddings
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/embeddings/__init__.py:17
14 import logging
15 from typing import Any
---> 17 from langchain.embeddings.aleph_alpha import (
18 AlephAlphaAsymmetricSemanticEmbedding,
19 AlephAlphaSymmetricSemanticEmbedding,
20 )
21 from langchain.embeddings.awa import AwaEmbeddings
22 from langchain.embeddings.azure_openai import AzureOpenAIEmbeddings
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/embeddings/aleph_alpha.py:6
3 from langchain_core.embeddings import Embeddings
4 from langchain_core.pydantic_v1 import BaseModel, root_validator
----> 6 from langchain.utils import get_from_dict_or_env
9 class AlephAlphaAsymmetricSemanticEmbedding(BaseModel, Embeddings):
10 """Aleph Alpha's asymmetric semantic embedding.
11
12 AA provides you with an endpoint to embed a document and a query.
(...)
30
31 """
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/utils/__init__.py:14
7 from langchain_core.utils.formatting import StrictFormatter, formatter
8 from langchain_core.utils.input import (
9 get_bolded_text,
10 get_color_mapping,
11 get_colored_text,
12 print_text,
13 )
---> 14 from langchain_core.utils.utils import (
15 check_package_version,
16 convert_to_secret_str,
17 get_pydantic_field_names,
18 guard_import,
19 mock_now,
20 raise_for_status_with_text,
21 xor_args,
22 )
24 from langchain.utils.env import get_from_dict_or_env, get_from_env
25 from langchain.utils.math import cosine_similarity, cosine_similarity_top_k
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain_core/utils/__init__.py:14
7 from langchain_core.utils.formatting import StrictFormatter, formatter
8 from langchain_core.utils.input import (
9 get_bolded_text,
10 get_color_mapping,
11 get_colored_text,
12 print_text,
13 )
---> 14 from langchain_core.utils.loading import try_load_from_hub
15 from langchain_core.utils.utils import (
16 build_extra_kwargs,
17 check_package_version,
(...)
23 xor_args,
24 )
26 __all__ = [
27 "StrictFormatter",
28 "check_package_version",
(...)
41 "build_extra_kwargs",
42 ]
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain_core/utils/loading.py:10
7 from typing import Any, Callable, Optional, Set, TypeVar, Union
8 from urllib.parse import urljoin
---> 10 import requests
12 DEFAULT_REF = os.environ.get("LANGCHAIN_HUB_DEFAULT_REF", "master")
13 URL_BASE = os.environ.get(
14 "LANGCHAIN_HUB_URL_BASE",
15 "[https://raw.githubusercontent.com/hwchase17/langchain-hub/{ref}/](https://raw.githubusercontent.com/hwchase17/langchain-hub/%7Bref%7D/)",
16 )
File ~/SageMaker/langchain/libs/langchain/langchain/requests.py:2
1 """DEPRECATED: Kept for backwards compatibility."""
----> 2 from langchain.utilities import Requests, RequestsWrapper, TextRequestsWrapper
4 __all__ = [
5 "Requests",
6 "RequestsWrapper",
7 "TextRequestsWrapper",
8 ]
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/utilities/__init__.py:8
1 """**Utilities** are the integrations with third-part systems and packages.
2
3 Other LangChain classes use **Utilities** to interact with third-part systems
4 and packages.
5 """
6 from typing import Any
----> 8 from langchain.utilities.requests import Requests, RequestsWrapper, TextRequestsWrapper
11 def _import_alpha_vantage() -> Any:
12 from langchain.utilities.alpha_vantage import AlphaVantageAPIWrapper
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/utilities/requests.py:10
6 import requests
7 from langchain_core.pydantic_v1 import BaseModel, Extra
---> 10 class Requests(BaseModel):
11 """Wrapper around requests to handle auth and async.
12
13 The main purpose of this wrapper is to handle authentication (by saving
14 headers) and enable easy async methods on the same base object.
15 """
17 headers: Optional[Dict[str, str]] = None
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/utilities/requests.py:27, in Requests()
24 extra = Extra.forbid
25 arbitrary_types_allowed = True
---> 27 def get(self, url: str, **kwargs: Any) -> requests.Response:
28 """GET the URL and return the text."""
29 return requests.get(url, headers=self.headers, auth=self.auth, **kwargs)
AttributeError: partially initialized module 'requests' has no attribute 'Response' (most likely due to a circular import)
```
### Expected behavior
To be able to correctly import and use the embeddings. | partially initialized module 'requests' has no attribute 'Response' | https://api.github.com/repos/langchain-ai/langchain/issues/14226/comments | 1 | 2023-12-04T14:24:15Z | 2024-03-16T16:10:31Z | https://github.com/langchain-ai/langchain/issues/14226 | 2,023,977,308 | 14,226 |
[
"hwchase17",
"langchain"
] | ### System Info
Python: 3.10
LangChain: 0.0.344
OpenSearch: Amazon OpenSearch Serverless Vector Engine
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have loaded documents into OpenSearch with metadata that includes a date value. When using a SelfQueryRetriever, the generated query specifies the value type as "date", which results in an invalid query being sent to OpenSearch, and OpenSearch then returns an error. I have set the AttributeInfo type to "string", but even if it were "date" the query would still be invalid.
```
document_content_description = "Feedback from users"
metadata_field_info = [
AttributeInfo(
name="timestamp",
description="A string representing the date that the feedback was submitted in the 'yyyy-MM-dd' format",
type="string",
)
]
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
retriever.get_relevant_documents("Summarize feedback submitted on 2023-06-01")
```
Output Logs:
```
"repr": "StructuredQuery(query=' ', filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='timestamp', value={'date': '2023-06-01', 'type': 'date'}), limit=None)"
...
{
"size": 4,
"query": {
"bool": {
"filter": {
"term": {
"metadata.timestamp": {
"date": "2023-06-01",
"type": "date"
}
}
},
...
}
{
"error": {
"root_cause": [
{
"type": "parsing_exception",
"reason": "[term] query does not support [date]",
"line": 1,
"col": 76
}
],
"type": "x_content_parse_exception",
"reason": "[1:76] [bool] failed to parse field [filter]",
"caused_by": {
"type": "parsing_exception",
"reason": "[term] query does not support [date]",
"line": 1,
"col": 76
}
},
"status": 400
}
```
### Expected behavior
LangChain should generate a valid OpenSearch query such as:
```
{
"size": 4,
"query": {
"bool": {
"filter": {
"term": {
"metadata.timestamp.keyword": "2023-06-01"
}
},
...
}
``` | SelfQueryRetriever with OpenSearch generating invalid queries with Date type | https://api.github.com/repos/langchain-ai/langchain/issues/14225/comments | 3 | 2023-12-04T12:21:19Z | 2024-01-22T09:09:43Z | https://github.com/langchain-ai/langchain/issues/14225 | 2,023,732,355 | 14,225 |
[
"hwchase17",
"langchain"
] | ### System Info
MacOS
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
As described in the [docs page](https://python.langchain.com/docs/integrations/tools/dalle_image_generator):
```
from langchain.agents import initialize_agent, load_tools
tools = load_tools(["dalle-image-generator"])
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
output = agent.run("Create an image of a halloween night at a haunted museum")
```
Below is output:
```
> Entering new AgentExecutor chain...
I need to generate an image from a text description
Action: Dall-E-Image-Generator
Action Input: "Halloween night at a haunted museum"
Observation: https://oaidalleapiprodscus.blob.core.windows.net/private/org-yt03sAlJZ8YRfqIcNivAAqZu/user-xV6mgISZftMROz9SukNqCHqH/img-EKpmrjqlb1988YrkkBm0vgjr.png?st=2023-12-04T08%3A30%3A50Z&se=2023-12-04T10%3A30%3A50Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2023-12-03T23%3A02%3A03Z&ske=2023-12-04T23%3A02%3A03Z&sks=b&skv=2021-08-06&sig=9TvprwW3Wl3ZHj%2B2ga6juBT1KQLJIc9TUz%2BDIVcd3XA%3D
Thought: I now know the final answer
Final Answer: https://oaidalleapiprodscus.blob.core.windows.net/private/org-yt03sAlJZ8YRfqIcNivAAqZu/user-xV6mgISZftMROz9SukNqCHqH/img-EKpmrjqlb1988YrkkBm0vgjr.png?st=2023-12-04T08%3A30%3A50Z&se=2023-12-04T10%3A30%3A50Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2023-12-03T23%3A02%3A03Z&ske=2023-12-04T23%
> Finished chain.
```
Seems OK? But the URL has been cut short; it is not the original one, so you will not get the right picture.
Instead, I got this:
<img width="1472" alt="image" src="https://github.com/langchain-ai/langchain/assets/19658300/f856e606-9d9e-466d-bd8f-de4eed70f15c">
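A workaround I'm considering is to call the wrapper directly instead of letting the agent repeat the URL in its final answer, since the Observation above still contains the full link (sketch, untested):

```python
from langchain.utilities.dalle_image_generator import DallEAPIWrapper

image_url = DallEAPIWrapper().run("a halloween night at a haunted museum")
print(image_url)  # the full signed URL, not re-typed (and truncated) by the LLM
```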
### Expected behavior
Dall-E is an important tool in OpenAI.
I want to get a picture that I can actually see, so it should give me the original link.
Just like this:
```
> Entering new AgentExecutor chain...
I can use the Dall-E-Image-Generator to generate an image of a volcano island based on the text description.
Action: Dall-E-Image-Generator
Action Input: "A volcano island"
Observation: https://oaidalleapiprodscus.blob.core.windows.net/private/org-yt03sAlJZ8YRfqIcNivAAqZu/user-xV6mgISZftMROz9SukNqCHqH/img-H3m0wSNxDXVUkUKiE9kOKgvg.png?st=2023-12-04T09%3A39%3A05Z&se=2023-12-04T11%3A39%3A05Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2023-12-03T22%3A42%3A28Z&ske=2023-12-04T22%3A42%3A28Z&sks=b&skv=2021-08-06&sig=WSEO5/OX5GgYaNTWxZhNmsK%2BqeaDLMEsDdGEnHX18BY%3D
Thought:I now know the final answer.
Final Answer: The image of a volcano island can be found at the following link: https://oaidalleapiprodscus.blob.core.windows.net/private/org-yt03sAlJZ8YRfqIcNivAAqZu/user-xV6mgISZftMROz9SukNqCHqH/img-H3m0wSNxDXVUkUKiE9kOKgvg.png?st=2023-12-04T09%3A39%3A05Z&se=2023-12-04T11%3A39%3A05Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2023-12-03T22%3A42%3A28Z&ske=2023-12-04T22%3A42%3A28Z&sks=b&skv=2021-08-06&sig=WSEO5/OX5GgYaNTWxZhNmsK%2BqeaDLMEsDdGEnHX18BY%3D
> Finished chain.
The image of a volcano island can be found at the following link: https://oaidalleapiprodscus.blob.core.windows.net/private/org-yt03sAlJZ8YRfqIcNivAAqZu/user-xV6mgISZftMROz9SukNqCHqH/img-H3m0wSNxDXVUkUKiE9kOKgvg.png?st=2023-12-04T09%3A39%3A05Z&se=2023-12-04T11%3A39%3A05Z&sp=r&sv=2021-08-06&sr=b&rscd=inline&rsct=image/png&skoid=6aaadede-4fb3-4698-a8f6-684d7786b067&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2023-12-03T22%3A42%3A28Z&ske=2023-12-04T22%3A42%3A28Z&sks=b&skv=2021-08-06&sig=WSEO5/OX5GgYaNTWxZhNmsK%2BqeaDLMEsDdGEnHX18BY%3D
``` | Dall-E Image Generator return url without authentication information | https://api.github.com/repos/langchain-ai/langchain/issues/14223/comments | 4 | 2023-12-04T10:41:50Z | 2024-04-16T22:36:58Z | https://github.com/langchain-ai/langchain/issues/14223 | 2,023,542,482 | 14,223 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am working on extracting data from HTML files. I need to extract table data to store in a data frame as a table. With the help of langchain document loader I can extract the data row wise but the headers of columns are not getting extracted. How to extract column headers along with the data.
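For comparison, this is the kind of result I am after; plain `pandas.read_html` keeps the header row (not a LangChain solution, just to illustrate what I mean by column headers):

```python
import pandas as pd

tables = pd.read_html("report.html")   # one DataFrame per <table>, header row preserved
df = tables[0]
print(df.columns.tolist())             # these are the headers that the document loader drops
```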
### Suggestion:
_No response_ | Table data extraction from HTML files. | https://api.github.com/repos/langchain-ai/langchain/issues/14218/comments | 2 | 2023-12-04T09:49:35Z | 2024-03-17T16:08:31Z | https://github.com/langchain-ai/langchain/issues/14218 | 2,023,436,264 | 14,218 |
[
"hwchase17",
"langchain"
] | https://github.com/langchain-ai/langchain/blob/ee94ef55ee6ab064da08340817955f821dfa6261/libs/langchain/langchain/chains/llm.py#L71
In my humble opinion, I think `llm_kwargs` variable should be a parameter declared in the LLM class rather than the LLM Chain. One of the reasons for this, is that when you are declaring, for instance, a `RetrievalQA` chain `from_chain_type` or `from_llm` , you cannot specify these `llm_kwargs`.
You can workaround that by calling the inner `llm_chain` with (given that you're using a `combine_documents_chain`):
```python
chain.combine_documents_chain.llm_chain.llm_kwargs = {'test': 'test'}
```
But it seems very odd that you have to do that, and the kwargs seem like they should be a responsibility of the LLM, not the LLM Chain.
Happy to be proven wrong! 🙂 | Add `llm_kwargs` to `BaseRetrievalQA.from_llm` | https://api.github.com/repos/langchain-ai/langchain/issues/14216/comments | 2 | 2023-12-04T09:38:10Z | 2024-03-16T16:10:17Z | https://github.com/langchain-ai/langchain/issues/14216 | 2,023,415,515 | 14,216 |
[
"hwchase17",
"langchain"
] | ### System Info
Mac os
Python3.9
### Who can help?
@Jiaaming
I'm working on this issue.
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
when you run the BiliBiliLoader example from the official [docs](https://python.langchain.com/docs/integrations/document_loaders/bilibili)
```
loader = BiliBiliLoader(
[
"https://www.bilibili.com/video/BV1g84y1R7oE/",
]
)
docs = loader.load()
print(docs)
```
will get
```
bilibili_api.exceptions.CredentialNoSessdataException.CredentialNoSessdataException: Credential 类未提供 sessdata 或者为空。
Process finished with exit code 1
```
This is because the original [bilibili_api](https://nemo2011.github.io/bilibili-api/#/get-credential) requires a `Credential` to fetch the info from the video (the error above says the Credential class was not given `sessdata`, or it is empty).
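For context, this is how the underlying library expects the credential to be built (names are taken from the bilibili_api docs); the loader currently has no way to pass such a credential through:

```python
from bilibili_api import Credential, video

# Values come from the cookies of a logged-in bilibili session; placeholders here.
credential = Credential(sessdata="<SESSDATA>", bili_jct="<BILI_JCT>", buvid3="<BUVID3>")
v = video.Video(bvid="BV1g84y1R7oE", credential=credential)  # roughly what the loader would need to do internally
```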
### Expected behavior
Should return a Document object
```
[Document(page_content="Video Title:...,description:....)]
``` | BiliBiliLoader Credential No Sessdata error | https://api.github.com/repos/langchain-ai/langchain/issues/14213/comments | 4 | 2023-12-04T06:58:42Z | 2024-06-17T16:09:49Z | https://github.com/langchain-ai/langchain/issues/14213 | 2,023,161,496 | 14,213 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```
ValidationError: 2 validation errors for ConversationChain
advisor_summary
  extra fields not permitted (type=value_error.extra)
__root__
  Got unexpected prompt input variables. The prompt expects ['advisor_summary', 'history', 'input'], but got ['history'] as inputs from memory, and input as the normal input key. (type=value_error)
```
How to resolve this error?
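For context, my prompt has an extra `advisor_summary` variable, which `ConversationChain` does not allow. One workaround I am considering (untested) is pre-filling that variable so the chain only sees the two inputs it expects:

```python
# `prompt`, `llm` and `advisor_summary_text` come from my own setup.
prompt_with_summary = prompt.partial(advisor_summary=advisor_summary_text)

chain = ConversationChain(
    llm=llm,
    prompt=prompt_with_summary,  # now only needs 'history' and 'input'
    memory=ConversationBufferMemory(),
)
```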
### Suggestion:
_No response_ | ConversationChain error with multiple inputs | https://api.github.com/repos/langchain-ai/langchain/issues/14210/comments | 4 | 2023-12-04T04:34:52Z | 2024-03-17T16:08:26Z | https://github.com/langchain-ai/langchain/issues/14210 | 2,023,006,180 | 14,210 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi, I am trying to use `initialize_agent` to leverage custom tools, but the agent modifies the user's input in its own way and causes errors in the output. For example, in SQLTool it automatically converts the input into a SQL query, which is wrong and ends up giving me a wrong answer. I want to know how I can ensure the raw input is passed into the `action_input` of `initialize_agent`. An example is below:
User input: How many users have purchased PlayStation from New York in October?
Expected flow of execution will start like this:
```
> Entering new AgentExecutor chain...
Action:
{
  "action": "SQLTool",
  "action_input": {
    "raw_input": "How many users have purchased PlayStation from New York in October?"
  }
}
```
Also, since I am using `BaseTool` for the custom tools, I had to make sure the output format is a string; can we customize it, for example to get the output in JSON format?
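For reference, this is roughly how the tool is declared today (simplified, names are placeholders). I am not sure whether tightening the `description` to say "pass the user's question verbatim" is the intended way to stop the agent from rewriting the input:

```python
from langchain.tools import BaseTool

class SQLTool(BaseTool):
    name = "SQLTool"
    description = (
        "Answers analytics questions about purchases. "
        "Pass the user's question verbatim as the input."
    )

    def _run(self, raw_input: str) -> str:
        # ...query the database here; BaseTool currently forces a string return value
        return "<result as a string>"
```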
### Suggestion:
_No response_ | Issue: How to avoid modifying user input and output in initialize_agent? | https://api.github.com/repos/langchain-ai/langchain/issues/14209/comments | 11 | 2023-12-04T03:45:23Z | 2024-03-19T16:05:32Z | https://github.com/langchain-ai/langchain/issues/14209 | 2,022,958,852 | 14,209 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I have a problem with this code: `llm_chain = LLMChain(prompt=prompt, llm=local_llm)`. How do I load a local Hugging Face model and use it as `local_llm`?
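For reference, this is the kind of setup I am attempting; my understanding is that a local Hugging Face model can be wrapped with `HuggingFacePipeline` (the model name below is only an example):

```python
from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

local_llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",                          # or a local path to a downloaded model
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 64},
)

prompt = PromptTemplate.from_template("Question: {question}\nAnswer:")
llm_chain = LLMChain(prompt=prompt, llm=local_llm)
```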
### Suggestion:
_No response_ | Issue: <How to use the local huggingface models> | https://api.github.com/repos/langchain-ai/langchain/issues/14208/comments | 1 | 2023-12-04T03:05:05Z | 2024-03-16T16:10:01Z | https://github.com/langchain-ai/langchain/issues/14208 | 2,022,925,148 | 14,208 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'd like to use Hugging Face's Chat UI frontend with LangChain.
https://github.com/huggingface/chat-ui
But it looks like the Chat UI is only available through Hugging Face's Text Generation Inference endpoint.
https://github.com/huggingface/chat-ui/issues/466
How can I serve the chain I have configured with LangChain in TGI format so I can use Chat UI?
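Something like the following is what I have in mind: a thin FastAPI shim that imitates TGI's `/generate` route so Chat UI could point at it. The endpoint and field names are my assumption from the TGI docs, and streaming (`/generate_stream`) is not handled, so this is only a rough sketch:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerateRequest(BaseModel):
    inputs: str
    parameters: dict = {}

@app.post("/generate")
async def generate(req: GenerateRequest):
    # `chain` is the LangChain chain I have already configured; assuming it returns a string.
    answer = await chain.ainvoke({"question": req.inputs})
    return {"generated_text": answer}
```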
Thank you in advance.
### Suggestion:
_No response_ | Issue: I'd like to use Hugging Face's Chat UI frontend with LangChain. | https://api.github.com/repos/langchain-ai/langchain/issues/14207/comments | 2 | 2023-12-04T02:38:47Z | 2024-04-15T10:06:12Z | https://github.com/langchain-ai/langchain/issues/14207 | 2,022,901,487 | 14,207 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I am wondering if anyone has a workaround when using ConversationalRetrievalChain to retrieve documents with their sources: how can I prevent the chain from returning sources for questions that have no relevant sources?
query = "How are you doing?"
result = chain({"question": query, "chat_history": chat_history})
result['answer']
"""
I'm doing well, thank you.
SOURCES: /content/xxx.pdf
"""
### Suggestion:
SOURCES: | ConversationalRetrievalChain returns sources to questions without context | https://api.github.com/repos/langchain-ai/langchain/issues/14203/comments | 6 | 2023-12-03T21:33:41Z | 2024-04-09T00:22:31Z | https://github.com/langchain-ai/langchain/issues/14203 | 2,022,724,502 | 14,203 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
_No response_
### Suggestion:
_No response_ | Can i save openai token usage information which get from get_openai_callback directlly to database with SQLChatMessageHistory | https://api.github.com/repos/langchain-ai/langchain/issues/14199/comments | 1 | 2023-12-03T15:59:28Z | 2024-03-16T16:09:46Z | https://github.com/langchain-ai/langchain/issues/14199 | 2,022,602,013 | 14,199 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain version: 0.0.342
Python version: 3.9
OS: Mac OS
### Who can help?
@3coins
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use any Kendra index that contains FAQ style answers and try to query them as a retriever.
### Expected behavior
### Background
When using `AmazonKendraRetriever`, `get_relevant_documents` first calls the `retrieve` API and if nothing is returned, it calls the `query` API (see [here](https://github.com/langchain-ai/langchain/blob/0bdb4343838c4513d15cd9702868adf6f652421c/libs/langchain/langchain/retrievers/kendra.py#L391-L399)).
The `query` API has the capability to return answers from not just documents but also Kendra's FAQs (as described in [API docs for Kendra](https://docs.aws.amazon.com/kendra/latest/dg/query-responses-types.html#response-types)).
### Issue
While Kendra normally only returns snippets, it does return the full text for FAQs. The problem is that LangChain ignores this today and hence returns only a snippet for FAQs as well. The problem lies in these lines of code:
https://github.com/langchain-ai/langchain/blob/0bdb4343838c4513d15cd9702868adf6f652421c/libs/langchain/langchain/retrievers/kendra.py#L225-L244
The key `AnswerText` is always assumed to be at the 0th index, when it can in fact often be at the 1st index. No such assumption can be made from the [documentation](https://docs.aws.amazon.com/kendra/latest/APIReference/API_QueryResultItem.html#API_QueryResultItem_Contents) of the `QueryResultItem` structure either.
For the Kendra index I am testing with, this is indeed the case which is how I discovered the bug.
This is easily fixable where we loop over all indices to search for the key `AnswerText`. I can make the PR if this is deemed as the desired behavior.
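Roughly what I have in mind for the fix (untested; based on the `QueryResultItem` shape described in the API docs):

```python
from typing import Optional

def _get_answer_text(item: dict) -> Optional[str]:
    # Search all AdditionalAttributes instead of assuming AnswerText sits at index 0.
    for attr in item.get("AdditionalAttributes", []):
        if attr.get("Key") == "AnswerText":
            return attr["Value"]["TextWithHighlightsValue"]["Text"]
    return None
```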
Side note: I am happy to add some tests for `kendra.py` which I realized has no testing impemented. | Amazon Kendra: Full answer text not returned for FAQ type answers | https://api.github.com/repos/langchain-ai/langchain/issues/14198/comments | 1 | 2023-12-03T15:31:50Z | 2024-03-16T16:09:41Z | https://github.com/langchain-ai/langchain/issues/14198 | 2,022,589,587 | 14,198 |
[
"hwchase17",
"langchain"
] | Hey @dosubot, im using this code
```python
memory = ConversationBufferMemory(
return_messages=True, output_key="answer", input_key="question"
)
retriever = load_emdeddings(cfg.faiss_persist_directory, cfg.embeddings).as_retriever(search_type="similarity_score_threshold",
search_kwargs={"score_threshold": .65,
"k": 2})
memory.load_memory_variables({})
_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
ANSWER_PROMPT = ChatPromptTemplate.from_template(template)
DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template="{page_content}")
def _combine_documents(
docs, document_prompt=DEFAULT_DOCUMENT_PROMPT, document_separator="\n\n"
):
doc_strings = [format_document(doc, document_prompt) for doc in docs]
return document_separator.join(doc_strings)
def _format_chat_history(chat_history: List[Tuple[str, str]]) -> str:
# chat history is of format:
# [
# (human_message_str, ai_message_str),
# ...
# ]
# see below for an example of how it's invoked
buffer = ""
for dialogue_turn in chat_history:
human = "Human: " + dialogue_turn[0]
ai = "Assistant: " + dialogue_turn[1]
buffer += "\n" + "\n".join([human, ai])
return buffer
_inputs = RunnableParallel(
standalone_question=RunnablePassthrough.assign(
chat_history=lambda x: _format_chat_history(x["chat_history"])
)
| CONDENSE_QUESTION_PROMPT
| ChatOpenAI(temperature=0)
| StrOutputParser(),
)
_context = {
"context": itemgetter("standalone_question") | retriever | _combine_documents,
"question": lambda x: x["standalone_question"],
}
conversational_qa_chain = _inputs | _context | ANSWER_PROMPT | ChatOpenAI()
loaded_memory = RunnablePassthrough.assign(
chat_history=RunnableLambda(memory.load_memory_variables) | itemgetter("history")
)
# Now we calculate the standalone question
standalone_question = {
"standalone_question": {
"question": lambda x: x["question"],
"chat_history": lambda x: _format_chat_history(x["chat_history"]),
}
| CONDENSE_QUESTION_PROMPT
| ChatOpenAI(temperature=0)
| StrOutputParser(),
}
# Now we retrieve the documents
retrieved_documents = {
"docs": itemgetter("standalone_question") | retriever,
"question": lambda x: x["standalone_question"],
}
# Now we construct the inputs for the final prompt
final_inputs = {
"context": lambda x: _combine_documents(x["docs"]),
"question": itemgetter("question"),
}
# And finally, we do the part that returns the answers
answer = {
"answer": final_inputs | ANSWER_PROMPT | ChatOpenAI(),
"docs": itemgetter("docs"),
}
# And now we put it all together!
final_chain = loaded_memory | standalone_question | retrieved_documents | answer
inputs = {"question": "what is my name?"}
result = final_chain.invoke(inputs)
memory.save_context(inputs, {"answer": result["answer"].content})
result
```
**If I run this a second time, I get this error:**
`TypeError: 'HumanMessage' object is not subscriptable`
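My current suspicion: `ConversationBufferMemory(return_messages=True)` stores `HumanMessage`/`AIMessage` objects, while `_format_chat_history` indexes into `(human, ai)` tuples, so the second run (when the history is no longer empty) blows up. A variant I am about to try (untested):

```python
from langchain.schema import BaseMessage, HumanMessage

def _format_chat_history(chat_history) -> str:
    buffer = ""
    for turn in chat_history:
        if isinstance(turn, BaseMessage):            # return_messages=True path
            role = "Human" if isinstance(turn, HumanMessage) else "Assistant"
            buffer += f"\n{role}: {turn.content}"
        else:                                        # (human_str, ai_str) tuple path
            buffer += f"\nHuman: {turn[0]}\nAssistant: {turn[1]}"
    return buffer
```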
| TypeError: 'HumanMessage' object is not subscriptable | https://api.github.com/repos/langchain-ai/langchain/issues/14196/comments | 7 | 2023-12-03T07:57:32Z | 2024-04-18T16:25:14Z | https://github.com/langchain-ai/langchain/issues/14196 | 2,022,412,641 | 14,196 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
We create an agent using initialize_agent, register the tool, and configure it to be selected through the agent.
However, in the agent, regardless of which tool is selected, I want to pass a data object called request of BaseModel type when executing the tool.
I would appreciate it if you could tell me how to configure and connect the agent and tool.
### Suggestion:
_No response_ | Passing BaseModel type data from agent to tool | https://api.github.com/repos/langchain-ai/langchain/issues/14192/comments | 2 | 2023-12-03T05:03:47Z | 2024-03-16T16:09:36Z | https://github.com/langchain-ai/langchain/issues/14192 | 2,022,372,837 | 14,192 |
[
"hwchase17",
"langchain"
] | @dosubot , how do i use system prompt inside conversational retreival chain?
prompt = ChatPromptTemplate(
messages=[
SystemMessagePromptTemplate.from_template(
"You are a nice chatbot named James-AI having a conversation with a human."
),
MessagesPlaceholder(variable_name="chat_history"),
HumanMessagePromptTemplate.from_template("{question}")]
)
memory = ConversationBufferWindowMemory(k=5, memory_key="chat_history", return_messages=True)
**retriever = new_db.as_retriever(search_type="similarity_score_threshold",
search_kwargs={"score_threshold": .65,
"k": 2})**
**qa = ConversationalRetrievalChain.from_llm(cfg.llm, verbose=True,retriever=retriever, memory=memory, prompt=prompt)**
if i use like that then it says
ValidationError: 1 validation error for ConversationalRetrievalChain
prompt
extra fields not permitted (type=value_error.extra) | How do i use system prompt template inside conversational retrieval chain? | https://api.github.com/repos/langchain-ai/langchain/issues/14191/comments | 8 | 2023-12-03T04:30:52Z | 2024-04-19T16:26:03Z | https://github.com/langchain-ai/langchain/issues/14191 | 2,022,366,141 | 14,191 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
When I use an existing library that includes langsmith (such as https://github.com/langchain-ai/weblangchain), but I don't have access to langsmith API as I'm still on the waitlist. How can I quickly disable the langsmith functionality without commenting out the code?
### Suggestion:
_No response_ | Issue: How to conveniently disable langsmith calls? | https://api.github.com/repos/langchain-ai/langchain/issues/14189/comments | 9 | 2023-12-03T02:55:24Z | 2024-07-26T13:01:56Z | https://github.com/langchain-ai/langchain/issues/14189 | 2,022,302,241 | 14,189 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
pip list:
- langchain 0.0.314
- langchain-core 0.0.8
```python
prompt_template = """
### [INST]
Assistant is a large language model trained by Mistral.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Context:
------
Assistant has access to the following tools:
{tools}
To use a tool, please use the following format:
'''
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
'''
When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:
'''
Thought: Do I need to use a tool? No
Final Answer: [your response here]
'''
Begin!
Previous conversation history:
{chat_history}
New input: {input}
Current Scratchpad:
{agent_scratchpad}
[/INST]
"""
# Create prompt from prompt template
prompt = PromptTemplate(
input_variables=['agent_scratchpad', 'chat_history', 'input', 'tool_names', 'tools'],
template=prompt_template,
)
prompt = prompt.partial(
tools=render_text_description(tools),
tool_names=", ".join([t.name for t in tools]),
)
# Create llm chain
llm_chain = LLMChain(llm=mistral_llm, prompt=prompt)
from langchain.agents import AgentOutputParser
from langchain_core.agents import AgentAction, AgentFinish
import re
from typing import List, Union
class CustomOutputParser(AgentOutputParser):
def parse(self, output) -> Union[AgentAction, AgentFinish]:
print(output)
output = ' '.join(output.split())
# Check if the output contains 'Final Answer:'
if 'Final Answer:' in output:
# Extract the final answer from the output
final_answer = output.split('Final Answer:')[1].strip()
print(final_answer)
agent_finish = AgentFinish(
# Return values is generally always a dictionary with a single `output` key
# It is not recommended to try anything else at the moment :)
return_values={"output": final_answer},
log=output,
)
print(agent_finish)
return agent_finish
else:
# Handle other cases or raise an error
raise ValueError("Unexpected output format")
output_parser = CustomOutputParser()
# Create an agent with your LLMChain
agent = ConversationalAgent(llm_chain=llm_chain , output_parser=output_parser)
memory = ConversationBufferMemory(memory_key="chat_history")
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)
agent_executor.run("Hi, I'm Madhav?")
```
I get the following output:
```python
> Entering new AgentExecutor chain...
Error in StdOutCallbackHandler.on_agent_action callback: 'tuple' object has no attribute 'log'
Thought: Do I need to use a tool? No
Final Answer: Hello, Madhav! How can I assist you today?
Hello, Madhav! How can I assist you today?
return_values={'output': 'Hello, Madhav! How can I assist you today?'} log='Thought: Do I need to use a tool? No Final Answer: Hello, Madhav! How can I assist you today?'
File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:1141, in AgentExecutor._call(self, inputs, run_manager)
1139 # We now enter the agent loop (until it returns something).
1140 while self._should_continue(iterations, time_elapsed):
-> 1141 next_step_output = self._take_next_step(
1142 name_to_tool_map,
1143 color_mapping,
1144 inputs,
1145 intermediate_steps,
1146 run_manager=run_manager,
1147 )
1148 if isinstance(next_step_output, AgentFinish):
1149 return self._return(
1150 next_step_output, intermediate_steps, run_manager=run_manager
1151 )
File /opt/conda/lib/python3.10/site-packages/langchain/agents/agent.py:983, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
981 run_manager.on_agent_action(agent_action, color="green")
982 # Otherwise we lookup the tool
--> 983 if agent_action.tool in name_to_tool_map:
984 tool = name_to_tool_map[agent_action.tool]
985 return_direct = tool.return_direct
AttributeError: 'tuple' object has no attribute 'tool'
...
```
I'm having trouble understanding what is happening here. I'm certain my output contains `Final Answer`, so I'd assume I need an `AgentFinish` as per [this tutorial](https://python.langchain.com/docs/modules/agents/how_to/custom_llm_agent#output-parser).
Any tips would be much appreciated!
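For reference, a parser that also covers the tool-use branch would look roughly like this (sketch only, not a confirmed fix; the class name is made up and the regex just mirrors the `Action:` / `Action Input:` format from the prompt above):
```python
import re
from typing import Union

from langchain.agents import AgentOutputParser
from langchain_core.agents import AgentAction, AgentFinish


class TwoBranchOutputParser(AgentOutputParser):
    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        if "Final Answer:" in text:
            # No tool needed: hand the answer back to the executor
            return AgentFinish(
                return_values={"output": text.split("Final Answer:")[-1].strip()},
                log=text,
            )
        # Tool branch: "Action: <tool name>" followed by "Action Input: <input>"
        match = re.search(r"Action: (.*?)[\n]*Action Input: ([\s\S]*)", text)
        if not match:
            raise ValueError(f"Could not parse LLM output: `{text}`")
        return AgentAction(
            tool=match.group(1).strip(),
            tool_input=match.group(2).strip().strip('"'),
            log=text,
        )
```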
### Suggestion:
_No response_ | Issue: Unable to properly parse output of custom chat agent. | https://api.github.com/repos/langchain-ai/langchain/issues/14185/comments | 5 | 2023-12-02T22:59:44Z | 2024-07-02T16:08:07Z | https://github.com/langchain-ai/langchain/issues/14185 | 2,022,245,713 | 14,185 |
[
"hwchase17",
"langchain"
] | ### Feature request
I think Langchain should have a Document Loader that would allow reading all issues from a provided project as documents.
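Rough shape of what I have in mind, as a sketch only (the class name and constructor arguments are placeholders, and it assumes the `atlassian-python-api` client that the existing Jira toolkit already wraps):
```python
from typing import List

from atlassian import Jira  # same client the existing Jira toolkit uses
from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader


class JiraIssueLoader(BaseLoader):
    """Proposed loader: turn every issue of a Jira project into a Document."""

    def __init__(self, url: str, username: str, api_token: str, project_key: str):
        self.jira = Jira(url=url, username=username, password=api_token, cloud=True)
        self.project_key = project_key

    def load(self) -> List[Document]:
        docs = []
        result = self.jira.jql(f'project = "{self.project_key}"', limit=100)
        for issue in result.get("issues", []):
            fields = issue["fields"]
            text = f"{fields.get('summary', '')}\n\n{fields.get('description') or ''}"
            metadata = {
                "key": issue["key"],
                "status": (fields.get("status") or {}).get("name"),
            }
            docs.append(Document(page_content=text, metadata=metadata))
        return docs
```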
### Motivation
Langchain currently has an agent for JIRA, but there is no Document Loader that would allow reading all issues from a specified project and storing them in a vector database. This could be a very useful tool for RAG use cases and for building internal knowledge bases inside companies.
### Your contribution
I can create a PR for this issue. | Add new Document Loader for JIRA | https://api.github.com/repos/langchain-ai/langchain/issues/14180/comments | 1 | 2023-12-02T20:48:41Z | 2024-03-17T16:08:07Z | https://github.com/langchain-ai/langchain/issues/14180 | 2,022,198,845 | 14,180 |
[
"hwchase17",
"langchain"
] | ### Feature request
Looks like it should be doable to have Cloudflare Vectorize support, as there's an HTTP API for Vectorize.
https://developers.cloudflare.com/api/operations/vectorize-list-vectorize-indexes
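Even a thin REST wrapper looks feasible from that page; a sketch of the kind of call involved (the account id, token and response fields below are placeholders and my reading of the docs, not verified against a real account):
```python
import requests

ACCOUNT_ID = "your-account-id"  # placeholder
API_TOKEN = "your-api-token"    # placeholder

base = f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/vectorize/indexes"
headers = {"Authorization": f"Bearer {API_TOKEN}"}

# "List Vectorize Indexes" operation from the API reference linked above
resp = requests.get(base, headers=headers, timeout=30)
resp.raise_for_status()
for index in resp.json().get("result", []):
    print(index.get("name"), index.get("config"))
```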
### Motivation
Similar to langchain-js, it would be nice if we could use Cloudflare Vectorize when using LangChain with Python. It would allow us to use the same vector store for both our TypeScript Workers and Python Lambdas.
https://js.langchain.com/docs/integrations/vectorstores/cloudflare_vectorize
### Your contribution
N/A sadly | Cloudflare Vectorize Support | https://api.github.com/repos/langchain-ai/langchain/issues/14179/comments | 4 | 2023-12-02T20:43:58Z | 2024-07-30T03:32:28Z | https://github.com/langchain-ai/langchain/issues/14179 | 2,022,197,127 | 14,179 |
[
"hwchase17",
"langchain"
] | ### System Info
- LangChain - 0.0.344
- python version - 3.11.6
- platform - windows11
### Who can help?
https://python.langchain.com/docs/modules/agents/agent_types/openai_functions_agent
When running the above example, the following error occurs.
-----------------------------------------------------------------
```shell
> Entering new AgentExecutor chain...
Invoking: `Search` with `Leo DiCaprio girlfriend`
Vittoria Ceretti
Invoking: `Calculator` with `age ^ 0.43`
> Entering new LLMMathChain chain...
age ^ 0.43```text
Traceback (most recent call last):
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\numexpr\necompiler.py", line 760, in getArguments
a = local_dict[name]
~~~~~~~~~~^^^^^^
KeyError: 'age'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\langchain\chains\llm_math\base.py", line 89, in _evaluate_expression
numexpr.evaluate(
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\numexpr\necompiler.py", line 975, in evaluate
raise e
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\numexpr\necompiler.py", line 874, in validate
arguments = getArguments(names, local_dict, global_dict, _frame_depth=_frame_depth)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\numexpr\necompiler.py", line 762, in getArguments
a = global_dict[name]
~~~~~~~~~~~^^^^^^
KeyError: 'age'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\hyung\GitHubProjects\LangChainStudy\agent\openai_functions.py", line 76, in <module>
agent_executor.invoke(
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\langchain\chains\base.py", line 89, in invoke
return self(
^^^^^
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\langchain\chains\base.py", line 312, in __call__
raise e
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\langchain\chains\base.py", line 306, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\langchain\agents\agent.py", line 1312, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\langchain\agents\agent.py", line 1038, in _take_next_step
[
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\langchain\agents\agent.py", line 1038, in <listcomp>
[
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\langchain\agents\agent.py", line 1134, in _iter_next_step
observation = tool.run(
^^^^^^^^^
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\langchain_core\tools.py", line 365, in run
raise e
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\langchain_core\tools.py", line 337, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\langchain_core\tools.py", line 510, in _run
self.func(
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\langchain\chains\base.py", line 507, in run
age ** 0.43
'''
...numexpr.evaluate("age ** 0.43")...
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\langchain\chains\base.py", line 312, in __call__
raise e
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\langchain\chains\base.py", line 306, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\langchain\chains\llm_math\base.py", line 158, in _call
return self._process_llm_result(llm_output, _run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\langchain\chains\llm_math\base.py", line 112, in _process_llm_result
output = self._evaluate_expression(expression)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hyung\AppData\Local\pypoetry\Cache\virtualenvs\langchainstudy-esJm8fcP-py3.11\Lib\site-packages\langchain\chains\llm_math\base.py", line 96, in _evaluate_expression
raise ValueError(
ValueError: LLMMathChain._evaluate("
age ** 0.43
") raised error: 'age'. Please try again with a valid numerical expression
Process finished with exit code 1
'''
```
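For what it's worth, the inner `KeyError: 'age'` reproduces with `numexpr` on its own whenever the expression contains an unbound name, which matches the agent sending the literal text `age ^ 0.43` to the Calculator instead of a number (small check, independent of LangChain):
```python
import numexpr

print(numexpr.evaluate("25 ** 0.43"))   # fine: ~3.9913
try:
    numexpr.evaluate("age ** 0.43")     # 'age' is not defined anywhere
except Exception as err:
    print(type(err).__name__, err)      # KeyError: 'age'
```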
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://python.langchain.com/docs/modules/agents/agent_types/openai_functions_agent
Run the example
### Expected behavior
```shell
> Entering new AgentExecutor chain...
Invoking: `Search` with `Leo DiCaprio girlfriend`
Vittoria Ceretti
Invoking: `Calculator` with `25 ^ 0.43`
> Entering new LLMMathChain chain...
25 ^ 0.43'''text
25 ** 0.43
'''
...numexpr.evaluate("25 ** 0.43")...
Answer: 3.991298452658078
> Finished chain.
Answer: 3.991298452658078Leo DiCaprio's girlfriend is Vittoria Ceretti. Her current age raised to the power of 0.43 is approximately 3.99.
> Finished chain.
``` | Issues: OpenAI functions Agent official example Error | https://api.github.com/repos/langchain-ai/langchain/issues/14177/comments | 1 | 2023-12-02T19:29:23Z | 2023-12-05T04:53:44Z | https://github.com/langchain-ai/langchain/issues/14177 | 2,022,175,407 | 14,177 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Is there a way to conditionally create a Pandas agent using `create_pandas_dataframe_agent` or update the DataFrame the agent has access to?
Say I have an agent that has access to tools like so:
```python
@tool()
def f(query: str) -> pd.DataFrame:
...
agent = initialize_agent(
tools=[f, PythonREPLTool()],
llm=llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
handle_parsing_errors=True,
)
```
Since `create_pandas_dataframe_agent` takes the `df` argument, I can't create this agent unless/until the tool `f` is called, producing the DataFrame. But without the specialized Pandas agent, the regular agent seems to be producing bad/bogus Python, trying to filter the DataFrame (either with syntax errors or just doing the wrong thing). Is there a way to either 1) call `create_pandas_dataframe_agent` ahead of time and then later update the contents of the `df` after the function call or 2) create a Pandas agent on-the-fly inside `f` to answer the question (I haven't found a way to pass the current `Thought` content to the nested agent in this case)?
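For option 2, the closest thing I can sketch (untested, and the nested-agent pattern is a guess rather than something I found documented) is building the DataFrame agent lazily inside the tool and handing the question on to it:
```python
from langchain.agents import tool
from langchain_experimental.agents import create_pandas_dataframe_agent


@tool()
def query_and_analyze(question: str) -> str:
    """Build the DataFrame (as `f` does above) and answer a question about it."""
    df = build_dataframe(question)  # placeholder for whatever `f` currently returns
    sub_agent = create_pandas_dataframe_agent(llm, df, verbose=True)
    return sub_agent.run(question)
```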
### Idea or request for content:
It would be helpful to see how to work with Pandas data when reading dynamically created DataFrames coming from functions that agents may or may not call. | DOC: Is there a way to conditionally create/update a Pandas DataFrame agent? | https://api.github.com/repos/langchain-ai/langchain/issues/14176/comments | 2 | 2023-12-02T17:08:46Z | 2024-03-17T16:07:57Z | https://github.com/langchain-ai/langchain/issues/14176 | 2,022,127,350 | 14,176 |
[
"hwchase17",
"langchain"
] | ### System Info
platform: Vagrant - Ubuntu 2204
python: 3.9.18
langchain version: 0.0.344
langchain core: 0.0.8
clarifai: 9.10.4
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Install the latest version of clarifai (9.10.4)
2. Run the example: https://python.langchain.com/docs/integrations/llms/clarifai
``` bash
Could not import clarifai python package. Please install it with `pip install clarifai`.
File 'clarifai.py', line 77, in validate_environment:
raise ImportError( Traceback (most recent call last):
File "/home/vagrant/.virtualenvs/env/lib/python3.9/site-packages/langchain/llms/clarifai.py", line 74, in validate_environment
from clarifai.auth.helper import ClarifaiAuthHelper
ModuleNotFoundError: No module named 'clarifai.auth'
```
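A possible fix on the LangChain side would be to try the new import path first and fall back to the old one (sketch only, untested):
```python
try:
    # layout used by recent clarifai releases (e.g. the 9.10.4 install above)
    from clarifai.client.auth.helper import ClarifaiAuthHelper
except ImportError:
    # older layout that langchain/llms/clarifai.py currently assumes
    from clarifai.auth.helper import ClarifaiAuthHelper
```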
### Expected behavior
I expect **ClarifaiAuthHelper** to import correctly.
In the latest version of clarifai **ClarifaiAuthHelper** is imported in this way:
``` python
from clarifai.client.auth.helper import ClarifaiAuthHelper
``` | ModuleNotFoundError: No module named 'clarifai.auth' | https://api.github.com/repos/langchain-ai/langchain/issues/14175/comments | 1 | 2023-12-02T15:28:09Z | 2023-12-18T20:34:37Z | https://github.com/langchain-ai/langchain/issues/14175 | 2,022,085,674 | 14,175 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain version :
langchain 0.0.344
langchain-core 0.0.8
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
code:
```python
from langchain.document_loaders.csv_loader import CSVLoader
loader = CSVLoader(file_path='./test.csv')
data = loader.load()
```
### Expected behavior
exception:
Traceback (most recent call last):
File "/Users/liucong/opt/miniconda3/envs/310-test/lib/python3.10/site-packages/langchain_core/__init__.py", line 4, in <module>
__version__ = metadata.version(__package__)
AttributeError: partially initialized module 'importlib.metadata' has no attribute 'version' (most likely due to a circular import)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/Users/liucong/opt/miniconda3/envs/310-test/lib/python3.10/site-packages/langchain/__init__.py", line 4, in <module>
from importlib import metadata
File "/Users/liucong/opt/miniconda3/envs/310-test/lib/python3.10/importlib/metadata/__init__.py", line 4, in <module>
import csv
File "/Users/liucong/PycharmProjects/llm_learn/lang_chain/data_connection/csv.py", line 1, in <module>
from langchain.document_loaders.csv_loader import CSVLoader
File "/Users/liucong/opt/miniconda3/envs/310-test/lib/python3.10/site-packages/langchain/document_loaders/__init__.py", line 18, in <module>
from langchain.document_loaders.acreom import AcreomLoader
File "/Users/liucong/opt/miniconda3/envs/310-test/lib/python3.10/site-packages/langchain/document_loaders/acreom.py", line 5, in <module>
from langchain_core.documents import Document
File "/Users/liucong/opt/miniconda3/envs/310-test/lib/python3.10/site-packages/langchain_core/__init__.py", line 5, in <module>
except metadata.PackageNotFoundError:
AttributeError: partially initialized module 'importlib.metadata' has no attribute 'PackageNotFoundError' (most likely due to a circular import) | BUG: csv loader exception, AttributeError: partially initialized module 'importlib.metadata' has no attribute 'version' (most likely due to a circular import) | https://api.github.com/repos/langchain-ai/langchain/issues/14173/comments | 3 | 2023-12-02T12:42:14Z | 2024-04-07T16:06:39Z | https://github.com/langchain-ai/langchain/issues/14173 | 2,022,026,571 | 14,173 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Please help!
Error:
GGML_ASSERT: C:\Users\user\AppData\Local\Temp\pip-install-l0_yao9s\llama-cpp-python_686671b6d7b8440a98e23acb5bc6a41a\vendor\llama.cpp\ggml.c:15149: cgraph->nodes[cgraph->n_nodes - 1] == tensor
GGML_ASSERT: C:\Users\user\AppData\Local\Temp\pip-install-l0_yao9s\llama-cpp-python_686671b6d7b8440a98e23acb5bc6a41a\vendor\llama.cpp\ggml.c:4326: ggml_nelements(a) == (ne0*ne1*ne2*ne3)
Repeated trials yield the same results.
Local llm: TheBloke/Llama-2-7B-Chat-GGUF (llama-2-7b-chat.Q6_K.gguf)
Embedding: BAAI/bge-large-en-v1.5
```
import os
os.environ['OPENAI_API_KEY'] = 'dummy_key'
# import paperscraper
from paperqa import Docs
from langchain.llms.llamacpp import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.embeddings import LlamaCppEmbeddings
from langchain.embeddings import HuggingFaceBgeEmbeddings
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
# Make sure the model path is correct for your system!
llm = LlamaCpp(
model_path="C:/paper-qa/models/llama-2-7b-chat.Q6_K.gguf", callbacks=[StreamingStdOutCallbackHandler()],
)
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': True}
embeddings = HuggingFaceBgeEmbeddings(
model_name=model_name,
# model_kwargs=model_kwargs,
# encode_kwargs=encode_kwargs,
cache_folder="C:\paper-qa\models"
)
# embeddings = LlamaCppEmbeddings(
# model_path="C:/paper-qa/models/bge-large-en-v1_5_pytorch_model.bin"
# )
# model_name = "BAAI/bge-large-en-v1.5"
# model_kwargs = {'device': 'cpu'}
# encode_kwargs = {'normalize_embeddings': False}
# hf = HuggingFaceEmbeddings(
# model_name=model_name,
# model_kwargs=model_kwargs,
# encode_kwargs=encode_kwargs
# )
docs = Docs(llm=llm, embeddings=embeddings)
# keyword_search = 'bispecific antibody manufacture'
# papers = paperscraper.search_papers(keyword_search, limit=2)
# for path,data in papers.items():
# try:
# docs.add(path,chunk_chars=500)
# except ValueError as e:
# print('Could not read', path, e)
path = "C:\\paper-qa\\Source\\The Encyclopedia of the Cold War_90-95.pdf"
print("Before adding document")
docs.add(path,chunk_chars=500)
print("Document added")
answer = docs.query("Summarize 2 interesting events of the cold war from the book.")
print("Queried.")
print(answer)
```
whole output
```
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from C:/paper-qa/models/llama-2-7b-chat.Q6_K.gguf (version GGUF V2)
llama_model_loader: - tensor 0: token_embd.weight q6_K [ 4096, 32000,
1, 1 ]
llama_model_loader: - tensor 1: blk.0.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 2: blk.0.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 3: blk.0.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 4: blk.0.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 5: blk.0.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 6: blk.0.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 7: blk.0.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 8: blk.0.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 9: blk.0.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 10: blk.1.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 11: blk.1.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 12: blk.1.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 13: blk.1.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 14: blk.1.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 15: blk.1.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 16: blk.1.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 17: blk.1.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 18: blk.1.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 19: blk.10.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 20: blk.10.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 21: blk.10.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 22: blk.10.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 23: blk.10.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 24: blk.10.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 25: blk.10.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 26: blk.10.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 27: blk.10.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 28: blk.11.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 29: blk.11.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 30: blk.11.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 31: blk.11.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 32: blk.11.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 33: blk.11.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 34: blk.11.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 35: blk.11.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 36: blk.11.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 37: blk.12.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 38: blk.12.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 39: blk.12.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 40: blk.12.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 41: blk.12.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 42: blk.12.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 43: blk.12.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 44: blk.12.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 45: blk.12.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 46: blk.13.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 47: blk.13.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 48: blk.13.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 49: blk.13.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 50: blk.13.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 51: blk.13.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 52: blk.13.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 53: blk.13.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 54: blk.13.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 55: blk.14.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 56: blk.14.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 57: blk.14.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 58: blk.14.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 59: blk.14.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 60: blk.14.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 61: blk.14.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 62: blk.14.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 63: blk.14.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 64: blk.15.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 65: blk.15.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 66: blk.15.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 67: blk.15.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 68: blk.15.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 69: blk.15.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 70: blk.15.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 71: blk.15.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 72: blk.15.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 73: blk.16.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 74: blk.16.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 75: blk.16.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 76: blk.16.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 77: blk.16.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 78: blk.16.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 79: blk.16.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 80: blk.16.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 81: blk.16.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 82: blk.17.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 83: blk.17.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 84: blk.17.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 85: blk.17.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 86: blk.17.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 87: blk.17.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 88: blk.17.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 89: blk.17.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 90: blk.17.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 91: blk.18.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 92: blk.18.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 93: blk.18.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 94: blk.18.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 95: blk.18.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 96: blk.18.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 97: blk.18.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 98: blk.18.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 99: blk.18.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 100: blk.19.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 101: blk.19.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 102: blk.19.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 103: blk.19.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 104: blk.19.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 105: blk.19.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 106: blk.19.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 107: blk.19.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 108: blk.19.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 109: blk.2.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 110: blk.2.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 111: blk.2.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 112: blk.2.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 113: blk.2.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 114: blk.2.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 115: blk.2.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 116: blk.2.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 117: blk.2.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 118: blk.20.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 119: blk.20.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 120: blk.20.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 121: blk.20.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 122: blk.20.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 123: blk.20.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 124: blk.20.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 125: blk.20.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 126: blk.20.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 127: blk.21.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 128: blk.21.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 129: blk.21.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 130: blk.21.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 131: blk.21.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 132: blk.21.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 133: blk.21.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 134: blk.21.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 135: blk.21.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 136: blk.22.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 137: blk.22.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 138: blk.22.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 139: blk.22.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 140: blk.22.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 141: blk.22.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 142: blk.22.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 143: blk.22.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 144: blk.22.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 145: blk.23.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 146: blk.23.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 147: blk.23.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 148: blk.23.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 149: blk.23.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 150: blk.23.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 151: blk.23.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 152: blk.23.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 153: blk.23.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 154: blk.3.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 155: blk.3.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 156: blk.3.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 157: blk.3.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 158: blk.3.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 159: blk.3.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 160: blk.3.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 161: blk.3.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 162: blk.3.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 163: blk.4.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 164: blk.4.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 165: blk.4.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 166: blk.4.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 167: blk.4.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 168: blk.4.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 169: blk.4.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 170: blk.4.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 171: blk.4.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 172: blk.5.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 173: blk.5.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 174: blk.5.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 175: blk.5.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 176: blk.5.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 177: blk.5.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 178: blk.5.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 179: blk.5.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 180: blk.5.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 181: blk.6.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 182: blk.6.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 183: blk.6.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 184: blk.6.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 185: blk.6.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 186: blk.6.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 187: blk.6.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 188: blk.6.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 189: blk.6.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 190: blk.7.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 191: blk.7.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 192: blk.7.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 193: blk.7.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 194: blk.7.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 195: blk.7.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 196: blk.7.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 197: blk.7.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 198: blk.7.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 199: blk.8.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 200: blk.8.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 201: blk.8.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 202: blk.8.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 203: blk.8.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 204: blk.8.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 205: blk.8.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 206: blk.8.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 207: blk.8.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 208: blk.9.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 209: blk.9.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 210: blk.9.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 211: blk.9.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 212: blk.9.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 213: blk.9.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 214: blk.9.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 215: blk.9.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 216: blk.9.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 217: output.weight q6_K [ 4096, 32000,
1, 1 ]
llama_model_loader: - tensor 218: blk.24.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 219: blk.24.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 220: blk.24.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 221: blk.24.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 222: blk.24.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 223: blk.24.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 224: blk.24.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 225: blk.24.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 226: blk.24.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 227: blk.25.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 228: blk.25.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 229: blk.25.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 230: blk.25.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 231: blk.25.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 232: blk.25.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 233: blk.25.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 234: blk.25.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 235: blk.25.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 236: blk.26.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 237: blk.26.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 238: blk.26.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 239: blk.26.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 240: blk.26.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 241: blk.26.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 242: blk.26.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 243: blk.26.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 244: blk.26.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 245: blk.27.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 246: blk.27.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 247: blk.27.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 248: blk.27.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 249: blk.27.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 250: blk.27.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 251: blk.27.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 252: blk.27.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 253: blk.27.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 254: blk.28.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 255: blk.28.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 256: blk.28.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 257: blk.28.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 258: blk.28.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 259: blk.28.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 260: blk.28.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 261: blk.28.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 262: blk.28.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 263: blk.29.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 264: blk.29.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 265: blk.29.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 266: blk.29.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 267: blk.29.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 268: blk.29.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 269: blk.29.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 270: blk.29.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 271: blk.29.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 272: blk.30.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 273: blk.30.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 274: blk.30.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 275: blk.30.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 276: blk.30.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 277: blk.30.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 278: blk.30.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 279: blk.30.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 280: blk.30.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 281: blk.31.attn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 282: blk.31.ffn_down.weight q6_K [ 11008, 4096,
1, 1 ]
llama_model_loader: - tensor 283: blk.31.ffn_gate.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 284: blk.31.ffn_up.weight q6_K [ 4096, 11008,
1, 1 ]
llama_model_loader: - tensor 285: blk.31.ffn_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - tensor 286: blk.31.attn_k.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 287: blk.31.attn_output.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 288: blk.31.attn_q.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 289: blk.31.attn_v.weight q6_K [ 4096, 4096,
1, 1 ]
llama_model_loader: - tensor 290: output_norm.weight f32 [ 4096, 1,
1, 1 ]
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = LLaMA v2
llama_model_loader: - kv 2: llama.context_length u32 = 4096
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: general.file_type u32 = 18
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 13: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 15: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 17: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 18: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q6_K: 226 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V2
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = mostly Q6_K
llm_load_print_meta: model params = 6.74 B
llm_load_print_meta: model size = 5.15 GiB (6.56 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.11 MiB
llm_load_tensors: mem required = 5272.45 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size = 256.00 MiB
llama_build_graph: non-view tensors processed: 740/740
llama_new_context_with_model: compute buffer total size = 4.16 MiB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 0 | VSX = 0
|
Before adding document
Abdullah I of Jordan. "Soldier, Diplomat, and King of Jordan." The Oxford Encyclopedia of the
Modern World, vol. 1, Oxford University Press, 2023, pp. 56-57.
llama_print_timings: load time = 7311.60 ms
llama_print_timings: sample time = 18.23 ms / 55 runs ( 0.33 ms per token,
3016.67 tokens per second)
llama_print_timings: prompt eval time = 53923.10 ms / 190 tokens ( 283.81 ms per token,
3.52 tokens per second)
llama_print_timings: eval time = 19552.25 ms / 54 runs ( 362.08 ms per token,
2.76 tokens per second)
llama_print_timings: total time = 73785.66 ms
Document added
Llama.generate: prefix-match hit
Llama.generate: prefix-match hit
Llama.generate: prefix-match hit
Llama.generate: prefix-match hit
Llama.generate: prefix-match hit
GGML_ASSERT: C:\Users\user\AppData\Local\Temp\pip-install-l0_yao9s\llama-cpp-python_686671b6d7b8440a98e23acb5bc6a41a\vendor\llama.cpp\ggml.c:15149: cgraph->nodes[cgraph->n_nodes - 1] == tensor
GGML_ASSERT: C:\Users\user\AppData\Local\Temp\pip-install-l0_yao9s\llama-cpp-python_686671b6d7b8440a98e23acb5bc6a41a\vendor\llama.cpp\ggml.c:4326: ggml_nelements(a) == (ne0*ne1*ne2*ne3)
PS C:\paper-qa>
```
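One variable that is not ruled out yet: the log shows `n_ctx = 512`, which is the `LlamaCpp` default, while the model was trained with a 4096-token context, so the summarization prompts may simply be overflowing it. A sketch of pinning it explicitly (same path as above; whether this avoids the assert is unverified):
```python
llm = LlamaCpp(
    model_path="C:/paper-qa/models/llama-2-7b-chat.Q6_K.gguf",
    n_ctx=4096,  # match the model's training context instead of the 512 default
    callbacks=[StreamingStdOutCallbackHandler()],
)
```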
### Suggestion:
_No response_ | Local LLM ISSUE GGML_ASSERT | https://api.github.com/repos/langchain-ai/langchain/issues/14169/comments | 6 | 2023-12-02T09:07:35Z | 2024-03-18T16:07:14Z | https://github.com/langchain-ai/langchain/issues/14169 | 2,021,955,776 | 14,169 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```python
from langchain.chains import create_extraction_chain
from langchain.chat_models import ChatOpenAI
import dotenv
dotenv.load_dotenv()
# Schema
schema = {
"properties": {
"name": {"type": "string"},
"education": {"type": "string"},
"company": {"type": "string"},
},
"required": ["name", "education"],
}
# Input
inp = """小明毕业于北京大学,在微软上班\n小李毕业于清华大学在阿里巴巴上班\n小王毕业于浙江大学在烟草局上班"""
# Run chain
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-1106")
chain = create_extraction_chain(schema, llm)
print(chain.run(inp)[0])
```
```shell
#output
{'name': 'å°\x8fæ\x98\x8e', 'education': 'å\x8c\x97京大å\xad¦', 'company': '微软'}
```
How to fix this Chinese `encoding/decoding` problem?
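For what it's worth, the fully garbled values look like UTF-8 bytes that were decoded as Latin-1 somewhere along the way, and they can be recovered manually (workaround sketch only, not a fix for the chain itself):
```python
result = chain.run(inp)[0]
name = result["name"].encode("latin-1").decode("utf-8")
print(name)  # 小明  (only works for values that are garbled end to end)
```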
### Suggestion:
_No response_ | Question about runing Langchain Extraction Demo in Chinese text. | https://api.github.com/repos/langchain-ai/langchain/issues/14168/comments | 4 | 2023-12-02T09:01:35Z | 2023-12-04T02:10:15Z | https://github.com/langchain-ai/langchain/issues/14168 | 2,021,953,293 | 14,168 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Code:
```python
retriever = BM25Retriever.from_documents(
[
Document(page_content="foo"),
Document(page_content="bar"),
Document(page_content="world"),
Document(page_content="hello"),
Document(page_content="foo bar"),
]
)
```
Error:
```
---> 30 retriever = BM25Retriever.from_documents(
31 [
32 Document(page_content="foo"),
33 Document(page_content="bar"),
34 Document(page_content="world"),
35 Document(page_content="hello"),
36 Document(page_content="foo bar"),
37 ]
38 )
AttributeError: type object 'BM25Retriever' has no attribute 'from_documents'
```
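Not sure yet whether this is a version difference, but if `from_documents` is missing in the installed build, constructing the same retriever from plain strings may work as a stop-gap (assuming `from_texts` is available):
```python
from langchain.retrievers import BM25Retriever

texts = ["foo", "bar", "world", "hello", "foo bar"]
retriever = BM25Retriever.from_texts(texts)
print(retriever.get_relevant_documents("foo"))
```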
### Suggestion:
_No response_ | Issue: 'BM25Retriever' has no attribute 'from_documents' | https://api.github.com/repos/langchain-ai/langchain/issues/14167/comments | 4 | 2023-12-02T08:44:31Z | 2024-03-25T16:06:57Z | https://github.com/langchain-ai/langchain/issues/14167 | 2,021,948,035 | 14,167 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain: 0.0.339
Python: 3.11.6
I followed instructions in
[https://docs.smith.langchain.com/evaluation/evaluator-implementations](https://docs.smith.langchain.com/evaluation/evaluator-implementations)
```python
from langsmith import Client
from langchain.smith import RunEvalConfig, run_on_dataset
from backend.tools.self_query_tool import do_self_query
evaluation_config = RunEvalConfig(
evaluators=[
"cot_qa",
]
)
client = Client()
run_on_dataset(
dataset_name="self_query_tool",
llm_or_chain_factory=do_self_query,
client=client,
evaluation=evaluation_config,
verbose=True,
project_name="default",
)
```
and got this blob of code. I used the langsmith UI to create a dataset and a single example.
However, when I run the test with
`python test/self_query_tool_test.py`
I get
```
File "/home/dharshana/.local/share/virtualenvs/turners-virtual-assistant-QSiRQ3G1/lib/python3.11/site-packages/langsmith/client.py", line 1113, in create_project
ls_utils.raise_for_status_with_text(response)
File "/home/dharshana/.local/share/virtualenvs/turners-virtual-assistant-QSiRQ3G1/lib/python3.11/site-packages/langsmith/utils.py", line 85, in raise_for_status_with_text
raise requests.HTTPError(str(e), response.text) from e
requests.exceptions.HTTPError: [Errno 409 Client Error: Conflict for url: https://api.langchain.plus/sessions] {"detail":"Session already exists."}
❯
```
I have never attempted an evaluation of my chain before. I'm just calling a function which creates its own LLM client and runs
```python
llm = ChatOpenAI(temperature=0, model="gpt-4-1106-preview")
retriever = SelfQueryRetriever.from_llm(
llm,
vectordb,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True
)
```
Is there an issue with doing this? I have been using LangSmith successfully to log my LLM calls and agent actions.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a function which does
```python
llm = ChatOpenAI(temperature=0, model="gpt-4-1106-preview")
retriever = SelfQueryRetriever.from_llm(
llm,
vectordb,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True
)
```
then evaluate the function with
```python
evaluation_config = RunEvalConfig(
evaluators=[
"cot_qa",
]
)
client = Client()
run_on_dataset(
dataset_name="self_query_tool",
llm_or_chain_factory=do_self_query,
client=client,
evaluation=evaluation_config,
verbose=True,
project_name="default",
)
```
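Possibly relevant: the 409 comes out of `create_project`, and the response body says `Session already exists`, so my guess is that reusing `project_name="default"` (which already exists as a tracing project) is the trigger. A sketch of the same call with a fresh project name (untested):
```python
import uuid

run_on_dataset(
    dataset_name="self_query_tool",
    llm_or_chain_factory=do_self_query,
    client=client,
    evaluation=evaluation_config,
    verbose=True,
    project_name=f"self_query_tool-eval-{uuid.uuid4().hex[:8]}",  # unique per run
)
```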
### Expected behavior
The function is evaluated with test results in the UI | run_on_dataset: Errno 409 Client Error: Session already exists. | https://api.github.com/repos/langchain-ai/langchain/issues/14161/comments | 1 | 2023-12-02T01:44:13Z | 2023-12-02T01:52:08Z | https://github.com/langchain-ai/langchain/issues/14161 | 2,021,798,420 | 14,161 |
[
"hwchase17",
"langchain"
] | ### System Info
[pydantic_chain_test_code.txt](https://github.com/langchain-ai/langchain/files/13532796/pydantic_chain_test_code.txt)
[error_log.txt](https://github.com/langchain-ai/langchain/files/13532799/error_log.txt)
### Who can help?
@agola11 @hwchase17
The attached code is adopted from the official example from https://python.langchain.com/docs/use_cases/tagging. I am using the AzureChatOpenAI API, but I don't think the cause is related to that. I added an example prompt to ensure that the llm is working properly.
It appears that even when passing in a confirmed subclass of BaseModel, which is pydantic_schema, create_tagging_chain_pydantic(Tags, llm) is triggering validation errors. Please see the attached test code and error log.
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the attached code and match it with the attached error log, which shows the version numbers of Python, Langchain and Pydantic. I am running on WIndows 11.:
Python version: 3.11.5 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:26:23) [MSC v.1916 64 bit (AMD64)]
Langchain version: 0.0.344
Pydantic version: 2.5.2
### Expected behavior
Expected behavior is as shown in the last example on https://python.langchain.com/docs/use_cases/tagging
Tags(sentiment='sad', aggressiveness=5, language='spanish') | create_tagging_chain_pydantic() pydantic schema validation errors | https://api.github.com/repos/langchain-ai/langchain/issues/14159/comments | 2 | 2023-12-02T00:35:53Z | 2023-12-04T20:06:05Z | https://github.com/langchain-ai/langchain/issues/14159 | 2,021,770,513 | 14,159 |
[
"hwchase17",
"langchain"
] | ### System Info
conda 23.3.1
Python 3.10.6
angchain 0.0.344 pypi_0 pypi
langchain-core 0.0.8 pypi_0 pypi
langchain-experimental 0.0.43 pypi_0 pypi
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
#This Notebook will show the integration between Generative AI and SAP HANA Cloud
!pip install -q --upgrade langchain boto3 awscli botocore
!pip install -q sqlalchemy-hana langchain_experimental hdbcli
import langchain
from langchain.sql_database import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain
from langchain.prompts.prompt import PromptTemplate
from langchain.llms.sagemaker_endpoint import LLMContentHandler
from langchain import SagemakerEndpoint
from langchain.llms import bedrock
from urllib.parse import quote
import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy import create_engine, select, Table, MetaData, Column, String
import hdbcli
import json
#Next step is to prepare the template for prompt and input to be used by the Generative AI
table_info = "Table Hotel has fields Name, Address, City, State, Zip code. \
Table Room has fields Free or Available, Price. \
Table Customer has fields Customer Number, title, first name, name, address, zip code. \
Table Reservation has fields Reservation Number,Arrival Date, Departure Date. \
Table Maintenance has fields Description, Date performed, Performed by."
_DEFAULT_TEMPLATE = """
Given an input question, create a syntactically correct {dialect} SQL query to run without comments, then provide answer to the question in english based on the result of SQL Query.
Always use schema USER1. You DO NOT need to check on the SQL Query for common mistake.
{table_info}
Question: {input}"""
PROMPT = PromptTemplate(
input_variables=["input", "table_info", "dialect"], template=_DEFAULT_TEMPLATE
)
#Next let's instatiate the bedrock and the handler treatment of input and output
class ContentHandler(LLMContentHandler):
content_type = "application/json"
accepts = "application/json"
def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
input_str = json.dumps({"text_inputs": prompt, **model_kwargs})
return input_str.encode('utf-8')
def transform_output(self, output: bytes) -> str:
response_json = json.loads(output.read().decode("utf-8"))
return response_json["generated_texts"][0]
content_handler = ContentHandler()
model_parameter = {"temperature": 0, "max_tokens_to_sample": 4000}
#Make sure your account has access to anthropic claude access. This can be enabled from Bedrock console. Access is auto approved.
llm = bedrock.Bedrock(model_id="anthropic.claude-v2:1", model_kwargs=model_parameter, region_name="us-west-2")
#Let's connect to the SAP HANA Database and then execute langchain SQL Database Chain to query from the Generative AI
db = SQLDatabase.from_uri("hana://USER:Password@<mydb>.hana.trial-us10.hanacloud.ondemand.com:443")
db_chain = SQLDatabaseChain.from_llm(llm=llm, db=db, prompt=PROMPT, verbose=True, use_query_checker=True, top_k=5 )
Execute the first query, this is a simple English to text SQL
db_chain.run("How many Hotels are there ?")
UNEXPECTED OUTPUT *****************
> Entering new SQLDatabaseChain chain...
How many Hotels are there ?
SQLQuery:I don't see any issues with the original SQL query. It looks good as written. Here is the query again:
```sql
SELECT COUNT(*) AS num_hotels
FROM hotel;
```
This counts the number of rows in the hotel table to get the number of unique hotels and returns the result in the num_hotels column alias. The query follows proper practices and does not contain any of the common mistakes listed.
---------------------------------------------------------------------------
ProgrammingError Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/engine/base.py:1819, in Connection._execute_context(self, dialect, constructor, statement, parameters, execution_options, *args, **kw)
1818 if not evt_handled:
-> 1819 self.dialect.do_execute(
1820 cursor, statement, parameters, context
1821 )
1823 if self._has_events or self.engine._has_events:
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/engine/default.py:732, in DefaultDialect.do_execute(self, cursor, statement, parameters, context)
731 def do_execute(self, cursor, statement, parameters, context=None):
--> 732 cursor.execute(statement, parameters)
ProgrammingError: (257, 'sql syntax error: incorrect syntax near "I": line 1 col 1 (at pos 1)')
The above exception was the direct cause of the following exception:
ProgrammingError Traceback (most recent call last)
Cell In[43], line 2
1 #Execute the first query, this is a simple English to text SQL
----> 2 db_chain.run("How many Hotels are there ?")
File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:507, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
505 if len(args) != 1:
506 raise ValueError("`run` supports only one positional argument.")
--> 507 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
508 _output_key
509 ]
511 if kwargs and not args:
512 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
513 _output_key
514 ]
File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:312, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
310 except BaseException as e:
311 run_manager.on_chain_error(e)
--> 312 raise e
313 run_manager.on_chain_end(outputs)
314 final_outputs: Dict[str, Any] = self.prep_outputs(
315 inputs, outputs, return_only_outputs
316 )
File /opt/conda/lib/python3.10/site-packages/langchain/chains/base.py:306, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
299 run_manager = callback_manager.on_chain_start(
300 dumpd(self),
301 inputs,
302 name=run_name,
303 )
304 try:
305 outputs = (
--> 306 self._call(inputs, run_manager=run_manager)
307 if new_arg_supported
308 else self._call(inputs)
309 )
310 except BaseException as e:
311 run_manager.on_chain_error(e)
File /opt/conda/lib/python3.10/site-packages/langchain_experimental/sql/base.py:198, in SQLDatabaseChain._call(self, inputs, run_manager)
194 except Exception as exc:
195 # Append intermediate steps to exception, to aid in logging and later
196 # improvement of few shot prompt seeds
197 exc.intermediate_steps = intermediate_steps # type: ignore
--> 198 raise exc
File /opt/conda/lib/python3.10/site-packages/langchain_experimental/sql/base.py:168, in SQLDatabaseChain._call(self, inputs, run_manager)
162 _run_manager.on_text(
163 checked_sql_command, color="green", verbose=self.verbose
164 )
165 intermediate_steps.append(
166 {"sql_cmd": checked_sql_command}
167 ) # input: sql exec
--> 168 result = self.database.run(checked_sql_command)
169 intermediate_steps.append(str(result)) # output: sql exec
170 sql_cmd = checked_sql_command
File /opt/conda/lib/python3.10/site-packages/langchain/utilities/sql_database.py:433, in SQLDatabase.run(self, command, fetch)
423 def run(
424 self,
425 command: str,
426 fetch: Union[Literal["all"], Literal["one"]] = "all",
427 ) -> str:
428 """Execute a SQL command and return a string representing the results.
429
430 If the statement returns rows, a string of the results is returned.
431 If the statement returns no rows, an empty string is returned.
432 """
--> 433 result = self._execute(command, fetch)
434 # Convert columns values to string to avoid issues with sqlalchemy
435 # truncating text
436 res = [
437 tuple(truncate_word(c, length=self._max_string_length) for c in r.values())
438 for r in result
439 ]
File /opt/conda/lib/python3.10/site-packages/langchain/utilities/sql_database.py:411, in SQLDatabase._execute(self, command, fetch)
409 else: # postgresql and other compatible dialects
410 connection.exec_driver_sql("SET search_path TO %s", (self._schema,))
--> 411 cursor = connection.execute(text(command))
412 if cursor.returns_rows:
413 if fetch == "all":
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/engine/base.py:1306, in Connection.execute(self, statement, *multiparams, **params)
1302 util.raise_(
1303 exc.ObjectNotExecutableError(statement), replace_context=err
1304 )
1305 else:
-> 1306 return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/sql/elements.py:332, in ClauseElement._execute_on_connection(self, connection, multiparams, params, execution_options, _force)
328 def _execute_on_connection(
329 self, connection, multiparams, params, execution_options, _force=False
330 ):
331 if _force or self.supports_execution:
--> 332 return connection._execute_clauseelement(
333 self, multiparams, params, execution_options
334 )
335 else:
336 raise exc.ObjectNotExecutableError(self)
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/engine/base.py:1498, in Connection._execute_clauseelement(self, elem, multiparams, params, execution_options)
1486 compiled_cache = execution_options.get(
1487 "compiled_cache", self.engine._compiled_cache
1488 )
1490 compiled_sql, extracted_params, cache_hit = elem._compile_w_cache(
1491 dialect=dialect,
1492 compiled_cache=compiled_cache,
(...)
1496 linting=self.dialect.compiler_linting | compiler.WARN_LINTING,
1497 )
-> 1498 ret = self._execute_context(
1499 dialect,
1500 dialect.execution_ctx_cls._init_compiled,
1501 compiled_sql,
1502 distilled_params,
1503 execution_options,
1504 compiled_sql,
1505 distilled_params,
1506 elem,
1507 extracted_params,
1508 cache_hit=cache_hit,
1509 )
1510 if has_events:
1511 self.dispatch.after_execute(
1512 self,
1513 elem,
(...)
1517 ret,
1518 )
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/engine/base.py:1862, in Connection._execute_context(self, dialect, constructor, statement, parameters, execution_options, *args, **kw)
1859 branched.close()
1861 except BaseException as e:
-> 1862 self._handle_dbapi_exception(
1863 e, statement, parameters, cursor, context
1864 )
1866 return result
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/engine/base.py:2043, in Connection._handle_dbapi_exception(self, e, statement, parameters, cursor, context)
2041 util.raise_(newraise, with_traceback=exc_info[2], from_=e)
2042 elif should_wrap:
-> 2043 util.raise_(
2044 sqlalchemy_exception, with_traceback=exc_info[2], from_=e
2045 )
2046 else:
2047 util.raise_(exc_info[1], with_traceback=exc_info[2])
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/util/compat.py:208, in raise_(***failed resolving arguments***)
205 exception.__cause__ = replace_context
207 try:
--> 208 raise exception
209 finally:
210 # credit to
211 # https://cosmicpercolator.com/2016/01/13/exception-leaks-in-python-2-and-3/
212 # as the __traceback__ object creates a cycle
213 del exception, replace_context, from_, with_traceback
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/engine/base.py:1819, in Connection._execute_context(self, dialect, constructor, statement, parameters, execution_options, *args, **kw)
1817 break
1818 if not evt_handled:
-> 1819 self.dialect.do_execute(
1820 cursor, statement, parameters, context
1821 )
1823 if self._has_events or self.engine._has_events:
1824 self.dispatch.after_cursor_execute(
1825 self,
1826 cursor,
(...)
1830 context.executemany,
1831 )
File /opt/conda/lib/python3.10/site-packages/sqlalchemy/engine/default.py:732, in DefaultDialect.do_execute(self, cursor, statement, parameters, context)
731 def do_execute(self, cursor, statement, parameters, context=None):
--> 732 cursor.execute(statement, parameters)
ProgrammingError: (hdbcli.dbapi.ProgrammingError) (257, 'sql syntax error: incorrect syntax near "I": line 1 col 1 (at pos 1)')
[SQL: I don't see any issues with the original SQL query. It looks good as written. Here is the query again:
```sql
SELECT COUNT(*) AS num_hotels
FROM hotel;
```
This counts the number of rows in the hotel table to get the number of unique hotels and returns the result in the num_hotels column alias. The query follows proper practices and does not contain any of the common mistakes listed.]
(Background on this error at: https://sqlalche.me/e/14/f405)
### Expected behavior
The SELECT statement itself is produced correctly; the problem is that the LLM performs a check on the SQL statement and comments on it before it is passed to SQLAlchemy for execution. When I used Claude v2, about 75% of SQLDatabaseChain runs were successful, but 25% fell back to this behaviour of adding commentary about whether the SQL query has common mistakes.
Now with Claude v2.1, ALL of the questions are checked, and they fail when passed to SQLAlchemy. It seems there is a problem with the SQLDatabaseChain, either the prompt or something else. Please help to fix this.
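As a quick sanity check (an assumption on my part, not a confirmed fix), running the same chain without the extra query-checker pass should show whether it is the checker's commentary that ends up being executed:

```python
# Same setup as above, but with the query checker disabled (assumption: this avoids
# the commentary that is currently being passed to SQLAlchemy as if it were SQL).
db_chain = SQLDatabaseChain.from_llm(llm=llm, db=db, prompt=PROMPT, verbose=True,
                                     use_query_checker=False, top_k=5)
db_chain.run("How many Hotels are there ?")
```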
[Lab2.ipynb.zip](https://github.com/langchain-ai/langchain/files/13532157/Lab2.ipynb.zip)
| Claude v2.1 SQLDatabaseChain produces comments mixed with SQL Statements | https://api.github.com/repos/langchain-ai/langchain/issues/14150/comments | 11 | 2023-12-01T21:48:33Z | 2024-05-07T16:07:28Z | https://github.com/langchain-ai/langchain/issues/14150 | 2,021,628,680 | 14,150 |
[
"hwchase17",
"langchain"
] | ### Feature request
Adding [IBM Watson Discovery Service](https://cloud.ibm.com/docs/discovery-data?topic=discovery-data-getting-started) as an additional retriever might be a useful extension to enable RAG-use cases or supporting RetrievalChains.
IBM Watson Discovery Service can find relevant documents based on natural language queries and understand document structures such as tables.
### Motivation
Currently, there is no support for IBM Watson Discovery Service (WDS) as a retriever, which means external scripts are needed to use WDS as a source for a RetrievalChain.
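For illustration, such a retriever could be a thin wrapper around a Discovery query — a rough sketch (the Discovery client calls and response fields below are assumptions, not the actual ibm-watson SDK surface):

```python
from typing import Any, List

from langchain.callbacks.manager import CallbackManagerForRetrieverRun
from langchain.schema import BaseRetriever, Document


class WatsonDiscoveryRetriever(BaseRetriever):
    """Sketch of a retriever backed by an IBM Watson Discovery project."""

    client: Any        # an authenticated Discovery client (assumed to be ibm_watson.DiscoveryV2)
    project_id: str    # Discovery project to query
    top_k: int = 4     # number of results to return

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        # `natural_language_query` and `count` are assumed SDK parameter names
        response = self.client.query(
            project_id=self.project_id,
            natural_language_query=query,
            count=self.top_k,
        ).get_result()
        docs = []
        for result in response.get("results", []):
            passages = result.get("document_passages") or [{}]
            text = passages[0].get("passage_text") or result.get("text", "")
            docs.append(
                Document(page_content=str(text), metadata={"document_id": result.get("document_id")})
            )
        return docs
```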
### Your contribution
I'm able to contribute a IbM Watson Discovery Service retriever. | Add Retriever for Knowledge Base IBM Watson Discovery Service | https://api.github.com/repos/langchain-ai/langchain/issues/14145/comments | 1 | 2023-12-01T21:01:46Z | 2024-03-16T16:09:11Z | https://github.com/langchain-ai/langchain/issues/14145 | 2,021,577,501 | 14,145 |
[
"hwchase17",
"langchain"
] | ### System Info
It seems the typing of my index is lost when I save and load it. This is me creating the FAISS vector store with the MAX_INNER_PRODUCT flag.
```python
# Saving
save_db = FAISS.from_texts(texts, embedding_function, metadatas=metadatas, distance_strategy="MAX_INNER_PRODUCT", normalize_L2=True)
save_db.save_local(db_directory)
print(save_db.index, save_db.distance_strategy)
# <faiss.swigfaiss_avx2.IndexFlatIP; proxy of <Swig Object of type 'faiss::IndexFlatIP *' at 0x7f4e0402e4b0> > MAX_INNER_PRODUCT
# Loading
load_db = FAISS.load_local(db_directory, embedding_function, distance_strategy="MAX_INNER_PRODUCT")
print(load_db.index, load_db.distance_strategy)
# <faiss.swigfaiss_avx2.IndexFlat; proxy of <Swig Object of type 'faiss::IndexFlat *' at 0x7f4dd0accdb0> > MAX_INNER_PRODUCT
```
I have looked through FAISS API documentation and can't tell if this is an abstracted class.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a FAISS vector store passing distance_strategy = "MAX_INNER_PRODUCT". This should create an index with FAISS.IndexFlatIP.
2. Save the FAISS vector store.
3. Load the FAISS vector store with the distance_strategy = "MAX_INNER_PRODUCT"
4. Compare the saved.index and loaded.index objects. One is IndexFlatIP and the other is an IndexFlat class (one way to check the underlying metric is sketched below).
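A sketch of that check, assuming the store was saved with inner-product distance as above (`metric_type` and `downcast_index` come from the faiss API):

```python
import faiss

loaded = FAISS.load_local(db_directory, embedding_function, distance_strategy="MAX_INNER_PRODUCT")
# The underlying faiss index should still record how it was built, even if the
# Python wrapper class reads as a plain IndexFlat after deserialization.
print(loaded.index.metric_type == faiss.METRIC_INNER_PRODUCT)
print(type(faiss.downcast_index(loaded.index)))  # may recover the concrete IndexFlatIP type
```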
### Expected behavior
When loading the index from load_local, it should still be an IndexFlatIP. | FAISS.load_local does not keep the typing of the saved Index | https://api.github.com/repos/langchain-ai/langchain/issues/14141/comments | 3 | 2023-12-01T20:31:24Z | 2024-01-11T22:59:40Z | https://github.com/langchain-ai/langchain/issues/14141 | 2,021,543,545 | 14,141 |
[
"hwchase17",
"langchain"
] | ### Feature request
Amazon DocumentDB recently launched its vector search feature. Let us add Amazon DocumentDB as a vector store in LangChain.
### Motivation
This will help the large customer base of Amazon DocumentDB adopt LangChain and seamlessly build AI/ML apps.
### Your contribution
The Amazon DocumentDB team can help with testing and technical guidance.
https://aws.amazon.com/about-aws/whats-new/2023/11/vector-search-amazon-documentdb/
https://docs.aws.amazon.com/documentdb/latest/developerguide/vector-search.html | Amazon DocumentDB Vector Store | https://api.github.com/repos/langchain-ai/langchain/issues/14140/comments | 8 | 2023-12-01T20:20:11Z | 2024-06-01T00:07:37Z | https://github.com/langchain-ai/langchain/issues/14140 | 2,021,530,789 | 14,140 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.11, `langchain==0.0.340`
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Before LCEL, there is lots of logging:
```python
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.schema.callbacks.stdout import StdOutCallbackHandler
prompt = PromptTemplate.from_template(
"What is a good name for a company that makes {product}?"
)
chain = LLMChain(prompt=prompt, llm=ChatOpenAI(), callbacks=[StdOutCallbackHandler()])
chain.run(product="colorful socks")
```
```none
> Entering new LLMChain chain...
Prompt after formatting:
What is a good name for a company that makes colorful socks?
> Finished chain.
```
After LCEL, there is less logging:
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.schema.callbacks.stdout import StdOutCallbackHandler
prompt = PromptTemplate.from_template(
"What is a good name for a company that makes {product}?"
)
runnable = prompt | ChatOpenAI()
runnable.invoke(
input={"product": "colorful socks"}, config={"callbacks": [StdOutCallbackHandler()]}
)
```
Leads to this output:
```none
> Entering new RunnableSequence chain...
> Entering new PromptTemplate chain...
> Finished chain.
> Finished chain.
```
### Expected behavior
To migrate from `LLMChain` to LCEL, I want the logging statements/verbosity to remain in tact | Bug: LCEL prompt template not logging | https://api.github.com/repos/langchain-ai/langchain/issues/14135/comments | 2 | 2023-12-01T18:30:40Z | 2024-03-17T16:07:47Z | https://github.com/langchain-ai/langchain/issues/14135 | 2,021,388,401 | 14,135 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/modules/callbacks/
This doc doesn't mention how to use callbacks with LCEL. It would be good to add that, to help with the migration from `LLMChain` to `LCEL`.
### Idea or request for content:
https://python.langchain.com/docs/modules/chains/ does a nice job of showing code pre-LCEL and code post-LCEL.
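For the LCEL side, it would also help to show attaching the handler once rather than on every call — roughly along these lines (a sketch, assuming `with_config` accepts a `callbacks` entry in its config):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.schema.callbacks.stdout import StdOutCallbackHandler

prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)
# Bind the callback once; every subsequent invoke reuses it.
runnable = (prompt | ChatOpenAI()).with_config({"callbacks": [StdOutCallbackHandler()]})
runnable.invoke({"product": "colorful socks"})
```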
https://github.com/langchain-ai/langchain/discussions/12670 shows how to pass the callback to `invoke`, but ideally the example shows how to `bind` the callback once (as opposed to passing to each `invoke`). | DOC: documenting callbacks with LCEL | https://api.github.com/repos/langchain-ai/langchain/issues/14134/comments | 8 | 2023-12-01T18:21:00Z | 2023-12-04T18:34:07Z | https://github.com/langchain-ai/langchain/issues/14134 | 2,021,375,012 | 14,134 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.335
macOS 14.1.1 / Windows 10.0.19045
Python 3.11.6 / 3.10.9
### Who can help?
@naveentatikonda
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Install dependencies
```shell
pip install langchain opensearch-py boto3
```
Import packages
```python
import boto3
from langchain.embeddings import FakeEmbeddings
from langchain.vectorstores.opensearch_vector_search import OpenSearchVectorSearch
from opensearchpy import RequestsAWSV4SignerAuth, RequestsHttpConnection
```
Create AOSS auth
```python
boto_session = boto3.Session()
aoss_auth = RequestsAWSV4SignerAuth(credentials=boto_session.get_credentials(), region=boto_session.region_name, service="aoss")
```
Create AOSS vector search class
```python
embeddings = FakeEmbeddings(size=42)
database = OpenSearchVectorSearch(
    opensearch_url="https://some_aoss_endpoint:443",
    http_auth=aoss_auth,
    connection_class=RequestsHttpConnection,
    index_name="test",
    embedding_function=embeddings,
)
```
Check if AOSS vector search identifies itself as AOSS
```python
database.is_aoss
```
Returns **False**
### Expected behavior
AOSS should correctly identify itself as AOSS given the configuration above, which means that `database.is_aoss` should return **True** in this case.
The issue is related to how the internal `_is_aoss_enabled` method works:
```python
def _is_aoss_enabled(http_auth: Any) -> bool:
    """Check if the service is http_auth is set as `aoss`."""
    if (
        http_auth is not None
        and hasattr(http_auth, "service")
        and http_auth.service == "aoss"
    ):
        return True
    return False
```
The `RequestsAWSV4SignerAuth` does not expose the _service_ property directly, but only via its _signer_ attribute. To respect that, `_is_aoss_enabled` should be adapted as follows:
```python
def _is_aoss_enabled(http_auth: Any) -> bool:
    """Check if the service set via http_auth equals `aoss`."""
    if http_auth is not None:
        if hasattr(http_auth, "service") and http_auth.service == "aoss":
            return True
        elif hasattr(http_auth, "signer") and http_auth.signer.service == "aoss":
            return True
        else:
            return False
    return False
```
| OpenSearchVectorSearch fails to detect AOSS is enabled when using RequestAWS4SignerAuth from opensearch-py | https://api.github.com/repos/langchain-ai/langchain/issues/14129/comments | 3 | 2023-12-01T15:53:57Z | 2024-01-09T10:12:53Z | https://github.com/langchain-ai/langchain/issues/14129 | 2,021,147,121 | 14,129 |
[
"hwchase17",
"langchain"
] | ### System Info
Using latest docker versions for langchain templates
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
First, following the quickstart for Templates until it gets to LangSmith deployment:
1. `pip install -U langchain-cli`
2. `langchain app new my-app`
3. `cd my-app`
4. `langchain app add pirate-speak`
Then, since I don't have access to LangServe, follow the template README for docker deployment:
5. `docker build . -t my-langserve-app`
6. `docker run -e OPENAI_API_KEY=$OPENAI_API_KEY -p 8080:8080 my-langserve-app`
Results in error:
```
Traceback (most recent call last):
File "/usr/local/bin/uvicorn", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 416, in main
run(
File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 587, in run
server.run()
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 61, in run
return asyncio.run(self.serve(sockets=sockets))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 68, in serve
config.load()
File "/usr/local/lib/python3.11/site-packages/uvicorn/config.py", line 467, in load
self.loaded_app = import_from_string(self.app)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/uvicorn/importer.py", line 21, in import_from_string
module = importlib.import_module(module_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/code/app/server.py", line 14, in <module>
add_routes(app, NotImplemented)
File "/usr/local/lib/python3.11/site-packages/langserve/server.py", line 627, in add_routes
input_type_ = _resolve_model(runnable.get_input_schema(), "Input", model_namespace)
^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NotImplementedType' object has no attribute 'get_input_schema'
```
I also tried the `research-assistant` template and got a similar `get_input_schema` error.
Am I missing a setup step?
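For what it's worth, the traceback points at `app/server.py` line 14 still containing the template placeholder `add_routes(app, NotImplemented)`. Presumably that line has to be replaced with the generated route wiring before building the image — roughly like this (the import path is an assumption based on the pirate-speak template docs):

```python
# app/server.py (sketch)
from fastapi import FastAPI
from langserve import add_routes

from pirate_speak.chain import chain as pirate_speak_chain  # assumed module path

app = FastAPI()
add_routes(app, pirate_speak_chain, path="/pirate-speak")
```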
### Expected behavior
Deploying the solution locally | AttributeError: 'NotImplementedType' object has no attribute 'get_input_schema' | https://api.github.com/repos/langchain-ai/langchain/issues/14128/comments | 7 | 2023-12-01T15:26:49Z | 2024-05-10T16:08:40Z | https://github.com/langchain-ai/langchain/issues/14128 | 2,021,099,502 | 14,128 |
[
"hwchase17",
"langchain"
] | ### System Info
* Windows 11 Home (build 22621.2715)
* Python 3.12.0
* Clean virtual environment using Poetry with following dependencies:
```
python = "3.12.0"
langchain = "0.0.344"
spacy = "3.7.2"
spacy-llm = "0.6.4"
```
### Who can help?
@h3l As the creator of the pull request where VolcEngine was introduced
@baskaryan As tag handler of that pull request
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Anything that triggers spaCy's registry to make an inventory, for example:
```python
import spacy
spacy.blank("en")
```
With the last part of the Traceback being:
```
File "PROJECT_FOLDER\.venv\Lib\site-packages\langchain\llms\__init__.py", line 699, in __getattr__
k: v() for k, v in get_type_to_cls_dict().items()
^^^
File "PROJECT_FOLDER\.venv\Lib\site-packages\langchain_core\load\serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "PROJECT_FOLDER\.venv\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for VolcEngineMaasLLM
__root__
Did not find volc_engine_maas_ak, please add an environment variable `VOLC_ACCESSKEY` which contains it, or pass `volc_engine_maas_ak` as a named parameter. (type=value_error)
```
#### What I think causes this
I am quite certain that this is caused by [`langchain.llms.__init__.py:869 (for commit b161f30)`](https://github.com/langchain-ai/langchain/blob/b161f302ff56a14d8d0331cbec4a3efa23d06e1a/libs/langchain/langchain/llms/__init__.py#L869C51-L869C51):
```python
def get_type_to_cls_dict() -> Dict[str, Callable[[], Type[BaseLLM]]]:
    return {
        "ai21": _import_ai21,
        "aleph_alpha": _import_aleph_alpha,
        "amazon_api_gateway": _import_amazon_api_gateway,
        ...
        "qianfan_endpoint": _import_baidu_qianfan_endpoint,
        "yandex_gpt": _import_yandex_gpt,
        # Line below is the only that actually calls the import function, returning a class instead of an import function
        "VolcEngineMaasLLM": _import_volcengine_maas(),
    }
```
The Volc Engine Maas LLM entry is the only one in this dict that actually calls the import function, while all other entries reference the function itself and do not call it.
### Expected behavior
Class to type dict only returns import functions, not actual classes:
```python
def get_type_to_cls_dict() -> Dict[str, Callable[[], Type[BaseLLM]]]:
    return {
        "ai21": _import_ai21,
        "aleph_alpha": _import_aleph_alpha,
        "amazon_api_gateway": _import_amazon_api_gateway,
        ...
        "qianfan_endpoint": _import_baidu_qianfan_endpoint,
        "yandex_gpt": _import_yandex_gpt,
        # What I think would be correct (now without function call)
        "VolcEngineMaasLLM": _import_volcengine_maas,
    }
```
Unfortunately I don't have time to put in a PR myself, but I hope this helps finding the solution!
| Volc Engine MaaS has wrong entry in LLM type to class dict (causing SpaCy to not work with LangChain anymore) | https://api.github.com/repos/langchain-ai/langchain/issues/14127/comments | 4 | 2023-12-01T13:58:13Z | 2023-12-04T08:58:10Z | https://github.com/langchain-ai/langchain/issues/14127 | 2,020,934,355 | 14,127 |
[
"hwchase17",
"langchain"
] | ### Feature request
Currently, the ConversationSummaryBufferMemory only supports the GPT2TokenizerFast and there is no option to pass a custom tokenizer. Different models have different corresponding tokenizers, so it makes sense to allow the option to specify a custom tokenizer.
The tokenizer fetching is implemented in langchain/schema/language_model.py:
```python
@lru_cache(maxsize=None)  # Cache the tokenizer
def get_tokenizer() -> Any:
    try:
        from transformers import GPT2TokenizerFast
    except ImportError:
        raise ImportError(
            "Could not import transformers python package. "
            "This is needed in order to calculate get_token_ids. "
            "Please install it with `pip install transformers`."
        )
    # create a GPT-2 tokenizer instance
    return GPT2TokenizerFast.from_pretrained("gpt2")
```
### Motivation
Using the exact tokenizer that the model uses will make the context summarization cutoff more accurate.
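As an interim workaround (a sketch, not an existing option), the token counting can be overridden on the model class in use; note that some chat model integrations also override `get_num_tokens_from_messages`, so the right override point may differ per integration:

```python
from typing import List

from transformers import AutoTokenizer
from langchain.llms import HuggingFaceTextGenInference  # stand-in for whichever LLM class is in use

_tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")  # assumed checkpoint


class CustomTokenCountingLLM(HuggingFaceTextGenInference):
    def get_token_ids(self, text: str) -> List[int]:
        # Count tokens with the model's own tokenizer instead of the GPT-2 default.
        return _tokenizer.encode(text)
```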
### Your contribution
I could suggest a solution. | Specify a custom tokenizer for the ConversationSummaryBufferMemory | https://api.github.com/repos/langchain-ai/langchain/issues/14124/comments | 1 | 2023-12-01T10:43:31Z | 2024-03-16T16:09:01Z | https://github.com/langchain-ai/langchain/issues/14124 | 2,020,608,208 | 14,124 |
[
"hwchase17",
"langchain"
] |
```python
qa = RetrievalQA.from_chain_type(llm=llm, retriever=docsearch.as_retriever(
    search_kwargs={"k": 1}), verbose=True)
```
How can we add prompts like systemPromptTemplate and humanPromptTemplate to the above code?
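One common pattern for the default "stuff" chain type is to pass a prompt through `chain_type_kwargs` — a sketch (the template text is only an example; a `ChatPromptTemplate` built from system and human message templates can be passed the same way):

```python
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate

template = """Use the following context to answer the question.
{context}

Question: {question}
Answer:"""
prompt = PromptTemplate(template=template, input_variables=["context", "question"])

# llm and docsearch as defined earlier in your code
qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=docsearch.as_retriever(search_kwargs={"k": 1}),
    verbose=True,
    chain_type_kwargs={"prompt": prompt},
)
```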
### Suggestion:
_No response_ | How can we add Prompt in RetrievalQA | https://api.github.com/repos/langchain-ai/langchain/issues/14123/comments | 1 | 2023-12-01T10:41:14Z | 2024-03-17T16:07:41Z | https://github.com/langchain-ai/langchain/issues/14123 | 2,020,604,255 | 14,123 |
[
"hwchase17",
"langchain"
] | ### Feature request
Amazon Bedrock now provides some useful metrics when it has finished:
`"completion":"Response from Bedrock...","stop_reason":"stop_sequence","stop":"\n\nHuman:","amazon-bedrock-invocationMetrics":{"inputTokenCount":16,"outputTokenCount":337,"invocationLatency":4151,"firstByteLatency":136}}`
It would be good to expose these via LangChain, similar to how OpenAI API provides them.
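Purely as an illustration of the desired surface (not an existing API — the `llm_output` key below is hypothetical), something along these lines would cover the cost-calculation case:

```python
from langchain.llms import Bedrock

llm = Bedrock(model_id="anthropic.claude-v2")  # model and region configured as usual
result = llm.generate(["What is the capital of France?"])
# Hypothetical: expose the invocation metrics returned by Bedrock in llm_output.
metrics = (result.llm_output or {}).get("amazon-bedrock-invocationMetrics", {})
print(metrics.get("inputTokenCount"), metrics.get("outputTokenCount"))
```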
### Motivation
To make it easier to calculate the cost of Amazon Bedrock queries.
### Your contribution
I'm happy to test any solution. I'd look to existing developers to guide us how best to expose them. | Expose Amazon Bedrock amazon-bedrock-invocationMetrics in response | https://api.github.com/repos/langchain-ai/langchain/issues/14120/comments | 4 | 2023-12-01T09:41:44Z | 2024-06-18T16:09:25Z | https://github.com/langchain-ai/langchain/issues/14120 | 2,020,491,917 | 14,120 |
[
"hwchase17",
"langchain"
] | I am currently operating a chroma docker on port 8001 and using it for my node js application with the URL CHROMA_URI=http://localhost:8001/, which allows me to read the vectors.
However, when I try to use the same chroma_URI with langchain chroma, I encounter an error message. The error message reads: **{"lc":1,"type":"not_implemented","id":["langchain","vectorstores","chroma","Chroma"]}**.



The above screenshots show my code.
does anyone face this issue? | Trouble Connecting Langchain Chroma to Existing ChromaDB on Port 8001 | https://api.github.com/repos/langchain-ai/langchain/issues/14119/comments | 1 | 2023-12-01T09:09:40Z | 2024-03-17T16:07:36Z | https://github.com/langchain-ai/langchain/issues/14119 | 2,020,436,621 | 14,119 |
[
"hwchase17",
"langchain"
] | ### System Info
- Python 3.9.13
- langchain==0.0.344
- langchain-core==0.0.8
### Who can help?
@agola11
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Define a custom Tool which uses some sort of OpenAI model, for example GPT-3.5
2. Define a Tool agent which uses this Tool
3. Call the Agent inside a with Block with `get_openai_callback()`
4. The agent should now call the tool which uses an OpenAI Model
5. Calculate the total cost using the `get_openai_callback` variable. The reported cost only includes the tokens used by the agent itself, not the tokens used inside the tool (a minimal sketch of this setup follows below).
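A minimal sketch of this setup (the model, tool, and prompt are placeholders, not the exact script):

```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.callbacks import get_openai_callback
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

def summarize(text: str) -> str:
    # The tool itself calls an OpenAI model.
    return llm.predict(f"Summarize in one sentence: {text}")

tools = [Tool(name="summarize", func=summarize, description="Summarize a piece of text")]
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)

with get_openai_callback() as cb:
    agent.run("Use the summarize tool on: LangChain makes building LLM apps easier.")

print(cb.total_tokens, cb.total_cost)  # reportedly excludes the tokens spent inside the tool
```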
### Expected behavior
It should Include all Cost. Including the one from the tool | Cost doesnt work when used inside Tool | https://api.github.com/repos/langchain-ai/langchain/issues/14117/comments | 2 | 2023-12-01T08:37:47Z | 2023-12-01T13:40:19Z | https://github.com/langchain-ai/langchain/issues/14117 | 2,020,382,738 | 14,117 |
[
"hwchase17",
"langchain"
] | ### System Info
I was just trying to run the LLMs tutorial.
```
from langchain.llms import VertexAI
llm = VertexAI()
print(llm("What are some of the pros and cons of Python as a programming language?"))
```
I got this error.
```
File "C:\Users\******\anaconda3\envs\vertex-ai\Lib\site-packages\langchain_core\load\serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "C:\Users\******\anaconda3\envs\vertex-ai\Lib\site-packages\pydantic\v1\main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\******\anaconda3\envs\vertex-ai\Lib\site-packages\pydantic\v1\main.py", line 1102, in validate_model
values = validator(cls_, values)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\******\anaconda3\envs\vertex-ai\Lib\site-packages\langchain\llms\vertexai.py", line 249, in validate_environment
cls._try_init_vertexai(values)
File "C:\Users\******\anaconda3\envs\vertex-ai\Lib\site-packages\langchain\llms\vertexai.py", line 216, in _try_init_vertexai
init_vertexai(**params)
File "C:\Users\******\anaconda3\envs\vertex-ai\Lib\site-packages\langchain\utilities\vertexai.py", line 42, in init_vertexai
import vertexa
```
I tried installing an older version of pydantic, e.g. `pip install pydantic==1.10` or `1.10.10`, but I still get the error about "validation_error".
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Anaconda Navigator
1.Python version == 3.12.0
2.Langchain version == 0.0.344
3.Google-cloud-aiplatform version == 1.36.4
4.Pydantic version == 2.5.2
### Expected behavior
Running code without any issues. | Google Cloud Vertex AI Throws AttributeError: partially initialized module 'vertexai' has no attribute 'init' | https://api.github.com/repos/langchain-ai/langchain/issues/14114/comments | 2 | 2023-12-01T07:13:02Z | 2024-03-17T16:07:31Z | https://github.com/langchain-ai/langchain/issues/14114 | 2,020,257,158 | 14,114 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I need to get all the pages and page ids of a particular Confluence space. How can I get all the page ids?
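One way to do this is via the same `atlassian` client that LangChain's ConfluenceLoader is built on — a sketch (URL, credentials and space key are placeholders; pagination limits may differ per instance):

```python
from atlassian import Confluence

confluence = Confluence(url="https://your-domain.atlassian.net/wiki",
                        username="you@example.com", password="YOUR_API_TOKEN")

pages, start = [], 0
while True:
    batch = confluence.get_all_pages_from_space("SPACEKEY", start=start, limit=100)
    if not batch:
        break
    pages.extend(batch)
    start += len(batch)

page_ids = [page["id"] for page in pages]
print(len(page_ids), page_ids[:5])
```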
### Suggestion:
_No response_ | Issue:How to get all the Pages of Confluence Spaces. | https://api.github.com/repos/langchain-ai/langchain/issues/14113/comments | 9 | 2023-12-01T07:07:47Z | 2024-04-18T16:28:00Z | https://github.com/langchain-ai/langchain/issues/14113 | 2,020,249,681 | 14,113 |