status | repo_name | repo_url | issue_id | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime | updated_file | chunk_content
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,257 | Github integration | ### Feature request
It would be amazing to scan and retrieve all the content from the GitHub API, such as PRs, Issues, and Discussions.
### Motivation
This would allow asking questions about the history of the project, issues that other users might have found, and much more!
### Your contribution
I'm not really a Python developer, so it would take me a while to figure out all the changes required. | https://github.com/langchain-ai/langchain/issues/5257 | https://github.com/langchain-ai/langchain/pull/5408 | 0b3e0dd1d2fb81eeca76b960bb2376bd666608cd | 8259f9b7facae95236dd5156e2a14d87a0e1f90c | "2023-05-25T16:27:21Z" | python | "2023-05-30T03:11:21Z" | langchain/document_loaders/__init__.py | "CollegeConfidentialLoader",
"ConfluenceLoader",
"DataFrameLoader",
"DiffbotLoader",
"DirectoryLoader",
"DiscordChatLoader",
"DocugamiLoader",
"Docx2txtLoader",
"DuckDBLoader",
"EverNoteLoader",
"FacebookChatLoader",
"GCSDirectoryLoader",
"GCSFileLoader",
"GitLoader",
"GitbookLoader",
"GoogleApiClient",
"GoogleApiYoutubeLoader",
"GoogleDriveLoader",
"GutenbergLoader",
"HNLoader",
"HuggingFaceDatasetLoader",
"HuggingFaceDatasetLoader",
"IFixitLoader",
"IMSDbLoader",
"ImageCaptionLoader",
"JoplinLoader",
"JSONLoader",
"MWDumpLoader",
"MastodonTootsLoader",
"MathpixPDFLoader", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,257 | Github integration | ### Feature request
Would be amazing to scan and get all the contents from the Github API, such as PRs, Issues and Discussions.
### Motivation
this would allows to ask questions on the history of the project, issues that other users might have found, and much more!
### Your contribution
Not really a python developer here, would take me a while to figure out all the changes required. | https://github.com/langchain-ai/langchain/issues/5257 | https://github.com/langchain-ai/langchain/pull/5408 | 0b3e0dd1d2fb81eeca76b960bb2376bd666608cd | 8259f9b7facae95236dd5156e2a14d87a0e1f90c | "2023-05-25T16:27:21Z" | python | "2023-05-30T03:11:21Z" | langchain/document_loaders/__init__.py | "ModernTreasuryLoader",
"NotebookLoader",
"NotionDBLoader",
"NotionDirectoryLoader",
"ObsidianLoader",
"OneDriveLoader",
"OnlinePDFLoader",
"OutlookMessageLoader",
"PDFMinerLoader",
"PDFMinerPDFasHTMLLoader",
"PDFPlumberLoader",
"PagedPDFSplitter",
"PlaywrightURLLoader",
"PyMuPDFLoader",
"PyPDFDirectoryLoader",
"PyPDFLoader",
"PyPDFium2Loader",
"PythonLoader",
"ReadTheDocsLoader",
"RedditPostsLoader",
"RoamLoader",
"S3DirectoryLoader",
"S3FileLoader",
"SRTLoader",
"SeleniumURLLoader",
"SitemapLoader",
"SlackDirectoryLoader",
"TelegramChatFileLoader",
"TelegramChatApiLoader",
"SpreedlyLoader", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,257 | Github integration | ### Feature request
Would be amazing to scan and get all the contents from the Github API, such as PRs, Issues and Discussions.
### Motivation
this would allows to ask questions on the history of the project, issues that other users might have found, and much more!
### Your contribution
Not really a python developer here, would take me a while to figure out all the changes required. | https://github.com/langchain-ai/langchain/issues/5257 | https://github.com/langchain-ai/langchain/pull/5408 | 0b3e0dd1d2fb81eeca76b960bb2376bd666608cd | 8259f9b7facae95236dd5156e2a14d87a0e1f90c | "2023-05-25T16:27:21Z" | python | "2023-05-30T03:11:21Z" | langchain/document_loaders/__init__.py | "StripeLoader",
"TextLoader",
"TomlLoader",
"TrelloLoader",
"TwitterTweetLoader",
"UnstructuredAPIFileIOLoader",
"UnstructuredAPIFileLoader",
"UnstructuredEPubLoader",
"UnstructuredEmailLoader",
"UnstructuredFileIOLoader",
"UnstructuredFileLoader",
"UnstructuredHTMLLoader",
"UnstructuredImageLoader",
"UnstructuredMarkdownLoader",
"UnstructuredODTLoader",
"UnstructuredPDFLoader",
"UnstructuredPowerPointLoader",
"UnstructuredRTFLoader",
"UnstructuredURLLoader",
"UnstructuredWordDocumentLoader",
"WeatherDataLoader",
"WebBaseLoader",
"WhatsAppChatLoader",
"WikipediaLoader",
"YoutubeLoader",
"TelegramChatLoader",
"ToMarkdownLoader",
"PsychicLoader",
] |
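The linked PR (#5408) adds a GitHub loader alongside the loaders registered above. As a rough sketch of the kind of API call such a loader wraps: the endpoint and parameters below are standard GitHub REST v3, but the function name and return shape are illustrative assumptions, not the merged implementation.

```python
# Hypothetical sketch (not the merged loader): fetch issues, PRs included,
# from the GitHub REST API, the data a GitHub document loader would wrap.
import requests


def fetch_issues(owner: str, repo: str, state: str = "all") -> list:
    """Return the first page of issues for a repository as parsed JSON."""
    url = f"https://api.github.com/repos/{owner}/{repo}/issues"
    response = requests.get(
        url,
        params={"state": state, "per_page": 100},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


# e.g. issues = fetch_issues("langchain-ai", "langchain")
```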
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,337 | Add MongoDBAtlasVectorSearch vectorstore | ### Feature request
MongoDB Atlas is a fully managed DBaaS powered by the MongoDB database. It also enables Lucene (collocated with the mongod process) for full-text search - this is known as Atlas Search. The PR should allow Langchain users to use the MongoDB Atlas Vector Search feature, where you can store your embeddings in MongoDB documents and create a Lucene vector index to perform a KNN search.
### Motivation
There is currently no way in Langchain to connect to MongoDB Atlas and perform a KNN search.
### Your contribution
I am submitting a PR for this issue soon. | https://github.com/langchain-ai/langchain/issues/5337 | https://github.com/langchain-ai/langchain/pull/5338 | c4b502a47051f50c6e24b824d3db622748458d13 | a61b7f7e7c76ae8667e40cd29cfe30a3868d7dd8 | "2023-05-27T11:41:39Z" | python | "2023-05-30T14:59:01Z" | langchain/vectorstores/__init__.py | """Wrappers on top of vector stores."""
from langchain.vectorstores.analyticdb import AnalyticDB
from langchain.vectorstores.annoy import Annoy
from langchain.vectorstores.atlas import AtlasDB
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.chroma import Chroma
from langchain.vectorstores.deeplake import DeepLake
from langchain.vectorstores.docarray import DocArrayHnswSearch, DocArrayInMemorySearch
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from langchain.vectorstores.faiss import FAISS
from langchain.vectorstores.lancedb import LanceDB
from langchain.vectorstores.milvus import Milvus
from langchain.vectorstores.myscale import MyScale, MyScaleSettings
from langchain.vectorstores.opensearch_vector_search import OpenSearchVectorSearch
from langchain.vectorstores.pinecone import Pinecone
from langchain.vectorstores.qdrant import Qdrant
from langchain.vectorstores.redis import Redis
from langchain.vectorstores.sklearn import SKLearnVectorStore
from langchain.vectorstores.supabase import SupabaseVectorStore
from langchain.vectorstores.tair import Tair
from langchain.vectorstores.typesense import Typesense
from langchain.vectorstores.vectara import Vectara
from langchain.vectorstores.weaviate import Weaviate
from langchain.vectorstores.zilliz import Zilliz
__all__ = [
"Redis",
"ElasticVectorSearch",
"FAISS",
"VectorStore",
"Pinecone",
"Weaviate",
"Qdrant",
"Milvus",
"Zilliz",
"Chroma",
"OpenSearchVectorSearch",
"AtlasDB",
"DeepLake",
"Annoy",
"MyScale",
"MyScaleSettings",
"SKLearnVectorStore",
"SupabaseVectorStore",
"AnalyticDB",
"Vectara",
"Tair",
"LanceDB",
"DocArrayHnswSearch",
"DocArrayInMemorySearch",
"Typesense",
] |
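The fix registers the new vectorstore in this `__init__.py`. A minimal sketch of that edit, assuming the module lands at `langchain.vectorstores.mongodb_atlas`; the path is inferred from the issue title rather than verified against the merged PR.

```python
# Hypothetical registration sketch for the new vectorstore; the module path
# is assumed from the issue title, not the merged PR.
from langchain.vectorstores.mongodb_atlas import MongoDBAtlasVectorSearch

__all__ = [
    # ...existing entries...
    "MongoDBAtlasVectorSearch",
]
```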
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 3,605 | Embeddings normalization and similarity metric | I am new to using Langchain and attempting to make it work with a locally running LLM (Alpaca) and Embeddings model (Sentence Transformer). When configuring the sentence transformer model with `HuggingFaceEmbeddings`, no arguments can be passed to the encode method of the model, specifically `normalize_embeddings=True`. Neither can I specify the distance metric that I want to use in the `similarity_search` method, irrespective of what vector store I am using. So it seems to me I can only create unnormalized embeddings with huggingface models and can only use L2 distance as the similarity metric by default, whereas I want to use the cosine similarity metric, or have normalized embeddings and then use the dot product/L2 distance.
If I am wrong here, can someone point me in the right direction? If not, are there any plans to implement this? | https://github.com/langchain-ai/langchain/issues/3605 | https://github.com/langchain-ai/langchain/pull/5450 | e09afb4b4445c99bebabca4b2beb150ba3a37c5c | c1807d84086c92d1aea2eb7be181204e72ae10d0 | "2023-04-26T18:02:20Z" | python | "2023-05-30T18:57:04Z" | langchain/embeddings/huggingface.py | """Wrapper around HuggingFace embedding models."""
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, Extra, Field
from langchain.embeddings.base import Embeddings
DEFAULT_MODEL_NAME = "sentence-transformers/all-mpnet-base-v2"
DEFAULT_INSTRUCT_MODEL = "hkunlp/instructor-large"
DEFAULT_EMBED_INSTRUCTION = "Represent the document for retrieval: "
DEFAULT_QUERY_INSTRUCTION = (
"Represent the question for retrieving supporting documents: "
)
class HuggingFaceEmbeddings(BaseModel, Embeddings):
"""Wrapper around sentence_transformers embedding models.
To use, you should have the ``sentence_transformers`` python package installed.
Example: |
        .. code-block:: python
            from langchain.embeddings import HuggingFaceEmbeddings

            model_name = "sentence-transformers/all-mpnet-base-v2"
            model_kwargs = {'device': 'cpu'}
            hf = HuggingFaceEmbeddings(model_name=model_name, model_kwargs=model_kwargs)
    """

    client: Any
    model_name: str = DEFAULT_MODEL_NAME
    """Model name to use."""
    cache_folder: Optional[str] = None
    """Path to store models.
    Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable."""
    model_kwargs: Dict[str, Any] = Field(default_factory=dict)
    """Key word arguments to pass to the model."""
    encode_kwargs: Dict[str, Any] = Field(default_factory=dict)
    """Key word arguments to pass when calling the `encode` method of the model."""

    def __init__(self, **kwargs: Any):
        """Initialize the sentence_transformer."""
        super().__init__(**kwargs)
        try:
            import sentence_transformers
        except ImportError as exc:
            raise ImportError(
                "Could not import sentence_transformers python package. "
                "Please install it with `pip install sentence_transformers`."
            ) from exc

        self.client = sentence_transformers.SentenceTransformer(
            self.model_name, cache_folder=self.cache_folder, **self.model_kwargs
        )

    class Config:
        """Configuration for this pydantic object."""
        extra = Extra.forbid

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        """Compute doc embeddings using a HuggingFace transformer model.

        Args:
            texts: The list of texts to embed.

        Returns:
            List of embeddings, one for each text.
        """
        texts = list(map(lambda x: x.replace("\n", " "), texts))
        embeddings = self.client.encode(texts, **self.encode_kwargs)
        return embeddings.tolist()

    def embed_query(self, text: str) -> List[float]:
        """Compute query embeddings using a HuggingFace transformer model.

        Args:
            text: The text to embed.

        Returns:
            Embeddings for the text.
        """
        text = text.replace("\n", " ")
        embedding = self.client.encode(text, **self.encode_kwargs)
        return embedding.tolist()


class HuggingFaceInstructEmbeddings(BaseModel, Embeddings):
    """Wrapper around sentence_transformers embedding models.
    To use, you should have the ``sentence_transformers``
    and ``InstructorEmbedding`` python packages installed.

    Example:
        .. code-block:: python

            from langchain.embeddings import HuggingFaceInstructEmbeddings

            model_name = "hkunlp/instructor-large"
            model_kwargs = {'device': 'cpu'}
            hf = HuggingFaceInstructEmbeddings(
                model_name=model_name, model_kwargs=model_kwargs
            )
    """

    client: Any
    model_name: str = DEFAULT_INSTRUCT_MODEL
    """Model name to use."""
    cache_folder: Optional[str] = None
    """Path to store models.
    Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable."""
    model_kwargs: Dict[str, Any] = Field(default_factory=dict)
    """Key word arguments to pass to the model."""
    embed_instruction: str = DEFAULT_EMBED_INSTRUCTION
    """Instruction to use for embedding documents."""
    query_instruction: str = DEFAULT_QUERY_INSTRUCTION
    """Instruction to use for embedding query."""

    def __init__(self, **kwargs: Any):
        """Initialize the sentence_transformer."""
        super().__init__(**kwargs)
        try:
            from InstructorEmbedding import INSTRUCTOR

            self.client = INSTRUCTOR(
                self.model_name, cache_folder=self.cache_folder, **self.model_kwargs
            )
        except ImportError as e:
            raise ValueError("Dependencies for InstructorEmbedding not found.") from e

    class Config:
        """Configuration for this pydantic object."""
        extra = Extra.forbid

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        """Compute doc embeddings using a HuggingFace instruct model.

        Args:
            texts: The list of texts to embed.

        Returns:
            List of embeddings, one for each text.
        """
        instruction_pairs = [[self.embed_instruction, text] for text in texts]
        embeddings = self.client.encode(instruction_pairs)
        return embeddings.tolist()

    def embed_query(self, text: str) -> List[float]:
        """Compute query embeddings using a HuggingFace instruct model.

        Args:
            text: The text to embed.

        Returns:
            Embeddings for the text.
        """
        instruction_pair = [self.query_instruction, text]
        embedding = self.client.encode([instruction_pair])[0]
        return embedding.tolist() |
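With the `encode_kwargs` field added above, the normalization the issue asks for can be requested by forwarding `normalize_embeddings=True` to sentence-transformers' `encode` method. A minimal usage sketch; the model name is illustrative, and `normalize_embeddings` is a documented sentence-transformers `encode` argument.

```python
# Sketch: request unit-normalized vectors via encode_kwargs, which the class
# above forwards to SentenceTransformer.encode.
from langchain.embeddings import HuggingFaceEmbeddings

hf = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-mpnet-base-v2",
    encode_kwargs={"normalize_embeddings": True},
)
vector = hf.embed_query("foo bar")  # unit length, so dot product == cosine
```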
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 3,605 | Embeddings normalization and similarity metric | | https://github.com/langchain-ai/langchain/issues/3605 | https://github.com/langchain-ai/langchain/pull/5450 | e09afb4b4445c99bebabca4b2beb150ba3a37c5c | c1807d84086c92d1aea2eb7be181204e72ae10d0 | "2023-04-26T18:02:20Z" | python | "2023-05-30T18:57:04Z" | tests/integration_tests/embeddings/test_huggingface.py | """Test huggingface embeddings."""
from langchain.embeddings.huggingface import (
HuggingFaceEmbeddings,
HuggingFaceInstructEmbeddings,
)
def test_huggingface_embedding_documents() -> None:
    """Test huggingface embeddings."""
documents = ["foo bar"]
embedding = HuggingFaceEmbeddings()
output = embedding.embed_documents(documents)
assert len(output) == 1
assert len(output[0]) == 768
def test_huggingface_embedding_query() -> None:
"""Test huggingface embeddings."""
document = "foo bar"
embedding = HuggingFaceEmbeddings(encode_kwargs={"batch_size": 16})
output = embedding.embed_query(document)
assert len(output) == 768
def test_huggingface_instructor_embedding_documents() -> None:
"""Test huggingface embeddings."""
documents = ["foo bar"]
embedding = HuggingFaceInstructEmbeddings()
output = embedding.embed_documents(documents)
assert len(output) == 1
assert len(output[0]) == 768
def test_huggingface_instructor_embedding_query() -> None:
"""Test huggingface embeddings."""
query = "foo bar"
embedding = HuggingFaceInstructEmbeddings()
output = embedding.embed_query(query)
assert len(output) == 768 |
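The issue's premise can be made precise: for unit-normalized vectors, cosine similarity equals the dot product, and squared L2 distance is a monotone function of it, so all three metrics produce the same ranking.

```
\text{For unit vectors } a, b \ (\lVert a \rVert = \lVert b \rVert = 1):
\qquad \cos(a, b) = \frac{a \cdot b}{\lVert a \rVert \, \lVert b \rVert} = a \cdot b,
\qquad \lVert a - b \rVert^2 = \lVert a \rVert^2 - 2\, a \cdot b + \lVert b \rVert^2 = 2 - 2\, a \cdot b.
```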
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,423 | SQLDatabaseToolkit doesn't work well with Postgresql, it will truncate the last double quotation marks in the SQL | ### System Info
Langchain: 0.0.184
Python: 3.10.9
Platform: Windows 10 with Jupyter lab
### Who can help?
@vowelparrot
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
SQLDatabaseToolkit works well if the SQL doesn't end with a double quotation mark; if it does, the toolkit truncates the final double quotation mark, resulting in an endless loop.
Below is the initial code snapshot.

And when I executed it.

The LLM generates the correct SQL, but the toolkit truncates the last double quotation mark.
### Expected behavior
The toolkit should not truncate the trailing double quotation mark for PostgreSQL. | https://github.com/langchain-ai/langchain/issues/5423 | https://github.com/langchain-ai/langchain/pull/5432 | c1807d84086c92d1aea2eb7be181204e72ae10d0 | 1d861dc37a63a41ae2e0983f2ee418efde968ce3 | "2023-05-30T04:02:36Z" | python | "2023-05-30T19:58:47Z" | langchain/agents/mrkl/output_parser.py | import re
from typing import Union
from langchain.agents.agent import AgentOutputParser
from langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS
from langchain.schema import AgentAction, AgentFinish, OutputParserException
FINAL_ANSWER_ACTION = "Final Answer:"
class MRKLOutputParser(AgentOutputParser):
    def get_format_instructions(self) -> str:
        return FORMAT_INSTRUCTIONS

    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        if FINAL_ANSWER_ACTION in text:
            return AgentFinish(
                {"output": text.split(FINAL_ANSWER_ACTION)[-1].strip()}, text
            )
        regex = (
            r"Action\s*\d*\s*:[\s]*(.*?)[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
        )
        match = re.search(regex, text, re.DOTALL)
        if not match:
            if not re.search(r"Action\s*\d*\s*:[\s]*(.*?)", text, re.DOTALL):
                raise OutputParserException(
                    f"Could not parse LLM output: `{text}`",
                    observation="Invalid Format: Missing 'Action:' after 'Thought:'",
                    llm_output=text,
                    send_to_llm=True,
                )
            elif not re.search(
                r"[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)", text, re.DOTALL
            ):
                raise OutputParserException(
                    f"Could not parse LLM output: `{text}`",
                    observation="Invalid Format:"
                    " Missing 'Action Input:' after 'Action:'",
                    llm_output=text,
                    send_to_llm=True,
                )
            else:
                raise OutputParserException(f"Could not parse LLM output: `{text}`")
        action = match.group(1).strip()
        action_input = match.group(2)
        return AgentAction(action, action_input.strip(" ").strip('"'), text)

    @property
    def _type(self) -> str:
        return "mrkl" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,423 | SQLDatabaseToolkit doesn't work well with Postgresql, it will truncate the last double quotation marks in the SQL | | https://github.com/langchain-ai/langchain/issues/5423 | https://github.com/langchain-ai/langchain/pull/5432 | c1807d84086c92d1aea2eb7be181204e72ae10d0 | 1d861dc37a63a41ae2e0983f2ee418efde968ce3 | "2023-05-30T04:02:36Z" | python | "2023-05-30T19:58:47Z" | tests/unit_tests/agents/test_mrkl.py | """Test MRKL functionality."""
from typing import Tuple
import pytest
from langchain.agents.mrkl.base import ZeroShotAgent
from langchain.agents.mrkl.output_parser import MRKLOutputParser
from langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS, PREFIX, SUFFIX
from langchain.agents.tools import Tool
from langchain.prompts import PromptTemplate
from langchain.schema import AgentAction, OutputParserException
from tests.unit_tests.llms.fake_llm import FakeLLM
def get_action_and_input(text: str) -> Tuple[str, str]:
    output = MRKLOutputParser().parse(text)
    if isinstance(output, AgentAction):
        return output.tool, str(output.tool_input)
    else:
        return "Final Answer", output.return_values["output"]


def test_get_action_and_input() -> None:
    """Test getting an action from text."""
    llm_output = (
        "Thought: I need to search for NBA\n" "Action: Search\n" "Action Input: NBA"
    )
    action, action_input = get_action_and_input(llm_output)
    assert action == "Search"
    assert action_input == "NBA"


def test_get_action_and_input_whitespace() -> None:
    """Test getting an action from text."""
    llm_output = "Thought: I need to search for NBA\nAction: Search \nAction Input: NBA"
    action, action_input = get_action_and_input(llm_output)
    assert action == "Search"
    assert action_input == "NBA"
def test_get_action_and_input_newline() -> None:
    """Test getting an action from text where Action Input is a code snippet."""
    llm_output = (
        "Now I need to write a unittest for the function.\n\n"
        "Action: Python\nAction Input:\n```\nimport unittest\n\nunittest.main()\n```"
    )
    action, action_input = get_action_and_input(llm_output)
    assert action == "Python"
    assert action_input == "```\nimport unittest\n\nunittest.main()\n```"


def test_get_action_and_input_newline_after_keyword() -> None:
    """Test getting an action and action input from the text
    when there is a new line before the action
    (after the keywords "Action:" and "Action Input:")
    """
    llm_output = """
I can use the `ls` command to list the contents of the directory \
and `grep` to search for the specific file.
Action:
Terminal
Action Input:
ls -l ~/.bashrc.d/
"""
    action, action_input = get_action_and_input(llm_output)
    assert action == "Terminal"
    assert action_input == "ls -l ~/.bashrc.d/\n"
def test_get_final_answer() -> None:
    """Test getting final answer."""
    llm_output = (
        "Thought: I need to search for NBA\n"
        "Action: Search\n"
        "Action Input: NBA\n"
        "Observation: founded in 1994\n"
        "Thought: I can now answer the question\n"
        "Final Answer: 1994"
    )
    action, action_input = get_action_and_input(llm_output)
    assert action == "Final Answer"
    assert action_input == "1994"


def test_get_final_answer_new_line() -> None:
    """Test getting final answer."""
    llm_output = (
        "Thought: I need to search for NBA\n"
        "Action: Search\n"
        "Action Input: NBA\n"
        "Observation: founded in 1994\n"
        "Thought: I can now answer the question\n"
        "Final Answer:\n1994"
    )
    action, action_input = get_action_and_input(llm_output)
    assert action == "Final Answer"
    assert action_input == "1994"
def test_get_final_answer_multiline() -> None:
    """Test getting final answer that is multiline."""
    llm_output = (
        "Thought: I need to search for NBA\n"
        "Action: Search\n"
        "Action Input: NBA\n"
        "Observation: founded in 1994 and 1993\n"
        "Thought: I can now answer the question\n"
        "Final Answer: 1994\n1993"
    )
    action, action_input = get_action_and_input(llm_output)
    assert action == "Final Answer"
    assert action_input == "1994\n1993"


def test_bad_action_input_line() -> None:
    """Test handling when no action input found."""
    llm_output = "Thought: I need to search for NBA\n" "Action: Search\n" "Thought: NBA"
    with pytest.raises(OutputParserException) as e_info:
        get_action_and_input(llm_output)
    assert e_info.value.observation is not None
def test_bad_action_line() -> None:
    """Test handling when no action found."""
    llm_output = (
        "Thought: I need to search for NBA\n" "Thought: Search\n" "Action Input: NBA"
    )
    with pytest.raises(OutputParserException) as e_info:
        get_action_and_input(llm_output)
    assert e_info.value.observation is not None


def test_from_chains() -> None:
    """Test initializing from chains."""
    chain_configs = [
        Tool(name="foo", func=lambda x: "foo", description="foobar1"),
        Tool(name="bar", func=lambda x: "bar", description="foobar2"),
    ]
    agent = ZeroShotAgent.from_llm_and_tools(FakeLLM(), chain_configs)
    expected_tools_prompt = "foo: foobar1\nbar: foobar2"
    expected_tool_names = "foo, bar"
    expected_template = "\n\n".join(
        [
            PREFIX,
            expected_tools_prompt,
            FORMAT_INSTRUCTIONS.format(tool_names=expected_tool_names),
            SUFFIX,
        ]
    )
    prompt = agent.llm_chain.prompt
    assert isinstance(prompt, PromptTemplate)
    assert prompt.template == expected_template |
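A regression test in the style of the suite above could pin the post-fix behavior; this sketch is hypothetical (it fails against the parser shown earlier, which is exactly the reported bug) and is not taken from the PR.

```python
# Hypothetical regression test: a trailing identifier quote in Action Input
# must survive parsing once the fix lands.
def test_action_input_keeps_inner_quotes() -> None:
    llm_output = (
        "Thought: I need to run the query\n"
        "Action: query_sql_db\n"
        'Action Input: SELECT * FROM "Invoice"'
    )
    action, action_input = get_action_and_input(llm_output)
    assert action == "query_sql_db"
    assert action_input == 'SELECT * FROM "Invoice"'
```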
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,465 | Google BigQuery Loader doesn't take credentials | ### Feature request
I would like to be able to provide credentials to the `bigquery.Client` object.
### Motivation
I cannot access protected datasets without using a service account or other credentials.
### Your contribution
I will submit a PR. | https://github.com/langchain-ai/langchain/issues/5465 | https://github.com/langchain-ai/langchain/pull/5466 | eab4b4ccd7e1ca4dcfdf4c400250494e4503fcb1 | 199cc700a344a2b15dff3a8924746a5ceb1aad7e | "2023-05-30T21:18:13Z" | python | "2023-05-30T23:25:22Z" | langchain/document_loaders/bigquery.py | from typing import List, Optional
from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
class BigQueryLoader(BaseLoader):
"""Loads a query result from BigQuery into a list of documents.
Each document represents one row of the result. The `page_content_columns`
are written into the `page_content` of the document. The `metadata_columns`
are written into the `metadata` of the document. By default, all columns
are written into the `page_content` and none into the `metadata`.
"""
def __init__(
self,
query: str,
project: Optional[str] = None,
page_content_columns: Optional[List[str]] = None,
metadata_columns: Optional[List[str]] = None,
):
self.query = query
self.project = project
self.page_content_columns = page_content_columns
self.metadata_columns = metadata_columns
    def load(self) -> List[Document]:
        try:
            from google.cloud import bigquery
        except ImportError as ex:
            raise ValueError(
                "Could not import google-cloud-bigquery python package. "
                "Please install it with `pip install google-cloud-bigquery`."
            ) from ex

        bq_client = bigquery.Client(self.project)
        query_result = bq_client.query(self.query).result()
        docs: List[Document] = []

        page_content_columns = self.page_content_columns
        metadata_columns = self.metadata_columns

        if page_content_columns is None:
            page_content_columns = [column.name for column in query_result.schema]
        if metadata_columns is None:
            metadata_columns = []

        for row in query_result:
            page_content = "\n".join(
                f"{k}: {v}" for k, v in row.items() if k in page_content_columns
            )
            metadata = {k: v for k, v in row.items() if k in metadata_columns}
            doc = Document(page_content=page_content, metadata=metadata)
            docs.append(doc)

        return docs |
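`bigquery.Client` already accepts a `credentials` argument, so the requested change is mostly plumbing a parameter through the loader. A minimal sketch of the intended usage; passing credentials to `bigquery.Client` is standard google-cloud-bigquery, while the `credentials=` parameter on `BigQueryLoader` is hypothetical here (the merged PR may differ).

```python
# Sketch: pass explicit service-account credentials to BigQuery. The
# credentials= parameter on BigQueryLoader is hypothetical; passing
# credentials to bigquery.Client itself is standard google-cloud-bigquery.
from google.cloud import bigquery
from google.oauth2 import service_account

creds = service_account.Credentials.from_service_account_file("sa.json")
client = bigquery.Client(project="my-project", credentials=creds)

# The loader change would forward the same object:
# loader = BigQueryLoader(query, project="my-project", credentials=creds)
```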
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,582 | Chroma.update_document bug | ### System Info
`update_document` only embeds a single document, but the single `page_content` string is cast to a list before embedding, resulting in a per-character embedding, not a per-document embedding.
https://github.com/hwchase17/langchain/blob/4c572ffe959957b515528a9036b374f56cef027f/langchain/vectorstores/chroma.py#LL359C70-L359C70
### Who can help?
Related to @dev2049 vectorstores
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.docstore.document import Document
from langchain.vectorstores import Chroma
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings
# Initial document content and id
initial_content = "foo"
document_id = "doc1"
# Create an instance of Document with initial content and metadata
original_doc = Document(page_content=initial_content, metadata={"page": "0"})
# Initialize a Chroma instance with the original document
docsearch = Chroma.from_documents(
collection_name="test_collection",
documents=[original_doc],
embedding=FakeEmbeddings(),
ids=[document_id],
)
# Define updated content for the document
updated_content = "updated foo"
# Create a new Document instance with the updated content and the same id
updated_doc = Document(page_content=updated_content, metadata={"page": "0"})
# Update the document in the Chroma instance
docsearch.update_document(document_id=document_id, document=updated_doc)
docsearch_peek = docsearch._collection.peek()
new_embedding = docsearch_peek['embeddings'][docsearch_peek['ids'].index(document_id)]
assert new_embedding \
== docsearch._embedding_function.embed_documents([updated_content[0]])[0] \
== docsearch._embedding_function.embed_documents(list(updated_content))[0] \
== docsearch._embedding_function.embed_documents(['u'])[0]
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
```
### Expected behavior
The last assertion should be true
```
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
``` | https://github.com/langchain-ai/langchain/issues/5582 | https://github.com/langchain-ai/langchain/pull/5584 | 3c6fa9126aa6422084e8c064eda06292d40ac517 | c5a7a85a4e6cd307f83b2e455d466722d75940b2 | "2023-06-01T23:13:30Z" | python | "2023-06-02T18:12:48Z" | langchain/vectorstores/chroma.py | """Wrapper around ChromaDB embeddings platform."""
from __future__ import annotations
import logging
import uuid
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Tuple, Type
import numpy as np
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.utils import xor_args
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance
if TYPE_CHECKING:
import chromadb
import chromadb.config
logger = logging.getLogger()
DEFAULT_K = 4
def _results_to_docs(results: Any) -> List[Document]:
    return [doc for doc, _ in _results_to_docs_and_scores(results)]
def _results_to_docs_and_scores(results: Any) -> List[Tuple[Document, float]]:
    return [
        (Document(page_content=result[0], metadata=result[1] or {}), result[2])
        for result in zip(
            results["documents"][0],
            results["metadatas"][0],
            results["distances"][0],
        )
    ]


class Chroma(VectorStore):
    """Wrapper around ChromaDB embeddings platform.

    To use, you should have the ``chromadb`` python package installed.

    Example:
        .. code-block:: python

            from langchain.vectorstores import Chroma
            from langchain.embeddings.openai import OpenAIEmbeddings

            embeddings = OpenAIEmbeddings()
            vectorstore = Chroma("langchain_store", embeddings.embed_query)
    """

    _LANGCHAIN_DEFAULT_COLLECTION_NAME = "langchain"
    def __init__(
        self,
        collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
        embedding_function: Optional[Embeddings] = None,
        persist_directory: Optional[str] = None,
        client_settings: Optional[chromadb.config.Settings] = None,
        collection_metadata: Optional[Dict] = None,
        client: Optional[chromadb.Client] = None,
    ) -> None:
        """Initialize with Chroma client."""
        try:
            import chromadb
            import chromadb.config
except ImportError:
raise ValueError(
"Could not import chromadb python package. "
"Please install it with `pip install chromadb`."
)
if client is not None:
self._client = client
else:
if client_settings:
self._client_settings = client_settings
else:
self._client_settings = chromadb.config.Settings()
if persist_directory is not None:
self._client_settings = chromadb.config.Settings(
chroma_db_impl="duckdb+parquet",
persist_directory=persist_directory,
)
self._client = chromadb.Client(self._client_settings)
self._embedding_function = embedding_function
self._persist_directory = persist_directory
self._collection = self._client.get_or_create_collection(
name=collection_name,
embedding_function=self._embedding_function.embed_documents
if self._embedding_function is not None
else None,
metadata=collection_metadata,
)
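    # Construction sketch (values are illustrative, not from this issue):
    #   Chroma(collection_name="demo", embedding_function=emb,
    #          persist_directory="./chroma_db")
    # reuses an existing collection of that name or creates it, via
    # get_or_create_collection above.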
@xor_args(("query_texts", "query_embeddings"))
def __query_collection( |
        self,
query_texts: Optional[List[str]] = None,
query_embeddings: Optional[List[List[float]]] = None,
n_results: int = 4,
where: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> List[Document]:
"""Query the chroma collection."""
try:
import chromadb
except ImportError:
raise ValueError(
"Could not import chromadb python package. "
"Please install it with `pip install chromadb`."
)
for i in range(n_results, 0, -1):
try:
return self._collection.query(
query_texts=query_texts,
query_embeddings=query_embeddings,
n_results=i,
where=where,
**kwargs,
)
except chromadb.errors.NotEnoughElementsException:
logger.error(
f"Chroma collection {self._collection.name} "
f"contains fewer than {i} elements." |
                )
raise chromadb.errors.NotEnoughElementsException(
f"No documents found for Chroma collection {self._collection.name}"
)
def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts (Iterable[str]): Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional): Optional list of metadatas.
ids (Optional[List[str]], optional): Optional list of IDs.
Returns:
List[str]: List of IDs of the added texts.
"""
if ids is None:
ids = [str(uuid.uuid1()) for _ in texts]
embeddings = None
if self._embedding_function is not None:
embeddings = self._embedding_function.embed_documents(list(texts))
self._collection.add(
metadatas=metadatas, embeddings=embeddings, documents=texts, ids=ids
)
return ids
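    # Note the contrast with ``update_document`` further below: here
    # ``list(texts)`` is correct because ``texts`` is an iterable of whole
    # documents, so each element receives its own embedding.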
def similarity_search( |
        self,
query: str,
k: int = DEFAULT_K,
filter: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> List[Document]:
"""Run similarity search with Chroma.
Args:
query (str): Query text to search for.
k (int): Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
Returns:
List[Document]: List of documents most similar to the query text.
"""
docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
return [doc for doc, _ in docs_and_scores]
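    # Usage sketch (query and filter are hypothetical):
    #   db.similarity_search("foo", k=2, filter={"page": "0"})
    # delegates to similarity_search_with_score and drops the scores.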
def similarity_search_by_vector( |
        self,
embedding: List[float],
k: int = DEFAULT_K,
filter: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs most similar to embedding vector.
Args:
            embedding (List[float]): Embedding to look up documents similar to.
k (int): Number of Documents to return. Defaults to 4.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
Returns:
List of Documents most similar to the query vector.
"""
results = self.__query_collection(
query_embeddings=embedding, n_results=k, where=filter
)
return _results_to_docs(results)
def similarity_search_with_score(
self,
query: str,
k: int = DEFAULT_K,
filter: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Run similarity search with Chroma with distance.
Args: |
            query (str): Query text to search for.
k (int): Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
Returns:
List[Tuple[Document, float]]: List of documents most similar to the query
text with distance in float.
"""
if self._embedding_function is None:
results = self.__query_collection(
query_texts=[query], n_results=k, where=filter
)
else:
query_embedding = self._embedding_function.embed_query(query)
results = self.__query_collection(
query_embeddings=[query_embedding], n_results=k, where=filter
)
return _results_to_docs_and_scores(results)
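    # The score returned here is a distance, so smaller means closer; an
    # exact match scores 0.0 (see the integration tests below).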
def max_marginal_relevance_search_by_vector(
self,
embedding: List[float],
k: int = DEFAULT_K,
fetch_k: int = 20,
lambda_mult: float = 0.5,
filter: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args: |
            embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
Returns:
List of Documents selected by maximal marginal relevance.
"""
results = self.__query_collection(
query_embeddings=embedding,
n_results=fetch_k,
where=filter,
include=["metadatas", "documents", "distances", "embeddings"],
)
mmr_selected = maximal_marginal_relevance(
np.array(embedding, dtype=np.float32),
results["embeddings"][0],
k=k,
lambda_mult=lambda_mult,
)
candidates = _results_to_docs(results)
selected_results = [r for i, r in enumerate(candidates) if i in mmr_selected]
return selected_results
def max_marginal_relevance_search(
self,
query: str,
k: int = DEFAULT_K, |
        fetch_k: int = 20,
lambda_mult: float = 0.5,
filter: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
Returns:
List of Documents selected by maximal marginal relevance.
"""
if self._embedding_function is None:
raise ValueError(
"For MMR search, you must specify an embedding function on" "creation."
)
embedding = self._embedding_function.embed_query(query)
docs = self.max_marginal_relevance_search_by_vector(
            embedding, k, fetch_k, lambda_mult=lambda_mult, filter=filter
)
return docs
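    # Usage sketch (all values illustrative):
    #   db.max_marginal_relevance_search("foo", k=4, fetch_k=20, lambda_mult=0.5)
    # fetches fetch_k candidates first, then keeps the k results that best
    # trade off relevance against diversity.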
def delete_collection(self) -> None: |
        """Delete the collection."""
self._client.delete_collection(self._collection.name)
def get(self, include: Optional[List[str]] = None) -> Dict[str, Any]:
"""Gets the collection.
Args:
include (Optional[List[str]]): List of fields to include from db.
Defaults to None.
"""
if include is not None:
return self._collection.get(include=include)
else:
return self._collection.get()
def persist(self) -> None:
"""Persist the collection.
This can be used to explicitly persist the data to disk.
It will also be called automatically when the object is destroyed.
"""
if self._persist_directory is None:
raise ValueError(
"You must specify a persist_directory on"
"creation to persist the collection."
)
self._client.persist()
def update_document(self, document_id: str, document: Document) -> None: |
        """Update a document in the collection.
Args:
document_id (str): ID of the document to update.
document (Document): Document to update.
"""
text = document.page_content
metadata = document.metadata
if self._embedding_function is None:
raise ValueError(
"For update, you must specify an embedding function on creation."
)
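        # BUG (the defect reported in this issue): ``list(text)`` below splits
        # the single page_content string into characters, so each character is
        # embedded separately; the fix merged in PR #5584 passes ``[text]``.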
embeddings = self._embedding_function.embed_documents(list(text))
self._collection.update(
ids=[document_id],
embeddings=[embeddings[0]],
documents=[text],
metadatas=[metadata],
)
@classmethod
def from_texts( |
        cls: Type[Chroma],
texts: List[str],
embedding: Optional[Embeddings] = None,
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
persist_directory: Optional[str] = None,
client_settings: Optional[chromadb.config.Settings] = None,
client: Optional[chromadb.Client] = None,
**kwargs: Any,
) -> Chroma:
"""Create a Chroma vectorstore from a raw documents. |
        If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Args:
texts (List[str]): List of texts to add to the collection.
collection_name (str): Name of the collection to create.
persist_directory (Optional[str]): Directory to persist the collection.
embedding (Optional[Embeddings]): Embedding function. Defaults to None.
metadatas (Optional[List[dict]]): List of metadatas. Defaults to None.
ids (Optional[List[str]]): List of document IDs. Defaults to None.
client_settings (Optional[chromadb.config.Settings]): Chroma client settings
Returns:
Chroma: Chroma vectorstore.
"""
chroma_collection = cls(
collection_name=collection_name,
embedding_function=embedding,
persist_directory=persist_directory,
client_settings=client_settings,
client=client,
)
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
return chroma_collection
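    # Usage sketch (illustrative values):
    #   Chroma.from_texts(["foo", "bar"], embedding=emb, persist_directory="./db")
    # is the path exercised by the integration tests below with FakeEmbeddings.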
@classmethod
def from_documents(
cls: Type[Chroma],
documents: List[Document],
embedding: Optional[Embeddings] = None,
ids: Optional[List[str]] = None,
collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
persist_directory: Optional[str] = None, |
        client_settings: Optional[chromadb.config.Settings] = None,
client: Optional[chromadb.Client] = None,
**kwargs: Any,
) -> Chroma:
"""Create a Chroma vectorstore from a list of documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Args:
collection_name (str): Name of the collection to create.
persist_directory (Optional[str]): Directory to persist the collection.
ids (Optional[List[str]]): List of document IDs. Defaults to None.
documents (List[Document]): List of documents to add to the vectorstore.
embedding (Optional[Embeddings]): Embedding function. Defaults to None.
client_settings (Optional[chromadb.config.Settings]): Chroma client settings
Returns:
Chroma: Chroma vectorstore.
"""
texts = [doc.page_content for doc in documents]
metadatas = [doc.metadata for doc in documents]
return cls.from_texts(
texts=texts,
embedding=embedding,
metadatas=metadatas,
ids=ids,
collection_name=collection_name,
persist_directory=persist_directory,
client_settings=client_settings,
client=client,
) |
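    # from_documents simply unpacks page_content and metadata from each
    # Document and delegates to from_texts above.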

| https://github.com/langchain-ai/langchain/issues/5582 | https://github.com/langchain-ai/langchain/pull/5584 | 3c6fa9126aa6422084e8c064eda06292d40ac517 | c5a7a85a4e6cd307f83b2e455d466722d75940b2 | "2023-06-01T23:13:30Z" | python | "2023-06-02T18:12:48Z" | tests/integration_tests/vectorstores/test_chroma.py |
"""Test Chroma functionality."""
import pytest
from langchain.docstore.document import Document
from langchain.vectorstores import Chroma
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings
def test_chroma() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,582 | Chroma.update_document bug | ### System Info
update_document only embeds a single document, but the single page_content string is cast to a list before embedding, resulting in a per-character embedding not a per-document embedding.
https://github.com/hwchase17/langchain/blob/4c572ffe959957b515528a9036b374f56cef027f/langchain/vectorstores/chroma.py#LL359C70-L359C70
### Who can help?
Related to @dev2049 vectorstores
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.docstore.document import Document
from langchain.vectorstores import Chroma
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings
# Initial document content and id
initial_content = "foo"
document_id = "doc1"
# Create an instance of Document with initial content and metadata
original_doc = Document(page_content=initial_content, metadata={"page": "0"})
# Initialize a Chroma instance with the original document
docsearch = Chroma.from_documents(
collection_name="test_collection",
documents=[original_doc],
embedding=FakeEmbeddings(),
ids=[document_id],
)
# Define updated content for the document
updated_content = "updated foo"
# Create a new Document instance with the updated content and the same id
updated_doc = Document(page_content=updated_content, metadata={"page": "0"})
# Update the document in the Chroma instance
docsearch.update_document(document_id=document_id, document=updated_doc)
docsearch_peek = docsearch._collection.peek()
new_embedding = docsearch_peek['embeddings'][docsearch_peek['ids'].index(document_id)]
assert new_embedding \
== docsearch._embedding_function.embed_documents([updated_content[0]])[0] \
== docsearch._embedding_function.embed_documents(list(updated_content))[0] \
== docsearch._embedding_function.embed_documents(['u'])[0]
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
```
### Expected behavior
The last assertion should be true
```
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
``` | https://github.com/langchain-ai/langchain/issues/5582 | https://github.com/langchain-ai/langchain/pull/5584 | 3c6fa9126aa6422084e8c064eda06292d40ac517 | c5a7a85a4e6cd307f83b2e455d466722d75940b2 | "2023-06-01T23:13:30Z" | python | "2023-06-02T18:12:48Z" | tests/integration_tests/vectorstores/test_chroma.py | """Test end to end construction and search."""
texts = ["foo", "bar", "baz"]
docsearch = Chroma.from_texts(
collection_name="test_collection", texts=texts, embedding=FakeEmbeddings()
)
output = docsearch.similarity_search("foo", k=1)
assert output == [Document(page_content="foo")]
@pytest.mark.asyncio
async def test_chroma_async() -> None:
"""Test end to end construction and search."""
texts = ["foo", "bar", "baz"]
docsearch = Chroma.from_texts(
collection_name="test_collection", texts=texts, embedding=FakeEmbeddings()
)
output = await docsearch.asimilarity_search("foo", k=1)
assert output == [Document(page_content="foo")]
def test_chroma_with_metadatas() -> None:
"""Test end to end construction and search."""
texts = ["foo", "bar", "baz"]
metadatas = [{"page": str(i)} for i in range(len(texts))]
docsearch = Chroma.from_texts(
collection_name="test_collection",
texts=texts,
embedding=FakeEmbeddings(),
metadatas=metadatas,
)
output = docsearch.similarity_search("foo", k=1)
assert output == [Document(page_content="foo", metadata={"page": "0"})]
def test_chroma_with_metadatas_with_scores() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,582 | Chroma.update_document bug | ### System Info
update_document only embeds a single document, but the single page_content string is cast to a list before embedding, resulting in a per-character embedding not a per-document embedding.
https://github.com/hwchase17/langchain/blob/4c572ffe959957b515528a9036b374f56cef027f/langchain/vectorstores/chroma.py#LL359C70-L359C70
### Who can help?
Related to @dev2049 vectorstores
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.docstore.document import Document
from langchain.vectorstores import Chroma
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings
# Initial document content and id
initial_content = "foo"
document_id = "doc1"
# Create an instance of Document with initial content and metadata
original_doc = Document(page_content=initial_content, metadata={"page": "0"})
# Initialize a Chroma instance with the original document
docsearch = Chroma.from_documents(
collection_name="test_collection",
documents=[original_doc],
embedding=FakeEmbeddings(),
ids=[document_id],
)
# Define updated content for the document
updated_content = "updated foo"
# Create a new Document instance with the updated content and the same id
updated_doc = Document(page_content=updated_content, metadata={"page": "0"})
# Update the document in the Chroma instance
docsearch.update_document(document_id=document_id, document=updated_doc)
docsearch_peek = docsearch._collection.peek()
new_embedding = docsearch_peek['embeddings'][docsearch_peek['ids'].index(document_id)]
assert new_embedding \
== docsearch._embedding_function.embed_documents([updated_content[0]])[0] \
== docsearch._embedding_function.embed_documents(list(updated_content))[0] \
== docsearch._embedding_function.embed_documents(['u'])[0]
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
```
### Expected behavior
The last assertion should be true
```
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
``` | https://github.com/langchain-ai/langchain/issues/5582 | https://github.com/langchain-ai/langchain/pull/5584 | 3c6fa9126aa6422084e8c064eda06292d40ac517 | c5a7a85a4e6cd307f83b2e455d466722d75940b2 | "2023-06-01T23:13:30Z" | python | "2023-06-02T18:12:48Z" | tests/integration_tests/vectorstores/test_chroma.py | """Test end to end construction and scored search."""
texts = ["foo", "bar", "baz"]
metadatas = [{"page": str(i)} for i in range(len(texts))]
docsearch = Chroma.from_texts(
collection_name="test_collection",
texts=texts,
embedding=FakeEmbeddings(),
metadatas=metadatas,
)
output = docsearch.similarity_search_with_score("foo", k=1)
assert output == [(Document(page_content="foo", metadata={"page": "0"}), 0.0)]
def test_chroma_search_filter() -> None:
"""Test end to end construction and search with metadata filtering."""
texts = ["far", "bar", "baz"]
metadatas = [{"first_letter": "{}".format(text[0])} for text in texts]
docsearch = Chroma.from_texts(
collection_name="test_collection",
texts=texts,
embedding=FakeEmbeddings(),
metadatas=metadatas,
)
output = docsearch.similarity_search("far", k=1, filter={"first_letter": "f"})
assert output == [Document(page_content="far", metadata={"first_letter": "f"})]
output = docsearch.similarity_search("far", k=1, filter={"first_letter": "b"})
assert output == [Document(page_content="bar", metadata={"first_letter": "b"})]
def test_chroma_search_filter_with_scores() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,582 | Chroma.update_document bug | ### System Info
update_document only embeds a single document, but the single page_content string is cast to a list before embedding, resulting in a per-character embedding, not a per-document embedding.
https://github.com/hwchase17/langchain/blob/4c572ffe959957b515528a9036b374f56cef027f/langchain/vectorstores/chroma.py#LL359C70-L359C70
### Who can help?
Related to @dev2049 vectorstores
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.docstore.document import Document
from langchain.vectorstores import Chroma
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings
# Initial document content and id
initial_content = "foo"
document_id = "doc1"
# Create an instance of Document with initial content and metadata
original_doc = Document(page_content=initial_content, metadata={"page": "0"})
# Initialize a Chroma instance with the original document
docsearch = Chroma.from_documents(
collection_name="test_collection",
documents=[original_doc],
embedding=FakeEmbeddings(),
ids=[document_id],
)
# Define updated content for the document
updated_content = "updated foo"
# Create a new Document instance with the updated content and the same id
updated_doc = Document(page_content=updated_content, metadata={"page": "0"})
# Update the document in the Chroma instance
docsearch.update_document(document_id=document_id, document=updated_doc)
docsearch_peek = docsearch._collection.peek()
new_embedding = docsearch_peek['embeddings'][docsearch_peek['ids'].index(document_id)]
assert new_embedding \
== docsearch._embedding_function.embed_documents([updated_content[0]])[0] \
== docsearch._embedding_function.embed_documents(list(updated_content))[0] \
== docsearch._embedding_function.embed_documents(['u'])[0]
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
```
### Expected behavior
The last assertion should be true
```
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
``` | https://github.com/langchain-ai/langchain/issues/5582 | https://github.com/langchain-ai/langchain/pull/5584 | 3c6fa9126aa6422084e8c064eda06292d40ac517 | c5a7a85a4e6cd307f83b2e455d466722d75940b2 | "2023-06-01T23:13:30Z" | python | "2023-06-02T18:12:48Z" | tests/integration_tests/vectorstores/test_chroma.py | """Test end to end construction and scored search with metadata filtering."""
texts = ["far", "bar", "baz"]
metadatas = [{"first_letter": "{}".format(text[0])} for text in texts]
docsearch = Chroma.from_texts(
collection_name="test_collection",
texts=texts,
embedding=FakeEmbeddings(),
metadatas=metadatas,
)
output = docsearch.similarity_search_with_score(
"far", k=1, filter={"first_letter": "f"}
)
assert output == [
(Document(page_content="far", metadata={"first_letter": "f"}), 0.0)
]
output = docsearch.similarity_search_with_score(
"far", k=1, filter={"first_letter": "b"}
)
assert output == [
(Document(page_content="bar", metadata={"first_letter": "b"}), 1.0)
]
def test_chroma_with_persistence() -> None: |
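Note that the position-based `FakeEmbeddings` used in these tests can mask the bug, since many different inputs map to the same vector. A content-aware fake makes the regression observable; the class below is an illustrative sketch (the name `ConsistentFakeEmbeddings` is my own, not necessarily what the test suite ships):
```
from typing import List

class ConsistentFakeEmbeddings:
    """Deterministic fake embeddings that depend on the text itself."""

    def __init__(self) -> None:
        self.known_texts: List[str] = []

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        vectors = []
        for text in texts:
            if text not in self.known_texts:
                self.known_texts.append(text)
            # Nine constant dimensions plus one that encodes text identity,
            # so "updated foo" and "u" no longer collide.
            vectors.append([1.0] * 9 + [float(self.known_texts.index(text))])
        return vectors

    def embed_query(self, text: str) -> List[float]:
        return self.embed_documents([text])[0]
```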
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,582 | Chroma.update_document bug | ### System Info
update_document only embeds a single document, but the single page_content string is cast to a list before embedding, resulting in a per-character embedding, not a per-document embedding.
https://github.com/hwchase17/langchain/blob/4c572ffe959957b515528a9036b374f56cef027f/langchain/vectorstores/chroma.py#LL359C70-L359C70
### Who can help?
Related to @dev2049 vectorstores
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.docstore.document import Document
from langchain.vectorstores import Chroma
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings
# Initial document content and id
initial_content = "foo"
document_id = "doc1"
# Create an instance of Document with initial content and metadata
original_doc = Document(page_content=initial_content, metadata={"page": "0"})
# Initialize a Chroma instance with the original document
docsearch = Chroma.from_documents(
collection_name="test_collection",
documents=[original_doc],
embedding=FakeEmbeddings(),
ids=[document_id],
)
# Define updated content for the document
updated_content = "updated foo"
# Create a new Document instance with the updated content and the same id
updated_doc = Document(page_content=updated_content, metadata={"page": "0"})
# Update the document in the Chroma instance
docsearch.update_document(document_id=document_id, document=updated_doc)
docsearch_peek = docsearch._collection.peek()
new_embedding = docsearch_peek['embeddings'][docsearch_peek['ids'].index(document_id)]
assert new_embedding \
== docsearch._embedding_function.embed_documents([updated_content[0]])[0] \
== docsearch._embedding_function.embed_documents(list(updated_content))[0] \
== docsearch._embedding_function.embed_documents(['u'])[0]
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
```
### Expected behavior
The last assertion should be true
```
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
``` | https://github.com/langchain-ai/langchain/issues/5582 | https://github.com/langchain-ai/langchain/pull/5584 | 3c6fa9126aa6422084e8c064eda06292d40ac517 | c5a7a85a4e6cd307f83b2e455d466722d75940b2 | "2023-06-01T23:13:30Z" | python | "2023-06-02T18:12:48Z" | tests/integration_tests/vectorstores/test_chroma.py | """Test end to end construction and search, with persistence."""
chroma_persist_dir = "./tests/persist_dir"
collection_name = "test_collection"
texts = ["foo", "bar", "baz"]
docsearch = Chroma.from_texts(
collection_name=collection_name,
texts=texts,
embedding=FakeEmbeddings(),
persist_directory=chroma_persist_dir,
)
output = docsearch.similarity_search("foo", k=1)
assert output == [Document(page_content="foo")]
docsearch.persist()
docsearch = Chroma(
collection_name=collection_name,
embedding_function=FakeEmbeddings(),
persist_directory=chroma_persist_dir,
)
output = docsearch.similarity_search("foo", k=1)
docsearch.delete_collection()
def test_chroma_mmr() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,582 | Chroma.update_document bug | ### System Info
update_document only embeds a single document, but the single page_content string is cast to a list before embedding, resulting in a per-character embedding, not a per-document embedding.
https://github.com/hwchase17/langchain/blob/4c572ffe959957b515528a9036b374f56cef027f/langchain/vectorstores/chroma.py#LL359C70-L359C70
### Who can help?
Related to @dev2049 vectorstores
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.docstore.document import Document
from langchain.vectorstores import Chroma
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings
# Initial document content and id
initial_content = "foo"
document_id = "doc1"
# Create an instance of Document with initial content and metadata
original_doc = Document(page_content=initial_content, metadata={"page": "0"})
# Initialize a Chroma instance with the original document
docsearch = Chroma.from_documents(
collection_name="test_collection",
documents=[original_doc],
embedding=FakeEmbeddings(),
ids=[document_id],
)
# Define updated content for the document
updated_content = "updated foo"
# Create a new Document instance with the updated content and the same id
updated_doc = Document(page_content=updated_content, metadata={"page": "0"})
# Update the document in the Chroma instance
docsearch.update_document(document_id=document_id, document=updated_doc)
docsearch_peek = docsearch._collection.peek()
new_embedding = docsearch_peek['embeddings'][docsearch_peek['ids'].index(document_id)]
assert new_embedding \
== docsearch._embedding_function.embed_documents([updated_content[0]])[0] \
== docsearch._embedding_function.embed_documents(list(updated_content))[0] \
== docsearch._embedding_function.embed_documents(['u'])[0]
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
```
### Expected behavior
The last assertion should be true
```
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
``` | https://github.com/langchain-ai/langchain/issues/5582 | https://github.com/langchain-ai/langchain/pull/5584 | 3c6fa9126aa6422084e8c064eda06292d40ac517 | c5a7a85a4e6cd307f83b2e455d466722d75940b2 | "2023-06-01T23:13:30Z" | python | "2023-06-02T18:12:48Z" | tests/integration_tests/vectorstores/test_chroma.py | """Test end to end construction and search."""
texts = ["foo", "bar", "baz"]
docsearch = Chroma.from_texts(
collection_name="test_collection", texts=texts, embedding=FakeEmbeddings()
)
output = docsearch.max_marginal_relevance_search("foo", k=1)
assert output == [Document(page_content="foo")]
def test_chroma_mmr_by_vector() -> None:
"""Test end to end construction and search."""
texts = ["foo", "bar", "baz"]
embeddings = FakeEmbeddings()
docsearch = Chroma.from_texts(
collection_name="test_collection", texts=texts, embedding=embeddings
)
embedded_query = embeddings.embed_query("foo")
output = docsearch.max_marginal_relevance_search_by_vector(embedded_query, k=1)
assert output == [Document(page_content="foo")]
def test_chroma_with_include_parameter() -> None:
"""Test end to end construction and include parameter."""
texts = ["foo", "bar", "baz"]
docsearch = Chroma.from_texts(
collection_name="test_collection", texts=texts, embedding=FakeEmbeddings()
)
output = docsearch.get(include=["embeddings"])
assert output["embeddings"] is not None
output = docsearch.get()
assert output["embeddings"] is None
def test_chroma_update_document() -> None: |
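The `max_marginal_relevance_search` tests above exercise MMR, which greedily balances similarity to the query against redundancy with results already picked. A simplified, self-contained sketch of the selection rule (the library's own version works on numpy arrays and handles more edge cases):
```
import numpy as np

def mmr_select(query_vec, doc_vecs, k=4, lambda_mult=0.5):
    """Return indices of k vectors maximizing relevance minus redundancy."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    selected = []
    candidates = list(range(len(doc_vecs)))
    while candidates and len(selected) < k:
        best, best_score = None, float("-inf")
        for i in candidates:
            relevance = cosine(query_vec, doc_vecs[i])
            redundancy = max(
                (cosine(doc_vecs[i], doc_vecs[j]) for j in selected), default=0.0
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected
```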
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,582 | Chroma.update_document bug | ### System Info
update_document only embeds a single document, but the single page_content string is cast to a list before embedding, resulting in a per-character embedding, not a per-document embedding.
https://github.com/hwchase17/langchain/blob/4c572ffe959957b515528a9036b374f56cef027f/langchain/vectorstores/chroma.py#LL359C70-L359C70
### Who can help?
Related to @dev2049 vectorstores
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.docstore.document import Document
from langchain.vectorstores import Chroma
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings
# Initial document content and id
initial_content = "foo"
document_id = "doc1"
# Create an instance of Document with initial content and metadata
original_doc = Document(page_content=initial_content, metadata={"page": "0"})
# Initialize a Chroma instance with the original document
docsearch = Chroma.from_documents(
collection_name="test_collection",
documents=[original_doc],
embedding=FakeEmbeddings(),
ids=[document_id],
)
# Define updated content for the document
updated_content = "updated foo"
# Create a new Document instance with the updated content and the same id
updated_doc = Document(page_content=updated_content, metadata={"page": "0"})
# Update the document in the Chroma instance
docsearch.update_document(document_id=document_id, document=updated_doc)
docsearch_peek = docsearch._collection.peek()
new_embedding = docsearch_peek['embeddings'][docsearch_peek['ids'].index(document_id)]
assert new_embedding \
== docsearch._embedding_function.embed_documents([updated_content[0]])[0] \
== docsearch._embedding_function.embed_documents(list(updated_content))[0] \
== docsearch._embedding_function.embed_documents(['u'])[0]
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
```
### Expected behavior
The last assertion should be true
```
assert new_embedding == docsearch._embedding_function.embed_documents([updated_content])[0]
``` | https://github.com/langchain-ai/langchain/issues/5582 | https://github.com/langchain-ai/langchain/pull/5584 | 3c6fa9126aa6422084e8c064eda06292d40ac517 | c5a7a85a4e6cd307f83b2e455d466722d75940b2 | "2023-06-01T23:13:30Z" | python | "2023-06-02T18:12:48Z" | tests/integration_tests/vectorstores/test_chroma.py | """Test the update_document function in the Chroma class."""
initial_content = "foo"
document_id = "doc1"
original_doc = Document(page_content=initial_content, metadata={"page": "0"})
docsearch = Chroma.from_documents(
collection_name="test_collection",
documents=[original_doc],
embedding=FakeEmbeddings(),
ids=[document_id],
)
updated_content = "updated foo"
updated_doc = Document(page_content=updated_content, metadata={"page": "0"})
docsearch.update_document(document_id=document_id, document=updated_doc)
output = docsearch.similarity_search(updated_content, k=1)
assert output == [Document(page_content=updated_content, metadata={"page": "0"})] |
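Once the fix lands, the `similarity_search` assertion above can be complemented by checking the stored vector directly, mirroring the report's expected behavior. A small helper sketch (the function name is illustrative):
```
def assert_whole_text_embedding(docsearch, document_id: str, updated_content: str) -> None:
    peek = docsearch._collection.peek()
    stored = peek["embeddings"][peek["ids"].index(document_id)]
    # The stored vector must equal embedding the full string once,
    # not the embedding of its first character.
    expected = docsearch._embedding_function.embed_documents([updated_content])[0]
    assert list(stored) == list(expected)
```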
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,623 | cannot import name 'FigmaFileLoader' | ### System Info
langchain==0.0.189
os:windows11
python=3.10.11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import FigmaFileLoader
### Expected behavior
expected:
load the module
error:
ImportError: cannot import name 'FigmaFileLoader' from 'langchain.document_loaders' (C:\Users\xxx\AppData\Local\miniconda3\envs\xxx\lib\site-packages\langchain\document_loaders\__init__.py)
comments:
I checked langchain\document_loaders\__init__.py and there is no reference to FigmaFileLoader
| https://github.com/langchain-ai/langchain/issues/5623 | https://github.com/langchain-ai/langchain/pull/5636 | 20ec1173f40a13cba73d79cc0efa4653d2489d65 | 9a7488a5ce65aaf727464f02a10811719b517f11 | "2023-06-02T16:39:41Z" | python | "2023-06-02T21:58:41Z" | langchain/document_loaders/__init__.py | """All different types of document loaders."""
from langchain.document_loaders.airbyte_json import AirbyteJSONLoader
from langchain.document_loaders.apify_dataset import ApifyDatasetLoader
from langchain.document_loaders.arxiv import ArxivLoader
from langchain.document_loaders.azlyrics import AZLyricsLoader
from langchain.document_loaders.azure_blob_storage_container import (
AzureBlobStorageContainerLoader,
)
from langchain.document_loaders.azure_blob_storage_file import (
AzureBlobStorageFileLoader,
)
from langchain.document_loaders.bibtex import BibtexLoader
from langchain.document_loaders.bigquery import BigQueryLoader
from langchain.document_loaders.bilibili import BiliBiliLoader
from langchain.document_loaders.blackboard import BlackboardLoader
from langchain.document_loaders.blockchain import BlockchainDocumentLoader
from langchain.document_loaders.chatgpt import ChatGPTLoader
from langchain.document_loaders.college_confidential import CollegeConfidentialLoader
from langchain.document_loaders.confluence import ConfluenceLoader
from langchain.document_loaders.conllu import CoNLLULoader
from langchain.document_loaders.csv_loader import CSVLoader |
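The import error is consistent with `FigmaFileLoader` simply never being re-exported here. Assuming the loader lives in `langchain.document_loaders.figma`, following the naming pattern of every other module above, the fix is one more import:
```
# Missing re-export; mirrors the pattern used for the other loaders.
from langchain.document_loaders.figma import FigmaFileLoader
```
plus a matching `"FigmaFileLoader"` entry in the `__all__` list further down.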
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,623 | cannot import name 'FigmaFileLoader' | ### System Info
langchain==0.0.189
os:windows11
python=3.10.11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import FigmaFileLoader
### Expected behavior
expected:
load the module
error:
ImportError: cannot import name 'FigmaFileLoader' from 'langchain.document_loaders' (C:\Users\xxx\AppData\Local\miniconda3\envs\xxx\lib\site-packages\langchain\document_loaders\__init__.py)
comments:
I checked langchain\document_loaders\__init__.py and there is no reference to FigmaFileLoader
| https://github.com/langchain-ai/langchain/issues/5623 | https://github.com/langchain-ai/langchain/pull/5636 | 20ec1173f40a13cba73d79cc0efa4653d2489d65 | 9a7488a5ce65aaf727464f02a10811719b517f11 | "2023-06-02T16:39:41Z" | python | "2023-06-02T21:58:41Z" | langchain/document_loaders/__init__.py | from langchain.document_loaders.dataframe import DataFrameLoader
from langchain.document_loaders.diffbot import DiffbotLoader
from langchain.document_loaders.directory import DirectoryLoader
from langchain.document_loaders.discord import DiscordChatLoader
from langchain.document_loaders.docugami import DocugamiLoader
from langchain.document_loaders.duckdb_loader import DuckDBLoader
from langchain.document_loaders.email import (
OutlookMessageLoader,
UnstructuredEmailLoader,
)
from langchain.document_loaders.epub import UnstructuredEPubLoader
from langchain.document_loaders.evernote import EverNoteLoader
from langchain.document_loaders.facebook_chat import FacebookChatLoader
from langchain.document_loaders.gcs_directory import GCSDirectoryLoader
from langchain.document_loaders.gcs_file import GCSFileLoader
from langchain.document_loaders.git import GitLoader
from langchain.document_loaders.gitbook import GitbookLoader
from langchain.document_loaders.github import GitHubIssuesLoader
from langchain.document_loaders.googledrive import GoogleDriveLoader
from langchain.document_loaders.gutenberg import GutenbergLoader
from langchain.document_loaders.hn import HNLoader
from langchain.document_loaders.html import UnstructuredHTMLLoader
from langchain.document_loaders.html_bs import BSHTMLLoader
from langchain.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoader
from langchain.document_loaders.ifixit import IFixitLoader
from langchain.document_loaders.image import UnstructuredImageLoader
from langchain.document_loaders.image_captions import ImageCaptionLoader
from langchain.document_loaders.imsdb import IMSDbLoader
from langchain.document_loaders.joplin import JoplinLoader
from langchain.document_loaders.json_loader import JSONLoader |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,623 | cannot import name 'FigmaFileLoader' | ### System Info
langchain==0.0.189
os:windows11
python=3.10.11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import FigmaFileLoader
### Expected behavior
expected:
load the module
error:
ImportError: cannot import name 'FigmaFileLoader' from 'langchain.document_loaders' (C:\Users\xxx\AppData\Local\miniconda3\envs\xxx\lib\site-packages\langchain\document_loaders\__init__.py)
comments:
I checked langchain\document_loaders\__init__.py and there is no reference to FigmaFileLoader
| https://github.com/langchain-ai/langchain/issues/5623 | https://github.com/langchain-ai/langchain/pull/5636 | 20ec1173f40a13cba73d79cc0efa4653d2489d65 | 9a7488a5ce65aaf727464f02a10811719b517f11 | "2023-06-02T16:39:41Z" | python | "2023-06-02T21:58:41Z" | langchain/document_loaders/__init__.py | from langchain.document_loaders.markdown import UnstructuredMarkdownLoader
from langchain.document_loaders.mastodon import MastodonTootsLoader
from langchain.document_loaders.max_compute import MaxComputeLoader
from langchain.document_loaders.mediawikidump import MWDumpLoader
from langchain.document_loaders.modern_treasury import ModernTreasuryLoader
from langchain.document_loaders.notebook import NotebookLoader
from langchain.document_loaders.notion import NotionDirectoryLoader
from langchain.document_loaders.notiondb import NotionDBLoader
from langchain.document_loaders.obsidian import ObsidianLoader
from langchain.document_loaders.odt import UnstructuredODTLoader
from langchain.document_loaders.onedrive import OneDriveLoader
from langchain.document_loaders.pdf import (
MathpixPDFLoader,
OnlinePDFLoader,
PDFMinerLoader,
PDFMinerPDFasHTMLLoader,
PDFPlumberLoader,
PyMuPDFLoader,
PyPDFDirectoryLoader,
PyPDFium2Loader,
PyPDFLoader,
UnstructuredPDFLoader,
)
from langchain.document_loaders.powerpoint import UnstructuredPowerPointLoader
from langchain.document_loaders.psychic import PsychicLoader
from langchain.document_loaders.pyspark_dataframe import PySparkDataFrameLoader
from langchain.document_loaders.python import PythonLoader
from langchain.document_loaders.readthedocs import ReadTheDocsLoader
from langchain.document_loaders.reddit import RedditPostsLoader
from langchain.document_loaders.roam import RoamLoader |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,623 | cannot import name 'FigmaFileLoader' | ### System Info
langchain==0.0.189
os:windows11
python=3.10.11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import FigmaFileLoader
### Expected behavior
expected:
load the module
error:
ImportError: cannot import name 'FigmaFileLoader' from 'langchain.document_loaders' (C:\Users\xxx\AppData\Local\miniconda3\envs\xxx\lib\site-packages\langchain\document_loaders\__init__.py)
comments:
I checked langchain\document_loaders\__init__.py and there is no reference to FigmaFileLoader
| https://github.com/langchain-ai/langchain/issues/5623 | https://github.com/langchain-ai/langchain/pull/5636 | 20ec1173f40a13cba73d79cc0efa4653d2489d65 | 9a7488a5ce65aaf727464f02a10811719b517f11 | "2023-06-02T16:39:41Z" | python | "2023-06-02T21:58:41Z" | langchain/document_loaders/__init__.py | from langchain.document_loaders.rtf import UnstructuredRTFLoader
from langchain.document_loaders.s3_directory import S3DirectoryLoader
from langchain.document_loaders.s3_file import S3FileLoader
from langchain.document_loaders.sitemap import SitemapLoader
from langchain.document_loaders.slack_directory import SlackDirectoryLoader
from langchain.document_loaders.spreedly import SpreedlyLoader
from langchain.document_loaders.srt import SRTLoader
from langchain.document_loaders.stripe import StripeLoader
from langchain.document_loaders.telegram import (
TelegramChatApiLoader,
TelegramChatFileLoader,
)
from langchain.document_loaders.text import TextLoader
from langchain.document_loaders.tomarkdown import ToMarkdownLoader
from langchain.document_loaders.toml import TomlLoader
from langchain.document_loaders.trello import TrelloLoader
from langchain.document_loaders.twitter import TwitterTweetLoader
from langchain.document_loaders.unstructured import (
UnstructuredAPIFileIOLoader,
UnstructuredAPIFileLoader,
UnstructuredFileIOLoader,
UnstructuredFileLoader,
)
from langchain.document_loaders.url import UnstructuredURLLoader
from langchain.document_loaders.url_playwright import PlaywrightURLLoader
from langchain.document_loaders.url_selenium import SeleniumURLLoader
from langchain.document_loaders.weather import WeatherDataLoader
from langchain.document_loaders.web_base import WebBaseLoader
from langchain.document_loaders.whatsapp_chat import WhatsAppChatLoader
from langchain.document_loaders.wikipedia import WikipediaLoader |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,623 | cannot import name 'FigmaFileLoader' | ### System Info
langchain==0.0.189
os:windows11
python=3.10.11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import FigmaFileLoader
### Expected behavior
expected:
load the module
error:
ImportError: cannot import name 'FigmaFileLoader' from 'langchain.document_loaders' (C:\Users\xxx\AppData\Local\miniconda3\envs\xxx\lib\site-packages\langchain\document_loaders\__init__.py)
comments:
I checked langchain\document_loaders\__init__.py and there is no reference to FigmaFileLoader
| https://github.com/langchain-ai/langchain/issues/5623 | https://github.com/langchain-ai/langchain/pull/5636 | 20ec1173f40a13cba73d79cc0efa4653d2489d65 | 9a7488a5ce65aaf727464f02a10811719b517f11 | "2023-06-02T16:39:41Z" | python | "2023-06-02T21:58:41Z" | langchain/document_loaders/__init__.py | from langchain.document_loaders.word_document import (
Docx2txtLoader,
UnstructuredWordDocumentLoader,
)
from langchain.document_loaders.youtube import (
GoogleApiClient,
GoogleApiYoutubeLoader,
YoutubeLoader,
)
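# Backwards-compatible aliases for loaders renamed in earlier releases.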
PagedPDFSplitter = PyPDFLoader
TelegramChatLoader = TelegramChatFileLoader
__all__ = [
"AZLyricsLoader",
"AirbyteJSONLoader",
"ApifyDatasetLoader",
"ArxivLoader",
"AzureBlobStorageContainerLoader",
"AzureBlobStorageFileLoader",
"BSHTMLLoader",
"BibtexLoader",
"BigQueryLoader",
"BiliBiliLoader",
"BlackboardLoader",
"BlockchainDocumentLoader",
"CSVLoader",
"ChatGPTLoader",
"CoNLLULoader",
"CollegeConfidentialLoader",
"ConfluenceLoader",
"DataFrameLoader", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,623 | cannot import name 'FigmaFileLoader' | ### System Info
langchain==0.0.189
os:windows11
python=3.10.11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import FigmaFileLoader
### Expected behavior
expected:
load the module
error:
ImportError: cannot import name 'FigmaFileLoader' from 'langchain.document_loaders' (C:\Users\xxx\AppData\Local\miniconda3\envs\xxx\lib\site-packages\langchain\document_loaders\__init__.py)
comments:
I checked langchain\document_loaders\__init__.py and there is no reference to FigmaFileLoader
| https://github.com/langchain-ai/langchain/issues/5623 | https://github.com/langchain-ai/langchain/pull/5636 | 20ec1173f40a13cba73d79cc0efa4653d2489d65 | 9a7488a5ce65aaf727464f02a10811719b517f11 | "2023-06-02T16:39:41Z" | python | "2023-06-02T21:58:41Z" | langchain/document_loaders/__init__.py | "DiffbotLoader",
"DirectoryLoader",
"DiscordChatLoader",
"DocugamiLoader",
"Docx2txtLoader",
"DuckDBLoader",
"EverNoteLoader",
"FacebookChatLoader",
"GCSDirectoryLoader",
"GCSFileLoader",
"GitLoader",
"GitHubIssuesLoader",
"GitbookLoader",
"GoogleApiClient",
"GoogleApiYoutubeLoader",
"GoogleDriveLoader",
"GutenbergLoader",
"HNLoader",
"HuggingFaceDatasetLoader",
"HuggingFaceDatasetLoader",
"IFixitLoader",
"IMSDbLoader",
"ImageCaptionLoader",
"JoplinLoader",
"JSONLoader",
"MWDumpLoader",
"MastodonTootsLoader",
"MathpixPDFLoader",
"MaxComputeLoader",
"ModernTreasuryLoader", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,623 | cannot import name 'FigmaFileLoader' | ### System Info
langchain==0.0.189
os:windows11
python=3.10.11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import FigmaFileLoader
### Expected behavior
expected:
load the module
error:
ImportError: cannot import name 'FigmaFileLoader' from 'langchain.document_loaders' (C:\Users\xxx\AppData\Local\miniconda3\envs\xxx\lib\site-packages\langchain\document_loaders\__init__.py)
comments:
I checked langchain\document_loaders\__init__.py and there is no reference to FigmaFileLoader
| https://github.com/langchain-ai/langchain/issues/5623 | https://github.com/langchain-ai/langchain/pull/5636 | 20ec1173f40a13cba73d79cc0efa4653d2489d65 | 9a7488a5ce65aaf727464f02a10811719b517f11 | "2023-06-02T16:39:41Z" | python | "2023-06-02T21:58:41Z" | langchain/document_loaders/__init__.py | "NotebookLoader",
"NotionDBLoader",
"NotionDirectoryLoader",
"ObsidianLoader",
"OneDriveLoader",
"OnlinePDFLoader",
"OutlookMessageLoader",
"PDFMinerLoader",
"PDFMinerPDFasHTMLLoader",
"PDFPlumberLoader",
"PagedPDFSplitter",
"PlaywrightURLLoader",
"PyMuPDFLoader",
"PyPDFDirectoryLoader",
"PyPDFLoader",
"PyPDFium2Loader",
"PySparkDataFrameLoader",
"PythonLoader",
"ReadTheDocsLoader",
"RedditPostsLoader",
"RoamLoader",
"S3DirectoryLoader",
"S3FileLoader",
"SRTLoader",
"SeleniumURLLoader",
"SitemapLoader",
"SlackDirectoryLoader",
"TelegramChatFileLoader",
"TelegramChatApiLoader",
"SpreedlyLoader", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,623 | cannot import name 'FigmaFileLoader' | ### System Info
langchain==0.0.189
os:windows11
python=3.10.11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import FigmaFileLoader
### Expected behavior
expected:
load the module
error:
ImportError: cannot import name 'FigmaFileLoader' from 'langchain.document_loaders' (C:\Users\xxx\AppData\Local\miniconda3\envs\xxx\lib\site-packages\langchain\document_loaders\__init__.py)
comments:
I checked langchain\document_loaders\__init__.py and there is no reference to FigmaFileLoader
| https://github.com/langchain-ai/langchain/issues/5623 | https://github.com/langchain-ai/langchain/pull/5636 | 20ec1173f40a13cba73d79cc0efa4653d2489d65 | 9a7488a5ce65aaf727464f02a10811719b517f11 | "2023-06-02T16:39:41Z" | python | "2023-06-02T21:58:41Z" | langchain/document_loaders/__init__.py | "StripeLoader",
"TextLoader",
"TomlLoader",
"TrelloLoader",
"TwitterTweetLoader",
"UnstructuredAPIFileIOLoader",
"UnstructuredAPIFileLoader",
"UnstructuredEPubLoader",
"UnstructuredEmailLoader",
"UnstructuredFileIOLoader",
"UnstructuredFileLoader",
"UnstructuredHTMLLoader",
"UnstructuredImageLoader",
"UnstructuredMarkdownLoader",
"UnstructuredODTLoader",
"UnstructuredPDFLoader",
"UnstructuredPowerPointLoader",
"UnstructuredRTFLoader",
"UnstructuredURLLoader",
"UnstructuredWordDocumentLoader",
"WeatherDataLoader",
"WebBaseLoader",
"WhatsAppChatLoader",
"WikipediaLoader",
"YoutubeLoader",
"TelegramChatLoader",
"ToMarkdownLoader",
"PsychicLoader",
] |
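With the import and the `__all__` entry in place, the reporter's failing statement doubles as a smoke test:
```
# Should no longer raise ImportError once FigmaFileLoader is re-exported.
from langchain.document_loaders import FigmaFileLoader

print(FigmaFileLoader)  # sanity check that the symbol resolves
```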
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user; unfortunately, the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate within as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | """Load Data from a Confluence Space"""
import logging
from io import BytesIO
from typing import Any, Callable, List, Optional, Union
from tenacity import (
before_sleep_log,
retry,
stop_after_attempt,
wait_exponential,
)
from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
logger = logging.getLogger(__name__)
class ConfluenceLoader(BaseLoader): |
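The feature request maps cleanly onto this module's constructor: `atlassian-python-api`'s `Confluence` client already accepts a `token` keyword for personal access tokens, so the loader mainly needs to accept one and forward it. A hedged sketch of the extra branch (the parameter name and validation details are assumptions, not the merged implementation):
```
if token:
    # Personal access token auth (Confluence Server/Data Center).
    self.confluence = Confluence(
        url=url, token=token, cloud=cloud, **confluence_kwargs
    )
elif oauth2:
    self.confluence = Confluence(
        url=url, oauth2=oauth2, cloud=cloud, **confluence_kwargs
    )
else:
    self.confluence = Confluence(
        url=url,
        username=username,
        password=api_key,
        cloud=cloud,
        **confluence_kwargs,
    )
```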
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user; unfortunately, the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate within as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | """
Load Confluence pages. Port of https://llamahub.ai/l/confluence
This currently supports both username/api_key and OAuth2 login.
Specify a list of page_ids and/or a space_key to load the corresponding pages into
Document objects; if both are specified, the union of both sets will be returned.
You can also specify a boolean `include_attachments` to include attachments; this
is set to False by default. If set to True, all attachments will be downloaded and
ConfluenceReader will extract the text from the attachments and add it to the
Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG,
SVG, Word and Excel.
Hint: space_key and page_id can both be found in the URL of a page in Confluence
- https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
Example:
.. code-block:: python
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="https://yoursite.atlassian.com/wiki",
username="me",
api_key="12345"
)
documents = loader.load(space_key="SPACE",limit=50)
:param url: base URL of the Confluence instance, e.g. https://yoursite.atlassian.com/wiki
:type url: str
:param api_key: Atlassian API key used together with `username`, defaults to None
:type api_key: str, optional
:param username: user name to authenticate with, defaults to None
:type username: str, optional |
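Under the requested change, the class docstring's example gains a PAT variant; `token` here is the proposed parameter, not part of the released signature shown above:
```
from langchain.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://yoursite.atlassian.com/wiki",
    token="my-personal-access-token",  # proposed PAT authentication
)
documents = loader.load(space_key="SPACE", limit=50)
```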
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user; unfortunately, the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate within as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | :param oauth2: _description_, defaults to {}
:type oauth2: dict, optional
:param cloud: whether the instance is Confluence Cloud, defaults to True
:type cloud: bool, optional
:param number_of_retries: How many times to retry, defaults to 3
:type number_of_retries: Optional[int], optional
:param min_retry_seconds: defaults to 2
:type min_retry_seconds: Optional[int], optional
:param max_retry_seconds: defaults to 10
:type max_retry_seconds: Optional[int], optional
:param confluence_kwargs: additional kwargs to initialize confluence with
:type confluence_kwargs: dict, optional
:raises ValueError: Errors while validating input
:raises ImportError: Required dependencies not installed.
"""
def __init__(
self,
url: str,
api_key: Optional[str] = None,
username: Optional[str] = None,
oauth2: Optional[dict] = None,
cloud: Optional[bool] = True,
number_of_retries: Optional[int] = 3,
min_retry_seconds: Optional[int] = 2,
max_retry_seconds: Optional[int] = 10,
confluence_kwargs: Optional[dict] = None,
):
confluence_kwargs = confluence_kwargs or {}
errors = ConfluenceLoader.validate_init_args(url, api_key, username, oauth2)
if errors: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user; unfortunately, the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate within as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | raise ValueError(f"Error(s) while validating input: {errors}")
self.base_url = url
self.number_of_retries = number_of_retries
self.min_retry_seconds = min_retry_seconds
self.max_retry_seconds = max_retry_seconds
try:
from atlassian import Confluence
except ImportError:
raise ImportError(
"`atlassian` package not found, please run "
"`pip install atlassian-python-api`"
)
if oauth2:
self.confluence = Confluence(
url=url, oauth2=oauth2, cloud=cloud, **confluence_kwargs
)
else:
self.confluence = Confluence(
url=url,
username=username,
password=api_key,
cloud=cloud,
**confluence_kwargs,
)
@staticmethod
def validate_init_args(
url: Optional[str] = None,
api_key: Optional[str] = None,
username: Optional[str] = None,
oauth2: Optional[dict] = None, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user; unfortunately, the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate within as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | ) -> Union[List, None]:
"""Validates proper combinations of init arguments"""
errors = []
if url is None:
errors.append("Must provide `base_url`")
if (api_key and not username) or (username and not api_key):
errors.append(
"If one of `api_key` or `username` is provided, "
"the other must be as well."
)
if (api_key or username) and oauth2:
errors.append(
"Cannot provide a value for `api_key` and/or "
"`username` and provide a value for `oauth2`"
)
# dict_keys never compares equal to a list, so compare as sets
if oauth2 and set(oauth2.keys()) != {
"access_token",
"access_token_secret",
"consumer_key",
"key_cert",
}:
errors.append(
"You have either omitted required keys or added extra "
"keys to the oauth2 dictionary. Key values should be "
"`['access_token', 'access_token_secret', 'consumer_key', 'key_cert']`"
)
if errors:
return errors
return None
def load( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user; unfortunately, the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate within as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | self,
space_key: Optional[str] = None,
page_ids: Optional[List[str]] = None,
label: Optional[str] = None,
cql: Optional[str] = None,
include_restricted_content: bool = False,
include_archived_content: bool = False,
include_attachments: bool = False,
include_comments: bool = False,
limit: Optional[int] = 50,
max_pages: Optional[int] = 1000,
) -> List[Document]:
"""
:param space_key: Space key retrieved from a confluence URL, defaults to None
:type space_key: Optional[str], optional
:param page_ids: List of specific page IDs to load, defaults to None
:type page_ids: Optional[List[str]], optional
:param label: Get all pages with this label, defaults to None
:type label: Optional[str], optional
:param cql: CQL Expression, defaults to None
:type cql: Optional[str], optional
:param include_restricted_content: whether to include pages with view restrictions, defaults to False
:type include_restricted_content: bool, optional
:param include_archived_content: Whether to include archived content,
defaults to False
:type include_archived_content: bool, optional
:param include_attachments: whether to extract text from attachments, defaults to False
:type include_attachments: bool, optional |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user; unfortunately, the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate within as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | :param include_comments: defaults to False
:type include_comments: bool, optional
:param limit: Maximum number of pages to retrieve per request, defaults to 50
:type limit: int, optional
:param max_pages: Maximum number of pages to retrieve in total, defaults to 1000
:type max_pages: int, optional
:raises ValueError: if none of `space_key`, `page_ids`, `label`, or `cql` is given
:raises ImportError: if a required dependency is not installed
:return: the list of loaded Documents
:rtype: List[Document]
"""
if not space_key and not page_ids and not label and not cql:
raise ValueError(
"Must specify at least one among `space_key`, `page_ids`, "
"`label`, `cql` parameters."
)
docs = []
if space_key:
pages = self.paginate_request(
self.confluence.get_all_pages_from_space,
space=space_key,
limit=limit,
max_pages=max_pages,
status="any" if include_archived_content else "current",
expand="body.storage.value",
)
docs += self.process_pages(
pages, include_restricted_content, include_attachments, include_comments
)
if label: |
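For orientation, the selectors combine as a union: `space_key` and `cql` results are processed directly, while `label` only collects page ids that get merged into `page_ids` below. A small usage sketch with illustrative values:
```
# Pages from one space plus anything labelled "runbook",
# with attachment text OCR'd into the page content.
docs = loader.load(
    space_key="ENG",
    label="runbook",
    include_attachments=True,
    max_pages=200,
)
```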
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user; unfortunately, the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate within as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | pages = self.paginate_request(
self.confluence.get_all_pages_by_label,
label=label,
limit=limit,
max_pages=max_pages,
)
ids_by_label = [page["id"] for page in pages]
if page_ids:
page_ids = list(set(page_ids + ids_by_label))
else:
page_ids = list(set(ids_by_label))
if cql:
pages = self.paginate_request(
self.confluence.cql,
cql=cql,
limit=limit,
max_pages=max_pages,
include_archived_spaces=include_archived_content,
expand="body.storage.value",
)
docs += self.process_pages(
pages, include_restricted_content, include_attachments, include_comments
)
if page_ids:
for page_id in page_ids:
get_page = retry(
reraise=True,
stop=stop_after_attempt(
self.number_of_retries
), |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user; unfortunately, the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate within as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | wait=wait_exponential(
multiplier=1,
min=self.min_retry_seconds,
max=self.max_retry_seconds,
),
before_sleep=before_sleep_log(logger, logging.WARNING),
)(self.confluence.get_page_by_id)
page = get_page(page_id=page_id, expand="body.storage.value")
if not include_restricted_content and not self.is_public_page(page):
continue
doc = self.process_page(page, include_attachments, include_comments)
docs.append(doc)
return docs
def paginate_request(self, retrieval_method: Callable, **kwargs: Any) -> List:
"""Paginate the various methods to retrieve groups of pages.
Unfortunately, due to page size, sometimes the Confluence API
doesn't match the limit value. If `limit` is >100, Confluence
seems to cap the response to 100. Also, due to the Atlassian Python
package, we don't get the "next" values from the "_links" key because
they only return the value from the results key. So here, the pagination
starts from 0 and goes until the max_pages, getting the `limit` number
of pages with each request. We have to manually check if there
are more docs based on the length of the returned list of pages, rather than
just checking for the presence of a `next` key in the response like this page
would have you do:
https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/
:param retrieval_method: Function used to retrieve docs
:type retrieval_method: callable
:return: List of documents
:rtype: List |
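The docstring boils down to: fetch `limit`-sized batches at a moving offset of `start=len(docs)` until a batch comes back empty or `max_pages` is hit. A toy model of that loop, with names of my own choosing:
```
def paginate(fetch, limit=50, max_pages=120):
    """Toy version of paginate_request: offset by results gathered so far."""
    results = []
    while len(results) < max_pages:
        batch = fetch(start=len(results), limit=limit)
        if not batch:  # an empty page means the server has nothing left
            break
        results.extend(batch)
    return results[:max_pages]

# Fake fetcher over 130 items: three batches, then truncation to max_pages.
data = list(range(130))
assert paginate(lambda start, limit: data[start:start + limit]) == data[:120]
```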
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user; unfortunately, the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate within as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | """
max_pages = kwargs.pop("max_pages")
docs: List[dict] = []
while len(docs) < max_pages:
get_pages = retry(
reraise=True,
stop=stop_after_attempt(
self.number_of_retries
),
wait=wait_exponential(
multiplier=1,
min=self.min_retry_seconds,
max=self.max_retry_seconds,
),
before_sleep=before_sleep_log(logger, logging.WARNING),
)(retrieval_method)
batch = get_pages(**kwargs, start=len(docs))
if not batch:
break
docs.extend(batch)
return docs[:max_pages]
def is_public_page(self, page: dict) -> bool:
"""Check if a page is publicly accessible."""
restrictions = self.confluence.get_all_restrictions_for_content(page["id"])
return (
page["status"] == "current"
and not restrictions["read"]["restrictions"]["user"]["results"]
and not restrictions["read"]["restrictions"]["group"]["results"]
)
def process_pages( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user; unfortunately, the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate within as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | self,
pages: List[dict],
include_restricted_content: bool,
include_attachments: bool,
include_comments: bool,
) -> List[Document]:
"""Process a list of pages into a list of documents."""
docs = []
for page in pages:
if not include_restricted_content and not self.is_public_page(page):
continue
doc = self.process_page(page, include_attachments, include_comments)
docs.append(doc)
return docs
def process_page(
self,
page: dict,
include_attachments: bool,
include_comments: bool,
) -> Document:
try:
from bs4 import BeautifulSoup
except ImportError:
raise ImportError(
"`beautifulsoup4` package not found, please run "
"`pip install beautifulsoup4`"
)
if include_attachments:
attachment_texts = self.process_attachment(page["id"])
else: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user; unfortunately, the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate within as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | attachment_texts = []
text = BeautifulSoup(page["body"]["storage"]["value"], "lxml").get_text(
" ", strip=True
) + "".join(attachment_texts)
if include_comments:
comments = self.confluence.get_page_comments(
page["id"], expand="body.view.value", depth="all"
)["results"]
comment_texts = [
BeautifulSoup(comment["body"]["view"]["value"], "lxml").get_text(
" ", strip=True
)
for comment in comments
]
text = text + "".join(comment_texts)
return Document(
page_content=text,
metadata={
"title": page["title"],
"id": page["id"],
"source": self.base_url.strip("/") + page["_links"]["webui"],
},
)
def process_attachment(self, page_id: str) -> List[str]:
try:
from PIL import Image
except ImportError:
raise ImportError(
"`Pillow` package not found, " "please run `pip install Pillow`"
) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user; unfortunately, the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate within as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | attachments = self.confluence.get_attachments_from_content(page_id)["results"]
texts = []
for attachment in attachments:
media_type = attachment["metadata"]["mediaType"]
absolute_url = self.base_url + attachment["_links"]["download"]
title = attachment["title"]
if media_type == "application/pdf":
text = title + self.process_pdf(absolute_url)
elif (
media_type == "image/png"
or media_type == "image/jpg"
or media_type == "image/jpeg"
):
text = title + self.process_image(absolute_url)
elif (
media_type == "application/vnd.openxmlformats-officedocument"
".wordprocessingml.document"
):
text = title + self.process_doc(absolute_url)
elif media_type == "application/vnd.ms-excel":
text = title + self.process_xls(absolute_url)
elif media_type == "image/svg+xml":
text = title + self.process_svg(absolute_url)
else:
continue
texts.append(text)
return texts
def process_pdf(self, link: str) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user; unfortunately, the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate within as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | try:
import pytesseract
from pdf2image import convert_from_bytes
except ImportError:
raise ImportError(
"`pytesseract` or `pdf2image` package not found, "
"please run `pip install pytesseract pdf2image`"
)
response = self.confluence.request(path=link, absolute=True)
text = ""
if (
response.status_code != 200
or response.content == b""
or response.content is None
):
return text
try:
images = convert_from_bytes(response.content)
except ValueError:
return text
for i, image in enumerate(images):
image_text = pytesseract.image_to_string(image)
text += f"Page {i + 1}:\n{image_text}\n\n"
return text
def process_image(self, link: str) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user, unfortunately the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate, as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | try:
import pytesseract
from PIL import Image
except ImportError:
raise ImportError(
"`pytesseract` or `Pillow` package not found, "
"please run `pip install pytesseract Pillow`"
)
response = self.confluence.request(path=link, absolute=True)
text = ""
if (
response.status_code != 200
or response.content == b""
or response.content is None
):
return text
try:
image = Image.open(BytesIO(response.content))
except OSError:
return text
return pytesseract.image_to_string(image)
def process_doc(self, link: str) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user, unfortunately the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate, as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | try:
import docx2txt
except ImportError:
raise ImportError(
"`docx2txt` package not found, please run `pip install docx2txt`"
)
response = self.confluence.request(path=link, absolute=True)
text = ""
if (
response.status_code != 200
or response.content == b""
or response.content is None
):
return text
file_data = BytesIO(response.content)
return docx2txt.process(file_data)
def process_xls(self, link: str) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user, unfortunately the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate, as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | try:
import xlrd
except ImportError:
raise ImportError("`xlrd` package not found, please run `pip install xlrd`")
response = self.confluence.request(path=link, absolute=True)
text = ""
if (
response.status_code != 200
or response.content == b""
or response.content is None
):
return text
workbook = xlrd.open_workbook(file_contents=response.content)
for sheet in workbook.sheets():
text += f"{sheet.name}:\n"
for row in range(sheet.nrows):
for col in range(sheet.ncols):
text += f"{sheet.cell_value(row, col)}\t"
text += "\n"
text += "\n"
return text
def process_svg(self, link: str) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,191 | Support personal access token (PAT) in ConfluenceLoader | ### Issue you'd like to raise.
The [Atlassian API](https://atlassian-python-api.readthedocs.io/) (including Confluence) supports just passing a PAT (as token=<PAT>) to authenticate as a user, unfortunately the LangChain abstraction doesn't.
### Suggestion:
Add an optional "token" parameter to ConfluenceLoader and use it to authenticate, as an alternative to api_key/password/oauth. | https://github.com/langchain-ai/langchain/issues/5191 | https://github.com/langchain-ai/langchain/pull/5385 | b81f98b8a66999117246fbc134fc09d64a04e230 | ae2cf1f598360e1fc83839fdcd363378d663c936 | "2023-05-24T11:15:54Z" | python | "2023-06-03T21:57:49Z" | langchain/document_loaders/confluence.py | try:
import pytesseract
from PIL import Image
from reportlab.graphics import renderPM
from svglib.svglib import svg2rlg
except ImportError:
raise ImportError(
"`pytesseract`, `Pillow`, `reportlab` or `svglib` package not found, "
"please run `pip install pytesseract Pillow reportlab svglib`"
)
response = self.confluence.request(path=link, absolute=True)
text = ""
if (
response.status_code != 200
or response.content == b""
or response.content is None
):
return text
drawing = svg2rlg(BytesIO(response.content))
img_data = BytesIO()
renderPM.drawToFile(drawing, img_data, fmt="PNG")
img_data.seek(0)
image = Image.open(img_data)
return pytesseract.image_to_string(image) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,400 | Add the ability to pass the prompt through to Executor Agents for enrichment during PlanAndExecute | ### Feature request
Add the ability to pass the original prompt through to the ExecutorAgent so that the original explicit context is not lost during a PlanAndExecute run.
### Motivation
PlanAndExecute agents can create a plan of steps that depends on context given in the original prompt. However, this context is lost after the plan is created and is being executed.
Often the plan is formed in a way that refers to that prior context, losing information. For example, I gave the following prompt, and gave the agent access only to the PythonREPL tool:
```py
prompt = (
f"Task: Analyse the customer data available in the database with path '{db_path}'. Tell me the average "
"sales by month."
)
```
In the above example, `db_path` is a fully formed string which can be passed directly to `sqlalchemy.create_engine`.
The first step in the plan formed was: `Connect to the database using the given path`. This would ordinarily be fine; however, the context of the "given path" was lost, as it was not part of the reformed prompt passed to the executor. Optionally including the original prompt in the template should assist with this.
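One possible shape of the fix, sketched at the level of the executor's human message template; the `{objective}` variable and the wording are assumptions, not a confirmed implementation:

```python
# sketch: prepend the original objective so it stays visible at every step
TASK_PREFIX = """{objective}

"""

HUMAN_MESSAGE_TEMPLATE = """Previous steps: {previous_steps}

Current objective: {current_step}

{agent_scratchpad}"""

# the executor would then format TASK_PREFIX + HUMAN_MESSAGE_TEMPLATE,
# keeping e.g. the fully formed db_path in view while executing each step
```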
### Your contribution
I will be submitting a PR shortly with a proposed solution :) | https://github.com/langchain-ai/langchain/issues/5400 | https://github.com/langchain-ai/langchain/pull/5401 | ae2cf1f598360e1fc83839fdcd363378d663c936 | 1f4abb265a9fd6c520835c3bebe8243b077495b5 | "2023-05-29T13:19:30Z" | python | "2023-06-03T21:59:09Z" | langchain/experimental/plan_and_execute/agent_executor.py | from typing import Any, Dict, List, Optional
from pydantic import Field
from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.base import Chain
from langchain.experimental.plan_and_execute.executors.base import BaseExecutor
from langchain.experimental.plan_and_execute.planners.base import BasePlanner
from langchain.experimental.plan_and_execute.schema import (
BaseStepContainer,
ListStepContainer,
)
class PlanAndExecute(Chain):
planner: BasePlanner
executor: BaseExecutor
step_container: BaseStepContainer = Field(default_factory=ListStepContainer)
input_key: str = "input"
output_key: str = "output"
@property
def input_keys(self) -> List[str]:
return [self.input_key]
@property
def output_keys(self) -> List[str]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,400 | Add the ability to pass the prompt through to Executor Agents for enrichment during PlanAndExecute | ### Feature request
Add the ability to pass the original prompt through to the ExecutorAgent so that the original explicit context is not lost during a PlanAndExecute run.
### Motivation
PlanAndExecute agents can create a plan of steps that depends on context given in the original prompt. However, this context is lost after the plan is created and is being executed.
Often the plan is formed in a way that refers to that prior context, losing information. For example, I gave the following prompt, and gave the agent access only to the PythonREPL tool:
```py
prompt = (
f"Task: Analyse the customer data available in the database with path '{db_path}'. Tell me the average "
"sales by month."
)
```
In the above example, `db_path` is a fully formed string which can be passed directly to `sqlalchemy.create_engine`.
The first step in the plan formed was: `Connect to the database using the given path`. This would ordinarily be fine; however, the context of the "given path" was lost, as it was not part of the reformed prompt passed to the executor. Optionally including the original prompt in the template should assist with this.
### Your contribution
I will be submitting a PR shortly with a proposed solution :) | https://github.com/langchain-ai/langchain/issues/5400 | https://github.com/langchain-ai/langchain/pull/5401 | ae2cf1f598360e1fc83839fdcd363378d663c936 | 1f4abb265a9fd6c520835c3bebe8243b077495b5 | "2023-05-29T13:19:30Z" | python | "2023-06-03T21:59:09Z" | langchain/experimental/plan_and_execute/agent_executor.py | return [self.output_key]
def _call(
self,
inputs: Dict[str, Any],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
plan = self.planner.plan(
inputs,
callbacks=run_manager.get_child() if run_manager else None,
)
if run_manager:
run_manager.on_text(str(plan), verbose=self.verbose)
for step in plan.steps:
_new_inputs = {"previous_steps": self.step_container, "current_step": step}
new_inputs = {**_new_inputs, **inputs}
response = self.executor.step(
new_inputs,
callbacks=run_manager.get_child() if run_manager else None,
)
if run_manager:
run_manager.on_text(
f"*****\n\nStep: {step.value}", verbose=self.verbose
)
run_manager.on_text(
f"\n\nResponse: {response.response}", verbose=self.verbose
)
self.step_container.add_step(step, response)
return {self.output_key: self.step_container.get_final_response()} |
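Worth noting in `_call` above: the original `inputs` are merged into `new_inputs`, so the original prompt already reaches the executor chain; the gap is that the default executor template never references it. A self-contained illustration with placeholder values:

```python
# what the executor receives for one step (all values are illustrative)
step_container = "...results of earlier steps..."
step = "Connect to the database using the given path"
original_prompt = "Task: Analyse the customer data available in the database ..."

new_inputs = {
    "previous_steps": step_container,
    "current_step": step,
    "input": original_prompt,  # present in the dict, but unused by the default template
}
```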
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,400 | Add the ability to pass the prompt through to Executor Agents for enrichment during PlanAndExecute | ### Feature request
Add the ability to pass the original prompt through to the ExecutorAgent so that the original explicit context is not lost during a PlanAndExecute run.
### Motivation
PlanAndExecute agents can create a plan of steps that depends on context given in the original prompt. However, this context is lost after the plan is created and is being executed.
Often the plan is formed in a way that refers to that prior context, losing information. For example, I gave the following prompt, and gave the agent access only to the PythonREPL tool:
```py
prompt = (
f"Task: Analyse the customer data available in the database with path '{db_path}'. Tell me the average "
"sales by month."
)
```
In the above example, `db_path` is a fully formed string which can be passed directly to `sqlalchemy.create_engine`.
The first step in the plan formed was: `Connect to the database using the given path`. This would ordinarily be fine; however, the context of the "given path" was lost, as it was not part of the reformed prompt passed to the executor. Optionally including the original prompt in the template should assist with this.
### Your contribution
I will be submitting a PR shortly with a proposed solution :) | https://github.com/langchain-ai/langchain/issues/5400 | https://github.com/langchain-ai/langchain/pull/5401 | ae2cf1f598360e1fc83839fdcd363378d663c936 | 1f4abb265a9fd6c520835c3bebe8243b077495b5 | "2023-05-29T13:19:30Z" | python | "2023-06-03T21:59:09Z" | langchain/experimental/plan_and_execute/executors/agent_executor.py | from typing import List
from langchain.agents.agent import AgentExecutor
from langchain.agents.structured_chat.base import StructuredChatAgent
from langchain.base_language import BaseLanguageModel
from langchain.experimental.plan_and_execute.executors.base import ChainExecutor
from langchain.tools import BaseTool
HUMAN_MESSAGE_TEMPLATE = """Previous steps: {previous_steps}
Current objective: {current_step}
{agent_scratchpad}"""
def load_agent_executor(
llm: BaseLanguageModel, tools: List[BaseTool], verbose: bool = False
) -> ChainExecutor:
agent = StructuredChatAgent.from_llm_and_tools(
llm,
tools,
human_message_template=HUMAN_MESSAGE_TEMPLATE,
input_variables=["previous_steps", "current_step", "agent_scratchpad"],
)
agent_executor = AgentExecutor.from_agent_and_tools(
agent=agent, tools=tools, verbose=verbose
)
return ChainExecutor(chain=agent_executor) |
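A sketch of how `load_agent_executor` could opt into passing the task through; the `include_task_in_prompt` flag and `TASK_PREFIX` are assumptions about the eventual API, not confirmed by this chunk:

```python
TASK_PREFIX = """{objective}

"""


def load_agent_executor_with_task(
    llm: BaseLanguageModel,
    tools: List[BaseTool],
    verbose: bool = False,
    include_task_in_prompt: bool = False,  # hypothetical opt-in flag
) -> ChainExecutor:
    input_variables = ["previous_steps", "current_step", "agent_scratchpad"]
    template = HUMAN_MESSAGE_TEMPLATE
    if include_task_in_prompt:
        # callers would need to supply an "objective" input (e.g. the original prompt)
        input_variables.append("objective")
        template = TASK_PREFIX + template
    agent = StructuredChatAgent.from_llm_and_tools(
        llm,
        tools,
        human_message_template=template,
        input_variables=input_variables,
    )
    agent_executor = AgentExecutor.from_agent_and_tools(
        agent=agent, tools=tools, verbose=verbose
    )
    return ChainExecutor(chain=agent_executor)
```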
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,433 | FinalStreamingStdOutCallbackHandler not working with ChatOpenAI LLM | ### System Info
Hi :)
I tested the new callback stream handler `FinalStreamingStdOutCallbackHandler` and noticed an issue with it.
I copied the code from the documentation and made just one change: use `ChatOpenAI` instead of `OpenAI`.
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`llm = ChatOpenAI(streaming=True, callbacks=[FinalStreamingStdOutCallbackHandler()], temperature=0)` here is my only change
`tools = load_tools(["wikipedia", "llm-math"], llm=llm)`
`agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False)`
`agent.run("It's 2023 now. How many years ago did Konrad Adenauer become Chancellor of Germany.")`
### Expected behavior
The code above returns the response from the agent but does not stream it. In my project, I must use the `ChatOpenAI` LLM, so I would appreciate it if someone could fix this issue, please. | https://github.com/langchain-ai/langchain/issues/5433 | https://github.com/langchain-ai/langchain/pull/5497 | 1f4abb265a9fd6c520835c3bebe8243b077495b5 | 44ad9628c9828e220540dd77680611741a6ed087 | "2023-05-30T10:51:06Z" | python | "2023-06-03T22:05:58Z" | langchain/callbacks/streaming_stdout_final_only.py | """Callback Handler streams to stdout on new llm token."""
import sys
from typing import Any, Dict, List, Optional
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
DEFAULT_ANSWER_PREFIX_TOKENS = ["\nFinal", " Answer", ":"]
class FinalStreamingStdOutCallbackHandler(StreamingStdOutCallbackHandler):
"""Callback handler for streaming in agents.
Only works with agents using LLMs that support streaming.
Only the final output of the agent will be streamed.
"""
def __init__(self, answer_prefix_tokens: Optional[List[str]] = None) -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,433 | FinalStreamingStdOutCallbackHandler not working with ChatOpenAI LLM | ### System Info
Hi :)
I tested the new callback stream handler `FinalStreamingStdOutCallbackHandler` and noticed an issue with it.
I copied the code from the documentation and made just one change: use `ChatOpenAI` instead of `OpenAI`.
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`llm = ChatOpenAI(streaming=True, callbacks=[FinalStreamingStdOutCallbackHandler()], temperature=0)` here is my only change
`tools = load_tools(["wikipedia", "llm-math"], llm=llm)`
`agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False)`
`agent.run("It's 2023 now. How many years ago did Konrad Adenauer become Chancellor of Germany.")`
### Expected behavior
The code above returns the response from the agent but does not stream it. In my project, I must use the `ChatOpenAI` LLM, so I would appreciate it if someone could fix this issue, please. | https://github.com/langchain-ai/langchain/issues/5433 | https://github.com/langchain-ai/langchain/pull/5497 | 1f4abb265a9fd6c520835c3bebe8243b077495b5 | 44ad9628c9828e220540dd77680611741a6ed087 | "2023-05-30T10:51:06Z" | python | "2023-06-03T22:05:58Z" | langchain/callbacks/streaming_stdout_final_only.py | super().__init__()
if answer_prefix_tokens is None:
answer_prefix_tokens = DEFAULT_ANSWER_PREFIX_TOKENS
self.answer_prefix_tokens = answer_prefix_tokens
self.last_tokens = [""] * len(answer_prefix_tokens)
self.answer_reached = False
def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> None:
"""Run when LLM starts running."""
self.answer_reached = False
def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
"""Run on new LLM token. Only available when streaming is enabled."""
self.last_tokens.append(token)
if len(self.last_tokens) > len(self.answer_prefix_tokens):
self.last_tokens.pop(0)
if self.last_tokens == self.answer_prefix_tokens:
self.answer_reached = True
return
if self.answer_reached:
sys.stdout.write(token)
sys.stdout.flush() |
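Relevant to this report: the handler above matches the literal token sequence `["\nFinal", " Answer", ":"]`, and a chat model may stream the same text split differently (for example `"Final"` with no leading newline), in which case `answer_reached` never flips to True. A workaround sketch; the exact token split is an assumption that should be checked against the model's raw stream:

```python
from langchain.callbacks.streaming_stdout_final_only import (
    FinalStreamingStdOutCallbackHandler,
)
from langchain.chat_models import ChatOpenAI

# assumption: the chat model emits "Final", " Answer", ":" without a leading newline
handler = FinalStreamingStdOutCallbackHandler(
    answer_prefix_tokens=["Final", " Answer", ":"]
)
llm = ChatOpenAI(streaming=True, callbacks=[handler], temperature=0)
```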
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,545 | Issue: Improve Error Messaging When APOC Procedures Fail in Neo4jGraph | ### Issue you'd like to raise.
In the current implementation, when an APOC procedure fails, a generic error message is raised stating: "Could not use APOC procedures. Please install the APOC plugin in Neo4j." This message can lead to user confusion as it suggests the APOC plugin is not installed when in reality it may be installed but not correctly configured or permitted to run certain procedures.
This issue is encountered specifically when the refresh_schema function calls apoc.meta.data(). The function apoc.meta.data() isn't allowed to run under default configurations in the Neo4j database, thus leading to the mentioned error message.
Here is the code snippet where the issue arises:
```
# Set schema
try:
self.refresh_schema()
except neo4j.exceptions.ClientError:
raise ValueError(
"Could not use APOC procedures. "
"Please install the APOC plugin in Neo4j."
)
```
### Suggestion:
To improve the user experience, I propose that the error message should be made more specific. Instead of merely advising users to install the APOC plugin, it would be beneficial to indicate that certain procedures may not be configured or whitelisted to run by default and to guide the users to check their configurations.
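A sketch of what a more specific message could look like, mirroring the snippet above; the wording and the configuration hint are suggestions, not the merged fix:

```python
try:
    self.refresh_schema()
except neo4j.exceptions.ClientError:
    raise ValueError(
        "Could not use APOC procedures. "
        "Please ensure the APOC plugin is installed in Neo4j and that "
        "'apoc.meta.data()' is allowed to run, e.g. via the "
        "dbms.security.procedures allowlist configuration."
    )
```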
I believe this will save users time when troubleshooting and will reduce the potential for confusion. | https://github.com/langchain-ai/langchain/issues/5545 | https://github.com/langchain-ai/langchain/pull/5547 | 33ea606f455f195d74f09ac654e03da8850ecb9b | 3e45b8306555a48b5838ed7dd33b1a4c615bdd18 | "2023-06-01T08:04:16Z" | python | "2023-06-03T23:56:39Z" | langchain/graphs/neo4j_graph.py | from typing import Any, Dict, List
node_properties_query = """
CALL apoc.meta.data()
YIELD label, other, elementType, type, property
WHERE NOT type = "RELATIONSHIP" AND elementType = "node"
WITH label AS nodeLabels, collect({property:property, type:type}) AS properties
RETURN {labels: nodeLabels, properties: properties} AS output
"""
rel_properties_query = """
CALL apoc.meta.data()
YIELD label, other, elementType, type, property
WHERE NOT type = "RELATIONSHIP" AND elementType = "relationship"
WITH label AS nodeLabels, collect({property:property, type:type}) AS properties
RETURN {type: nodeLabels, properties: properties} AS output
"""
rel_query = """
CALL apoc.meta.data()
YIELD label, other, elementType, type, property
WHERE type = "RELATIONSHIP" AND elementType = "node"
RETURN "(:" + label + ")-[:" + property + "]->(:" + toString(other[0]) + ")" AS output
"""
class Neo4jGraph:
"""Neo4j wrapper for graph operations."""
def __init__(
self, url: str, username: str, password: str, database: str = "neo4j"
) -> None:
"""Create a new Neo4j graph wrapper instance."""
try:
import neo4j
except ImportError: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,545 | Issue: Improve Error Messaging When APOC Procedures Fail in Neo4jGraph | ### Issue you'd like to raise.
In the current implementation, when an APOC procedure fails, a generic error message is raised stating: "Could not use APOC procedures. Please install the APOC plugin in Neo4j." This message can lead to user confusion as it suggests the APOC plugin is not installed when in reality it may be installed but not correctly configured or permitted to run certain procedures.
This issue is encountered specifically when the refresh_schema function calls apoc.meta.data(). The function apoc.meta.data() isn't allowed to run under default configurations in the Neo4j database, thus leading to the mentioned error message.
Here is the code snippet where the issue arises:
```
# Set schema
try:
self.refresh_schema()
except neo4j.exceptions.ClientError:
raise ValueError(
"Could not use APOC procedures. "
"Please install the APOC plugin in Neo4j."
)
```
### Suggestion:
To improve the user experience, I propose that the error message should be made more specific. Instead of merely advising users to install the APOC plugin, it would be beneficial to indicate that certain procedures may not be configured or whitelisted to run by default and to guide the users to check their configurations.
I believe this will save users time when troubleshooting and will reduce the potential for confusion. | https://github.com/langchain-ai/langchain/issues/5545 | https://github.com/langchain-ai/langchain/pull/5547 | 33ea606f455f195d74f09ac654e03da8850ecb9b | 3e45b8306555a48b5838ed7dd33b1a4c615bdd18 | "2023-06-01T08:04:16Z" | python | "2023-06-03T23:56:39Z" | langchain/graphs/neo4j_graph.py | raise ValueError(
"Could not import neo4j python package. "
"Please install it with `pip install neo4j`."
)
self._driver = neo4j.GraphDatabase.driver(url, auth=(username, password))
self._database = database
self.schema = ""
try:
self._driver.verify_connectivity()
except neo4j.exceptions.ServiceUnavailable:
raise ValueError(
"Could not connect to Neo4j database. "
"Please ensure that the url is correct"
)
except neo4j.exceptions.AuthError:
raise ValueError(
"Could not connect to Neo4j database. "
"Please ensure that the username and password are correct"
)
try:
self.refresh_schema()
except neo4j.exceptions.ClientError:
raise ValueError(
"Could not use APOC procedures. "
"Please install the APOC plugin in Neo4j."
)
@property
def get_schema(self) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,545 | Issue: Improve Error Messaging When APOC Procedures Fail in Neo4jGraph | ### Issue you'd like to raise.
In the current implementation, when an APOC procedure fails, a generic error message is raised stating: "Could not use APOC procedures. Please install the APOC plugin in Neo4j." This message can lead to user confusion as it suggests the APOC plugin is not installed when in reality it may be installed but not correctly configured or permitted to run certain procedures.
This issue is encountered specifically when the refresh_schema function calls apoc.meta.data(). The function apoc.meta.data() isn't allowed to run under default configurations in the Neo4j database, thus leading to the mentioned error message.
Here is the code snippet where the issue arises:
```
# Set schema
try:
self.refresh_schema()
except neo4j.exceptions.ClientError:
raise ValueError(
"Could not use APOC procedures. "
"Please install the APOC plugin in Neo4j."
)
```
### Suggestion:
To improve the user experience, I propose that the error message should be made more specific. Instead of merely advising users to install the APOC plugin, it would be beneficial to indicate that certain procedures may not be configured or whitelisted to run by default and to guide the users to check their configurations.
I believe this will save users time when troubleshooting and will reduce the potential for confusion. | https://github.com/langchain-ai/langchain/issues/5545 | https://github.com/langchain-ai/langchain/pull/5547 | 33ea606f455f195d74f09ac654e03da8850ecb9b | 3e45b8306555a48b5838ed7dd33b1a4c615bdd18 | "2023-06-01T08:04:16Z" | python | "2023-06-03T23:56:39Z" | langchain/graphs/neo4j_graph.py | """Returns the schema of the Neo4j database"""
return self.schema
def query(self, query: str, params: dict = {}) -> List[Dict[str, Any]]:
"""Query Neo4j database."""
from neo4j.exceptions import CypherSyntaxError
with self._driver.session(database=self._database) as session:
try:
data = session.run(query, params)
return [r.data() for r in data][:50]
except CypherSyntaxError as e:
raise ValueError("Generated Cypher Statement is not valid\n" f"{e}")
def refresh_schema(self) -> None:
"""
Refreshes the Neo4j graph schema information.
"""
node_properties = self.query(node_properties_query)
relationships_properties = self.query(rel_properties_query)
relationships = self.query(rel_query)
self.schema = f"""
Node properties are the following:
{[el['output'] for el in node_properties]}
Relationship properties are the following:
{[el['output'] for el in relationships_properties]}
The relationships are the following:
{[el['output'] for el in relationships]}
""" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,651 | AttributeError: 'LLModel' object has no attribute 'model_type' (gpt4all) | ### System Info
Run on a docker image with python:3.11.3-bullseye on a Mac M1.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My docker image
```
FROM python:3.11.3-bullseye
WORKDIR /src
COPY src /src
RUN python -m pip install --upgrade pip
RUN apt-get update -y
RUN apt install cmake -y
RUN git clone --recurse-submodules https://github.com/nomic-ai/gpt4all
RUN cd gpt4all/gpt4all-backend/ && mkdir build && cd build && cmake .. && cmake --build . --parallel
RUN cd gpt4all/gpt4all-bindings/python && pip3 install -e .
RUN pip install -r requirements.txt
RUN chmod +x app/start_app.sh
EXPOSE 8501
ENTRYPOINT ["/bin/bash"]
CMD ["app/start_app.sh"]
```
where start_app.sh runs a Python file that has this line
`llm = GPT4All(model=llm_path, backend='gptj', verbose=True, streaming=True, n_threads=os.cpu_count(),temp=temp)`
llm_path is the path of the gpt4all model.
### Expected behavior
Got this error when trying to use gpt4all:
```
AttributeError: 'LLModel' object has no attribute 'model_type'
Traceback:
File "/src/app/utils.py", line 20, in get_chain
llm = GPT4All(model=llm_path, backend='gptj', verbose=True, streaming=True, n_threads=os.cpu_count(),temp=temp)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 1102, in validate_model
values = validator(cls_, values)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/llms/gpt4all.py", line 156, in validate_environment
values["backend"] = values["client"].model.model_type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``` | https://github.com/langchain-ai/langchain/issues/5651 | https://github.com/langchain-ai/langchain/pull/5657 | 6a3ceaa3771a725046af3c02cf4c15a3e18ec54a | 8fea0529c1be9c9f5308a9b5a51f8381067a269a | "2023-06-03T10:37:42Z" | python | "2023-06-04T14:21:16Z" | langchain/llms/gpt4all.py | """Wrapper for the GPT4All model."""
from functools import partial
from typing import Any, Dict, List, Mapping, Optional, Set
from pydantic import Extra, Field, root_validator
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
from langchain.llms.utils import enforce_stop_tokens
class GPT4All(LLM): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,651 | AttributeError: 'LLModel' object has no attribute 'model_type' (gpt4all) | ### System Info
Run on a docker image with python:3.11.3-bullseye on a Mac M1.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My docker image
```
FROM python:3.11.3-bullseye
WORKDIR /src
COPY src /src
RUN python -m pip install --upgrade pip
RUN apt-get update -y
RUN apt install cmake -y
RUN git clone --recurse-submodules https://github.com/nomic-ai/gpt4all
RUN cd gpt4all/gpt4all-backend/ && mkdir build && cd build && cmake .. && cmake --build . --parallel
RUN cd gpt4all/gpt4all-bindings/python && pip3 install -e .
RUN pip install -r requirements.txt
RUN chmod +x app/start_app.sh
EXPOSE 8501
ENTRYPOINT ["/bin/bash"]
CMD ["app/start_app.sh"]
```
where start_app.sh runs a Python file that has this line
`llm = GPT4All(model=llm_path, backend='gptj', verbose=True, streaming=True, n_threads=os.cpu_count(),temp=temp)`
llm_path is the path of the gpt4all model.
### Expected behavior
Got this error when trying to use gpt4all:
```
AttributeError: 'LLModel' object has no attribute 'model_type'
Traceback:
File "/src/app/utils.py", line 20, in get_chain
llm = GPT4All(model=llm_path, backend='gptj', verbose=True, streaming=True, n_threads=os.cpu_count(),temp=temp)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 1102, in validate_model
values = validator(cls_, values)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/llms/gpt4all.py", line 156, in validate_environment
values["backend"] = values["client"].model.model_type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``` | https://github.com/langchain-ai/langchain/issues/5651 | https://github.com/langchain-ai/langchain/pull/5657 | 6a3ceaa3771a725046af3c02cf4c15a3e18ec54a | 8fea0529c1be9c9f5308a9b5a51f8381067a269a | "2023-06-03T10:37:42Z" | python | "2023-06-04T14:21:16Z" | langchain/llms/gpt4all.py | r"""Wrapper around GPT4All language models.
To use, you should have the ``gpt4all`` python package installed, the
pre-trained model file, and the model's config information.
Example:
.. code-block:: python
from langchain.llms import GPT4All
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)
# Simplest invocation
response = model("Once upon a time, ")
"""
model: str
"""Path to the pre-trained GPT4All model file."""
backend: Optional[str] = Field(None, alias="backend")
n_ctx: int = Field(512, alias="n_ctx")
"""Token context window."""
n_parts: int = Field(-1, alias="n_parts")
"""Number of parts to split the model into.
If -1, the number of parts is automatically determined."""
seed: int = Field(0, alias="seed")
"""Seed. If -1, a random seed is used."""
f16_kv: bool = Field(False, alias="f16_kv")
"""Use half-precision for key/value cache."""
logits_all: bool = Field(False, alias="logits_all")
"""Return logits for all tokens, not just the last token."""
vocab_only: bool = Field(False, alias="vocab_only")
"""Only load the vocabulary, no weights."""
use_mlock: bool = Field(False, alias="use_mlock")
"""Force system to keep model in RAM."""
embedding: bool = Field(False, alias="embedding") |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,651 | AttributeError: 'LLModel' object has no attribute 'model_type' (gpt4all) | ### System Info
Run on a docker image with python:3.11.3-bullseye on a Mac M1.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My docker image
```
FROM python:3.11.3-bullseye
WORKDIR /src
COPY src /src
RUN python -m pip install --upgrade pip
RUN apt-get update -y
RUN apt install cmake -y
RUN git clone --recurse-submodules https://github.com/nomic-ai/gpt4all
RUN cd gpt4all/gpt4all-backend/ && mkdir build && cd build && cmake .. && cmake --build . --parallel
RUN cd gpt4all/gpt4all-bindings/python && pip3 install -e .
RUN pip install -r requirements.txt
RUN chmod +x app/start_app.sh
EXPOSE 8501
ENTRYPOINT ["/bin/bash"]
CMD ["app/start_app.sh"]
```
where start_app.sh runs a Python file that has this line
`llm = GPT4All(model=llm_path, backend='gptj', verbose=True, streaming=True, n_threads=os.cpu_count(),temp=temp)`
llm_path is the path of the gpt4all model.
### Expected behavior
Got this error when trying to use gpt4all:
```
AttributeError: 'LLModel' object has no attribute 'model_type'
Traceback:
File "/src/app/utils.py", line 20, in get_chain
llm = GPT4All(model=llm_path, backend='gptj', verbose=True, streaming=True, n_threads=os.cpu_count(),temp=temp)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 1102, in validate_model
values = validator(cls_, values)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/llms/gpt4all.py", line 156, in validate_environment
values["backend"] = values["client"].model.model_type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``` | https://github.com/langchain-ai/langchain/issues/5651 | https://github.com/langchain-ai/langchain/pull/5657 | 6a3ceaa3771a725046af3c02cf4c15a3e18ec54a | 8fea0529c1be9c9f5308a9b5a51f8381067a269a | "2023-06-03T10:37:42Z" | python | "2023-06-04T14:21:16Z" | langchain/llms/gpt4all.py | """Use embedding mode only."""
n_threads: Optional[int] = Field(4, alias="n_threads")
"""Number of threads to use."""
n_predict: Optional[int] = 256
"""The maximum number of tokens to generate."""
temp: Optional[float] = 0.8
"""The temperature to use for sampling."""
top_p: Optional[float] = 0.95
"""The top-p value to use for sampling."""
top_k: Optional[int] = 40
"""The top-k value to use for sampling."""
echo: Optional[bool] = False
"""Whether to echo the prompt."""
stop: Optional[List[str]] = []
"""A list of strings to stop generation when encountered."""
repeat_last_n: Optional[int] = 64
"Last n tokens to penalize"
repeat_penalty: Optional[float] = 1.3
"""The penalty to apply to repeated tokens."""
n_batch: int = Field(1, alias="n_batch")
"""Batch size for prompt processing."""
streaming: bool = False
"""Whether to stream the results or not."""
context_erase: float = 0.5
"""Leave (n_ctx * context_erase) tokens
starting from beginning if the context has run out."""
allow_download: bool = False
"""If model does not exist in ~/.cache/gpt4all/, download it."""
client: Any = None
class Config: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,651 | AttributeError: 'LLModel' object has no attribute 'model_type' (gpt4all) | ### System Info
Run on a docker image with python:3.11.3-bullseye on a Mac M1.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My docker image
```
FROM python:3.11.3-bullseye
WORKDIR /src
COPY src /src
RUN python -m pip install --upgrade pip
RUN apt-get update -y
RUN apt install cmake -y
RUN git clone --recurse-submodules https://github.com/nomic-ai/gpt4all
RUN cd gpt4all/gpt4all-backend/ && mkdir build && cd build && cmake .. && cmake --build . --parallel
RUN cd gpt4all/gpt4all-bindings/python && pip3 install -e .
RUN pip install -r requirements.txt
RUN chmod +x app/start_app.sh
EXPOSE 8501
ENTRYPOINT ["/bin/bash"]
CMD ["app/start_app.sh"]
```
where start_app.sh runs a Python file that has this line
`llm = GPT4All(model=llm_path, backend='gptj', verbose=True, streaming=True, n_threads=os.cpu_count(),temp=temp)`
llm_path is the path of the gpt4all model.
### Expected behavior
Got this error when trying to use gpt4all:
```
AttributeError: 'LLModel' object has no attribute 'model_type'
Traceback:
File "/src/app/utils.py", line 20, in get_chain
llm = GPT4All(model=llm_path, backend='gptj', verbose=True, streaming=True, n_threads=os.cpu_count(),temp=temp)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 1102, in validate_model
values = validator(cls_, values)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/llms/gpt4all.py", line 156, in validate_environment
values["backend"] = values["client"].model.model_type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``` | https://github.com/langchain-ai/langchain/issues/5651 | https://github.com/langchain-ai/langchain/pull/5657 | 6a3ceaa3771a725046af3c02cf4c15a3e18ec54a | 8fea0529c1be9c9f5308a9b5a51f8381067a269a | "2023-06-03T10:37:42Z" | python | "2023-06-04T14:21:16Z" | langchain/llms/gpt4all.py | """Configuration for this pydantic object."""
extra = Extra.forbid
@staticmethod
def _model_param_names() -> Set[str]:
return {
"n_ctx",
"n_predict",
"top_k",
"top_p",
"temp",
"n_batch",
"repeat_penalty",
"repeat_last_n",
"context_erase",
}
def _default_params(self) -> Dict[str, Any]:
return {
"n_ctx": self.n_ctx,
"n_predict": self.n_predict,
"top_k": self.top_k,
"top_p": self.top_p,
"temp": self.temp,
"n_batch": self.n_batch,
"repeat_penalty": self.repeat_penalty,
"repeat_last_n": self.repeat_last_n,
"context_erase": self.context_erase,
}
@root_validator()
def validate_environment(cls, values: Dict) -> Dict: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,651 | AttributeError: 'LLModel' object has no attribute 'model_type' (gpt4all) | ### System Info
Run on a docker image with python:3.11.3-bullseye on a Mac M1.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My docker image
```
FROM python:3.11.3-bullseye
WORKDIR /src
COPY src /src
RUN python -m pip install --upgrade pip
RUN apt-get update -y
RUN apt install cmake -y
RUN git clone --recurse-submodules https://github.com/nomic-ai/gpt4all
RUN cd gpt4all/gpt4all-backend/ && mkdir build && cd build && cmake .. && cmake --build . --parallel
RUN cd gpt4all/gpt4all-bindings/python && pip3 install -e .
RUN pip install -r requirements.txt
RUN chmod +x app/start_app.sh
EXPOSE 8501
ENTRYPOINT ["/bin/bash"]
CMD ["app/start_app.sh"]
```
where start_app.sh runs a Python file that has this line
`llm = GPT4All(model=llm_path, backend='gptj', verbose=True, streaming=True, n_threads=os.cpu_count(),temp=temp)`
llm_path is the path of the gpt4all model.
### Expected behavior
Got this error when trying to use gpt4all:
```
AttributeError: 'LLModel' object has no attribute 'model_type'
Traceback:
File "/src/app/utils.py", line 20, in get_chain
llm = GPT4All(model=llm_path, backend='gptj', verbose=True, streaming=True, n_threads=os.cpu_count(),temp=temp)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 1102, in validate_model
values = validator(cls_, values)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/llms/gpt4all.py", line 156, in validate_environment
values["backend"] = values["client"].model.model_type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``` | https://github.com/langchain-ai/langchain/issues/5651 | https://github.com/langchain-ai/langchain/pull/5657 | 6a3ceaa3771a725046af3c02cf4c15a3e18ec54a | 8fea0529c1be9c9f5308a9b5a51f8381067a269a | "2023-06-03T10:37:42Z" | python | "2023-06-04T14:21:16Z" | langchain/llms/gpt4all.py | """Validate that the python package exists in the environment."""
try:
from gpt4all import GPT4All as GPT4AllModel
except ImportError:
raise ImportError(
"Could not import gpt4all python package. "
"Please install it with `pip install gpt4all`."
)
full_path = values["model"]
model_path, delimiter, model_name = full_path.rpartition("/")
model_path += delimiter
values["client"] = GPT4AllModel(
model_name,
model_path=model_path or None,
model_type=values["backend"],
allow_download=values["allow_download"],
)
if values["n_threads"] is not None:
values["client"].model.set_thread_count(values["n_threads"])
values["backend"] = values["client"].model.model_type
return values
@property
def _identifying_params(self) -> Mapping[str, Any]: |
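Against the traceback in this report, where `values["client"].model` lacks `model_type`, a defensive variant of the last lines of `validate_environment` could look like this; whether newer bindings expose the attribute at all is an assumption to verify:

```python
if values["n_threads"] is not None:
    values["client"].model.set_thread_count(values["n_threads"])
# fall back to the user-supplied backend if the bindings do not expose it
values["backend"] = getattr(values["client"].model, "model_type", values["backend"])
return values
```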
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,651 | AttributeError: 'LLModel' object has no attribute 'model_type' (gpt4all) | ### System Info
Run on a docker image with python:3.11.3-bullseye on a Mac M1.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My docker image
```
FROM python:3.11.3-bullseye
WORKDIR /src
COPY src /src
RUN python -m pip install --upgrade pip
RUN apt-get update -y
RUN apt install cmake -y
RUN git clone --recurse-submodules https://github.com/nomic-ai/gpt4all
RUN cd gpt4all/gpt4all-backend/ && mkdir build && cd build && cmake .. && cmake --build . --parallel
RUN cd gpt4all/gpt4all-bindings/python && pip3 install -e .
RUN pip install -r requirements.txt
RUN chmod +x app/start_app.sh
EXPOSE 8501
ENTRYPOINT ["/bin/bash"]
CMD ["app/start_app.sh"]
```
where start_app.sh runs a Python file that has this line
`llm = GPT4All(model=llm_path, backend='gptj', verbose=True, streaming=True, n_threads=os.cpu_count(),temp=temp)`
llm_path is the path of the gpt4all model.
### Expected behavior
Got this error when trying to use gpt4all:
```
AttributeError: 'LLModel' object has no attribute 'model_type'
Traceback:
File "/src/app/utils.py", line 20, in get_chain
llm = GPT4All(model=llm_path, backend='gptj', verbose=True, streaming=True, n_threads=os.cpu_count(),temp=temp)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 1102, in validate_model
values = validator(cls_, values)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/llms/gpt4all.py", line 156, in validate_environment
values["backend"] = values["client"].model.model_type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``` | https://github.com/langchain-ai/langchain/issues/5651 | https://github.com/langchain-ai/langchain/pull/5657 | 6a3ceaa3771a725046af3c02cf4c15a3e18ec54a | 8fea0529c1be9c9f5308a9b5a51f8381067a269a | "2023-06-03T10:37:42Z" | python | "2023-06-04T14:21:16Z" | langchain/llms/gpt4all.py | """Get the identifying parameters."""
return {
"model": self.model,
**self._default_params(),
**{
k: v for k, v in self.__dict__.items() if k in self._model_param_names()
},
}
@property
def _llm_type(self) -> str:
"""Return the type of llm."""
return "gpt4all"
def _call( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,651 | AttributeError: 'LLModel' object has no attribute 'model_type' (gpt4all) | ### System Info
Run on a docker image with python:3.11.3-bullseye on a Mac M1.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My docker image
```
FROM python:3.11.3-bullseye
WORKDIR /src
COPY src /src
RUN python -m pip install --upgrade pip
RUN apt-get update -y
RUN apt install cmake -y
RUN git clone --recurse-submodules https://github.com/nomic-ai/gpt4all
RUN cd gpt4all/gpt4all-backend/ && mkdir build && cd build && cmake .. && cmake --build . --parallel
RUN cd gpt4all/gpt4all-bindings/python && pip3 install -e .
RUN pip install -r requirements.txt
RUN chmod +x app/start_app.sh
EXPOSE 8501
ENTRYPOINT ["/bin/bash"]
CMD ["app/start_app.sh"]
```
where start_app.sh runs a Python file that has this line
`llm = GPT4All(model=llm_path, backend='gptj', verbose=True, streaming=True, n_threads=os.cpu_count(),temp=temp)`
llm_path is the path of the gpt4all model.
### Expected behavior
Got this error when trying to use gpt4all:
```
AttributeError: 'LLModel' object has no attribute 'model_type'
Traceback:
File "/src/app/utils.py", line 20, in get_chain
llm = GPT4All(model=llm_path, backend='gptj', verbose=True, streaming=True, n_threads=os.cpu_count(),temp=temp)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 1102, in validate_model
values = validator(cls_, values)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/llms/gpt4all.py", line 156, in validate_environment
values["backend"] = values["client"].model.model_type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``` | https://github.com/langchain-ai/langchain/issues/5651 | https://github.com/langchain-ai/langchain/pull/5657 | 6a3ceaa3771a725046af3c02cf4c15a3e18ec54a | 8fea0529c1be9c9f5308a9b5a51f8381067a269a | "2023-06-03T10:37:42Z" | python | "2023-06-04T14:21:16Z" | langchain/llms/gpt4all.py | self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
) -> str:
r"""Call out to GPT4All's generate method.
Args:
prompt: The prompt to pass into the model.
stop: A list of strings to stop generation when encountered.
Returns:
The string generated by the model.
Example:
.. code-block:: python
prompt = "Once upon a time, "
response = model(prompt, n_predict=55)
"""
text_callback = None
if run_manager:
text_callback = partial(run_manager.on_llm_new_token, verbose=self.verbose)
text = ""
for token in self.client.generate(prompt, **self._default_params()):
if text_callback:
text_callback(token)
text += token
if stop is not None:
text = enforce_stop_tokens(text, stop)
return text |
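A quick check to confirm whether the locally built bindings expose the attribute at all; the model file name below is a placeholder:

```python
from gpt4all import GPT4All as GPT4AllModel

model = GPT4AllModel("ggml-gpt4all-j-v1.3-groovy.bin")
print(hasattr(model.model, "model_type"))  # False would explain the AttributeError
```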
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,601 | OutputParsers currently allows model to hallucinate the output of an action | ### System Info
The MRKL and chat output parsers currently will allow an LLM response to generate a valid action, as well as hallucinate a "final answer" based on that response.
[Logic](https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/output_parser.py#L15)
This is because the parser is returning an AgentFinish object immediately if `FINAL_ANSWER_ACTION` is in the text, rather than checking if the text also includes a valid action. I had this appear when using the Python agent, where the LLM returned a code block as the action, but simultaneously hallucinated the output and a final answer in one response. (In this case, it was quite obvious because the code block referred to a database which does not exist)
I'm not sure if there are any situations where it is desired that a response should output an action as well as an answer?
If this is not desired behaviour, it can easily be fixed by raising an exception if a response includes both a valid action and a "final answer", rather than returning immediately from either condition.
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
````py
from langchain.agents.chat.output_parser import ChatOutputParser
parser = ChatOutputParser()
valid_action = """Action:
```
{
"action": "Python REPL",
"action_input": "print(\'Hello world!\')"
}
```
"""
final_answer = """Final Answer: Goodbye world!"""
print(parser.parse(valid_action)) # outputs an AgentAction
print(parser.parse(final_answer)) # outputs an AgentFinish
print(parser.parse(valid_action + final_answer)) # outputs an AgentFinish, should probably raise an Exception
````
### Expected behavior
An exception should likely be raised if an LLM returns a response that both includes a final answer, and a parse-able action, rather than skipping the action and returning the final answer, since it probably hallucinated an output/observation from the action. | https://github.com/langchain-ai/langchain/issues/5601 | https://github.com/langchain-ai/langchain/pull/5609 | c112d7334d6cac3296b877250d3f575fbfd46da2 | 26ec845921425d99f222b6d21bd58eda36b2f49b | "2023-06-02T08:01:50Z" | python | "2023-06-04T21:40:49Z" | langchain/agents/chat/output_parser.py | import json
from typing import Union
from langchain.agents.agent import AgentOutputParser
from langchain.agents.chat.prompt import FORMAT_INSTRUCTIONS
from langchain.schema import AgentAction, AgentFinish, OutputParserException
FINAL_ANSWER_ACTION = "Final Answer:"
class ChatOutputParser(AgentOutputParser):
def get_format_instructions(self) -> str:
return FORMAT_INSTRUCTIONS
def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
if FINAL_ANSWER_ACTION in text:
return AgentFinish(
{"output": text.split(FINAL_ANSWER_ACTION)[-1].strip()}, text
)
try:
action = text.split("```")[1]
response = json.loads(action.strip())
return AgentAction(response["action"], response["action_input"], text)
except Exception:
raise OutputParserException(f"Could not parse LLM output: {text}")
@property
def _type(self) -> str:
return "chat" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,601 | OutputParsers currently allows model to hallucinate the output of an action | ### System Info
The MRKL and chat output parsers currently will allow an LLM response to generate a valid action, as well as hallucinate a "final answer" based on that response.
[Logic](https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/output_parser.py#L15)
This is because the parser is returning an AgentFinish object immediately if `FINAL_ANSWER_ACTION` is in the text, rather than checking if the text also includes a valid action. I had this appear when using the Python agent, where the LLM returned a code block as the action, but simultaneously hallucinated the output and a final answer in one response. (In this case, it was quite obvious because the code block referred to a database which does not exist)
I'm not sure if there are any situations where it is desired that a response should output an action as well as an answer?
If this is not desired behaviour, it can easily be fixed by raising an exception if a response includes both a valid action and a "final answer", rather than returning immediately from either condition.
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
````py
from langchain.agents.chat.output_parser import ChatOutputParser
parser = ChatOutputParser()
valid_action = """Action:
```
{
"action": "Python REPL",
"action_input": "print(\'Hello world!\')"
}
```
"""
final_answer = """Final Answer: Goodbye world!"""
print(parser.parse(valid_action)) # outputs an AgentAction
print(parser.parse(final_answer)) # outputs an AgentFinish
print(parser.parse(valid_action + final_answer)) # outputs an AgentFinish, should probably raise an Exception
````
### Expected behavior
An exception should likely be raised if an LLM returns a response that both includes a final answer, and a parse-able action, rather than skipping the action and returning the final answer, since it probably hallucinated an output/observation from the action. | https://github.com/langchain-ai/langchain/issues/5601 | https://github.com/langchain-ai/langchain/pull/5609 | c112d7334d6cac3296b877250d3f575fbfd46da2 | 26ec845921425d99f222b6d21bd58eda36b2f49b | "2023-06-02T08:01:50Z" | python | "2023-06-04T21:40:49Z" | langchain/agents/mrkl/output_parser.py | import re
from typing import Union
from langchain.agents.agent import AgentOutputParser
from langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS
from langchain.schema import AgentAction, AgentFinish, OutputParserException
FINAL_ANSWER_ACTION = "Final Answer:"
class MRKLOutputParser(AgentOutputParser):
def get_format_instructions(self) -> str:
return FORMAT_INSTRUCTIONS
def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
if FINAL_ANSWER_ACTION in text:
return AgentFinish(
{"output": text.split(FINAL_ANSWER_ACTION)[-1].strip()}, text
)
regex = (
r"Action\s*\d*\s*:[\s]*(.*?)[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
)
match = re.search(regex, text, re.DOTALL)
if not match: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,601 | OutputParsers currently allows model to hallucinate the output of an action | ### System Info
The MRKL and chat output parsers currently allow an LLM response to contain a valid action and, at the same time, hallucinate a "final answer" based on that response.
[Logic](https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/output_parser.py#L15)
This is because the parser is returning an AgentFinish object immediately if `FINAL_ANSWER_ACTION` is in the text, rather than checking if the text also includes a valid action. I had this appear when using the Python agent, where the LLM returned a code block as the action, but simultaneously hallucinated the output and a final answer in one response. (In this case, it was quite obvious because the code block referred to a database which does not exist)
I'm not sure whether there are any situations where it is desirable for a response to output an action as well as an answer.
If this is not desired behaviour, it can easily be fixed by raising an exception when a response includes both a valid action and a "final answer", rather than returning immediately from either condition.
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
````py
from langchain.agents.chat.output_parser import ChatOutputParser
parser = ChatOutputParser()
valid_action = """Action:
```
{
"action": "Python REPL",
"action_input": "print(\'Hello world!\')"
}
```"""
final_answer = """Final Answer: Goodbye world!"""
print(parser.parse(valid_action)) # outputs an AgentAction
print(parser.parse(final_answer)) # outputs an AgentFinish
print(parser.parse(valid_action + final_answer)) # outputs an AgentFinish, should probably raise an Exception
````
### Expected behavior
An exception should likely be raised if an LLM returns a response that both includes a final answer, and a parse-able action, rather than skipping the action and returning the final answer, since it probably hallucinated an output/observation from the action. | https://github.com/langchain-ai/langchain/issues/5601 | https://github.com/langchain-ai/langchain/pull/5609 | c112d7334d6cac3296b877250d3f575fbfd46da2 | 26ec845921425d99f222b6d21bd58eda36b2f49b | "2023-06-02T08:01:50Z" | python | "2023-06-04T21:40:49Z" | langchain/agents/mrkl/output_parser.py | if not re.search(r"Action\s*\d*\s*:[\s]*(.*?)", text, re.DOTALL):
raise OutputParserException(
f"Could not parse LLM output: `{text}`",
observation="Invalid Format: Missing 'Action:' after 'Thought:'",
llm_output=text,
send_to_llm=True,
)
elif not re.search(
r"[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)", text, re.DOTALL
):
raise OutputParserException(
f"Could not parse LLM output: `{text}`",
observation="Invalid Format:"
" Missing 'Action Input:' after 'Action:'",
llm_output=text,
send_to_llm=True,
)
else:
raise OutputParserException(f"Could not parse LLM output: `{text}`")
action = match.group(1).strip()
action_input = match.group(2)
tool_input = action_input.strip(" ")
if tool_input.startswith("SELECT ") is False:
tool_input = tool_input.strip('"')
return AgentAction(action, tool_input, text)
@property
def _type(self) -> str:
return "mrkl" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,601 | OutputParsers currently allows model to hallucinate the output of an action | ### System Info
The MRKL and chat output parsers currently allow an LLM response to contain a valid action and, at the same time, hallucinate a "final answer" based on that response.
[Logic](https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/output_parser.py#L15)
This is because the parser is returning an AgentFinish object immediately if `FINAL_ANSWER_ACTION` is in the text, rather than checking if the text also includes a valid action. I had this appear when using the Python agent, where the LLM returned a code block as the action, but simultaneously hallucinated the output and a final answer in one response. (In this case, it was quite obvious because the code block referred to a database which does not exist)
I'm not sure whether there are any situations where it is desirable for a response to output an action as well as an answer.
If this is not desired behaviour, it can easily be fixed by raising an exception when a response includes both a valid action and a "final answer", rather than returning immediately from either condition.
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
````py
from langchain.agents.chat.output_parser import ChatOutputParser
parser = ChatOutputParser()
valid_action = """Action:
```
{
"action": "Python REPL",
"action_input": "print(\'Hello world!\')"
}
```"""
final_answer = """Final Answer: Goodbye world!"""
print(parser.parse(valid_action)) # outputs an AgentAction
print(parser.parse(final_answer)) # outputs an AgentFinish
print(parser.parse(valid_action + final_answer)) # outputs an AgentFinish, should probably raise an Exception
````
### Expected behavior
An exception should likely be raised if an LLM returns a response that both includes a final answer, and a parse-able action, rather than skipping the action and returning the final answer, since it probably hallucinated an output/observation from the action. | https://github.com/langchain-ai/langchain/issues/5601 | https://github.com/langchain-ai/langchain/pull/5609 | c112d7334d6cac3296b877250d3f575fbfd46da2 | 26ec845921425d99f222b6d21bd58eda36b2f49b | "2023-06-02T08:01:50Z" | python | "2023-06-04T21:40:49Z" | tests/unit_tests/agents/test_mrkl.py | """Test MRKL functionality."""
from typing import Tuple
import pytest
from langchain.agents.mrkl.base import ZeroShotAgent
from langchain.agents.mrkl.output_parser import MRKLOutputParser
from langchain.agents.mrkl.prompt import FORMAT_INSTRUCTIONS, PREFIX, SUFFIX
from langchain.agents.tools import Tool
from langchain.prompts import PromptTemplate
from langchain.schema import AgentAction, OutputParserException
from tests.unit_tests.llms.fake_llm import FakeLLM
def get_action_and_input(text: str) -> Tuple[str, str]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,601 | OutputParsers currently allows model to hallucinate the output of an action | ### System Info
The MRKL and chat output parsers currently allow an LLM response to contain a valid action and, at the same time, hallucinate a "final answer" based on that response.
[Logic](https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/output_parser.py#L15)
This is because the parser is returning an AgentFinish object immediately if `FINAL_ANSWER_ACTION` is in the text, rather than checking if the text also includes a valid action. I had this appear when using the Python agent, where the LLM returned a code block as the action, but simultaneously hallucinated the output and a final answer in one response. (In this case, it was quite obvious because the code block referred to a database which does not exist)
I'm not sure whether there are any situations where it is desirable for a response to output an action as well as an answer.
If this is not desired behaviour, it can easily be fixed by raising an exception when a response includes both a valid action and a "final answer", rather than returning immediately from either condition.
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
````py
from langchain.agents.chat.output_parser import ChatOutputParser
parser = ChatOutputParser()
valid_action = """Action:
```
{
"action": "Python REPL",
"action_input": "print(\'Hello world!\')"
}
```"""
final_answer = """Final Answer: Goodbye world!"""
print(parser.parse(valid_action)) # outputs an AgentAction
print(parser.parse(final_answer)) # outputs an AgentFinish
print(parser.parse(valid_action + final_answer)) # outputs an AgentFinish, should probably raise an Exception
````
### Expected behavior
An exception should likely be raised if an LLM returns a response that both includes a final answer, and a parse-able action, rather than skipping the action and returning the final answer, since it probably hallucinated an output/observation from the action. | https://github.com/langchain-ai/langchain/issues/5601 | https://github.com/langchain-ai/langchain/pull/5609 | c112d7334d6cac3296b877250d3f575fbfd46da2 | 26ec845921425d99f222b6d21bd58eda36b2f49b | "2023-06-02T08:01:50Z" | python | "2023-06-04T21:40:49Z" | tests/unit_tests/agents/test_mrkl.py | output = MRKLOutputParser().parse(text)
if isinstance(output, AgentAction):
return output.tool, str(output.tool_input)
else:
return "Final Answer", output.return_values["output"]
def test_get_action_and_input() -> None:
"""Test getting an action from text."""
llm_output = (
"Thought: I need to search for NBA\n" "Action: Search\n" "Action Input: NBA"
)
action, action_input = get_action_and_input(llm_output)
assert action == "Search"
assert action_input == "NBA"
def test_get_action_and_input_whitespace() -> None:
"""Test getting an action from text."""
llm_output = "Thought: I need to search for NBA\nAction: Search \nAction Input: NBA"
action, action_input = get_action_and_input(llm_output)
assert action == "Search"
assert action_input == "NBA"
def test_get_action_and_input_newline() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,601 | OutputParsers currently allows model to hallucinate the output of an action | ### System Info
The MRKL and chat output parsers currently allow an LLM response to contain a valid action and, at the same time, hallucinate a "final answer" based on that response.
[Logic](https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/output_parser.py#L15)
This is because the parser is returning an AgentFinish object immediately if `FINAL_ANSWER_ACTION` is in the text, rather than checking if the text also includes a valid action. I had this appear when using the Python agent, where the LLM returned a code block as the action, but simultaneously hallucinated the output and a final answer in one response. (In this case, it was quite obvious because the code block referred to a database which does not exist)
I'm not sure whether there are any situations where it is desirable for a response to output an action as well as an answer.
If this is not desired behaviour, it can easily be fixed by raising an exception when a response includes both a valid action and a "final answer", rather than returning immediately from either condition.
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
````py
from langchain.agents.chat.output_parser import ChatOutputParser
parser = ChatOutputParser()
valid_action = """Action:
```
{
"action": "Python REPL",
"action_input": "print(\'Hello world!\')"
}
```"""
final_answer = """Final Answer: Goodbye world!"""
print(parser.parse(valid_action)) # outputs an AgentAction
print(parser.parse(final_answer)) # outputs an AgentFinish
print(parser.parse(valid_action + final_answer)) # outputs an AgentFinish, should probably raise an Exception
````
### Expected behavior
An exception should likely be raised if an LLM returns a response that both includes a final answer, and a parse-able action, rather than skipping the action and returning the final answer, since it probably hallucinated an output/observation from the action. | https://github.com/langchain-ai/langchain/issues/5601 | https://github.com/langchain-ai/langchain/pull/5609 | c112d7334d6cac3296b877250d3f575fbfd46da2 | 26ec845921425d99f222b6d21bd58eda36b2f49b | "2023-06-02T08:01:50Z" | python | "2023-06-04T21:40:49Z" | tests/unit_tests/agents/test_mrkl.py | """Test getting an action from text where Action Input is a code snippet."""
llm_output = (
"Now I need to write a unittest for the function.\n\n"
"Action: Python\nAction Input:\n```\nimport unittest\n\nunittest.main()\n```"
)
action, action_input = get_action_and_input(llm_output)
assert action == "Python"
assert action_input == "```\nimport unittest\n\nunittest.main()\n```"
def test_get_action_and_input_newline_after_keyword() -> None:
"""Test getting an action and action input from the text
when there is a new line before the action
(after the keywords "Action:" and "Action Input:")
"""
llm_output = """
I can use the `ls` command to list the contents of the directory \
and `grep` to search for the specific file.
Action:
Terminal
Action Input:
ls -l ~/.bashrc.d/
"""
action, action_input = get_action_and_input(llm_output)
assert action == "Terminal"
assert action_input == "ls -l ~/.bashrc.d/\n"
def test_get_action_and_input_sql_query() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,601 | OutputParsers currently allows model to hallucinate the output of an action | ### System Info
The MRKL and chat output parsers currently allow an LLM response to contain a valid action and, at the same time, hallucinate a "final answer" based on that response.
[Logic](https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/output_parser.py#L15)
This is because the parser is returning an AgentFinish object immediately if `FINAL_ANSWER_ACTION` is in the text, rather than checking if the text also includes a valid action. I had this appear when using the Python agent, where the LLM returned a code block as the action, but simultaneously hallucinated the output and a final answer in one response. (In this case, it was quite obvious because the code block referred to a database which does not exist)
I'm not sure whether there are any situations where it is desirable for a response to output an action as well as an answer.
If this is not desired behaviour, it can easily be fixed by raising an exception when a response includes both a valid action and a "final answer", rather than returning immediately from either condition.
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
````py
from langchain.agents.chat.output_parser import ChatOutputParser
parser = ChatOutputParser()
valid_action = """Action:
```
{
"action": "Python REPL",
"action_input": "print(\'Hello world!\')"
}
```"""
final_answer = """Final Answer: Goodbye world!"""
print(parser.parse(valid_action)) # outputs an AgentAction
print(parser.parse(final_answer)) # outputs an AgentFinish
print(parser.parse(valid_action + final_answer)) # outputs an AgentFinish, should probably raise an Exception
````
### Expected behavior
An exception should likely be raised if an LLM returns a response that both includes a final answer, and a parse-able action, rather than skipping the action and returning the final answer, since it probably hallucinated an output/observation from the action. | https://github.com/langchain-ai/langchain/issues/5601 | https://github.com/langchain-ai/langchain/pull/5609 | c112d7334d6cac3296b877250d3f575fbfd46da2 | 26ec845921425d99f222b6d21bd58eda36b2f49b | "2023-06-02T08:01:50Z" | python | "2023-06-04T21:40:49Z" | tests/unit_tests/agents/test_mrkl.py | """Test getting the action and action input from the text
when the LLM output is a well formed SQL query
"""
llm_output = """
I should query for the largest single shift payment for every unique user.
Action: query_sql_db
Action Input: \
SELECT "UserName", MAX(totalpayment) FROM user_shifts GROUP BY "UserName" """
action, action_input = get_action_and_input(llm_output)
assert action == "query_sql_db"
assert (
action_input
== 'SELECT "UserName", MAX(totalpayment) FROM user_shifts GROUP BY "UserName"'
)
def test_get_final_answer() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,601 | OutputParsers currently allows model to hallucinate the output of an action | ### System Info
The MRKL and chat output parsers currently allow an LLM response to contain a valid action and, at the same time, hallucinate a "final answer" based on that response.
[Logic](https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/output_parser.py#L15)
This is because the parser is returning an AgentFinish object immediately if `FINAL_ANSWER_ACTION` is in the text, rather than checking if the text also includes a valid action. I had this appear when using the Python agent, where the LLM returned a code block as the action, but simultaneously hallucinated the output and a final answer in one response. (In this case, it was quite obvious because the code block referred to a database which does not exist)
I'm not sure whether there are any situations where it is desirable for a response to output an action as well as an answer.
If this is not desired behaviour, it can easily be fixed by raising an exception when a response includes both a valid action and a "final answer", rather than returning immediately from either condition.
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
````py
from langchain.agents.chat.output_parser import ChatOutputParser
parser = ChatOutputParser()
valid_action = """Action:
```
{
"action": "Python REPL",
"action_input": "print(\'Hello world!\')"
}
```"""
final_answer = """Final Answer: Goodbye world!"""
print(parser.parse(valid_action)) # outputs an AgentAction
print(parser.parse(final_answer)) # outputs an AgentFinish
print(parser.parse(valid_action + final_answer)) # outputs an AgentFinish, should probably raise an Exception
````
### Expected behavior
An exception should likely be raised if an LLM returns a response that both includes a final answer, and a parse-able action, rather than skipping the action and returning the final answer, since it probably hallucinated an output/observation from the action. | https://github.com/langchain-ai/langchain/issues/5601 | https://github.com/langchain-ai/langchain/pull/5609 | c112d7334d6cac3296b877250d3f575fbfd46da2 | 26ec845921425d99f222b6d21bd58eda36b2f49b | "2023-06-02T08:01:50Z" | python | "2023-06-04T21:40:49Z" | tests/unit_tests/agents/test_mrkl.py | """Test getting final answer."""
llm_output = (
"Thought: I need to search for NBA\n"
"Action: Search\n"
"Action Input: NBA\n"
"Observation: founded in 1994\n"
"Thought: I can now answer the question\n"
"Final Answer: 1994"
)
action, action_input = get_action_and_input(llm_output)
assert action == "Final Answer"
assert action_input == "1994"
def test_get_final_answer_new_line() -> None:
"""Test getting final answer."""
llm_output = (
"Thought: I need to search for NBA\n"
"Action: Search\n"
"Action Input: NBA\n"
"Observation: founded in 1994\n"
"Thought: I can now answer the question\n"
"Final Answer:\n1994"
)
action, action_input = get_action_and_input(llm_output)
assert action == "Final Answer"
assert action_input == "1994"
def test_get_final_answer_multiline() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,601 | OutputParsers currently allows model to hallucinate the output of an action | ### System Info
The MRKL and chat output parsers currently allow an LLM response to contain a valid action and, at the same time, hallucinate a "final answer" based on that response.
[Logic](https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/output_parser.py#L15)
This is because the parser is returning an AgentFinish object immediately if `FINAL_ANSWER_ACTION` is in the text, rather than checking if the text also includes a valid action. I had this appear when using the Python agent, where the LLM returned a code block as the action, but simultaneously hallucinated the output and a final answer in one response. (In this case, it was quite obvious because the code block referred to a database which does not exist)
I'm not sure whether there are any situations where it is desirable for a response to output an action as well as an answer.
If this is not desired behaviour, it can easily be fixed by raising an exception when a response includes both a valid action and a "final answer", rather than returning immediately from either condition.
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
````py
from langchain.agents.chat.output_parser import ChatOutputParser
parser = ChatOutputParser()
valid_action = """Action:
```
{
"action": "Python REPL",
"action_input": "print(\'Hello world!\')"
}
```"""
final_answer = """Final Answer: Goodbye world!"""
print(parser.parse(valid_action)) # outputs an AgentAction
print(parser.parse(final_answer)) # outputs an AgentFinish
print(parser.parse(valid_action + final_answer)) # outputs an AgentFinish, should probably raise an Exception
````
### Expected behavior
An exception should likely be raised if an LLM returns a response that both includes a final answer, and a parse-able action, rather than skipping the action and returning the final answer, since it probably hallucinated an output/observation from the action. | https://github.com/langchain-ai/langchain/issues/5601 | https://github.com/langchain-ai/langchain/pull/5609 | c112d7334d6cac3296b877250d3f575fbfd46da2 | 26ec845921425d99f222b6d21bd58eda36b2f49b | "2023-06-02T08:01:50Z" | python | "2023-06-04T21:40:49Z" | tests/unit_tests/agents/test_mrkl.py | """Test getting final answer that is multiline."""
llm_output = (
"Thought: I need to search for NBA\n"
"Action: Search\n"
"Action Input: NBA\n"
"Observation: founded in 1994 and 1993\n"
"Thought: I can now answer the question\n"
"Final Answer: 1994\n1993"
)
action, action_input = get_action_and_input(llm_output)
assert action == "Final Answer"
assert action_input == "1994\n1993"
def test_bad_action_input_line() -> None:
"""Test handling when no action input found."""
llm_output = "Thought: I need to search for NBA\n" "Action: Search\n" "Thought: NBA"
with pytest.raises(OutputParserException) as e_info:
get_action_and_input(llm_output)
assert e_info.value.observation is not None
def test_bad_action_line() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,601 | OutputParsers currently allows model to hallucinate the output of an action | ### System Info
The MRKL and chat output parsers currently allow an LLM response to contain a valid action and, at the same time, hallucinate a "final answer" based on that response.
[Logic](https://github.com/hwchase17/langchain/blob/master/langchain/agents/chat/output_parser.py#L15)
This is because the parser is returning an AgentFinish object immediately if `FINAL_ANSWER_ACTION` is in the text, rather than checking if the text also includes a valid action. I had this appear when using the Python agent, where the LLM returned a code block as the action, but simultaneously hallucinated the output and a final answer in one response. (In this case, it was quite obvious because the code block referred to a database which does not exist)
I'm not sure whether there are any situations where it is desirable for a response to output an action as well as an answer.
If this is not desired behaviour, it can easily be fixed by raising an exception when a response includes both a valid action and a "final answer", rather than returning immediately from either condition.
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
````py
from langchain.agents.chat.output_parser import ChatOutputParser
parser = ChatOutputParser()
valid_action = """Action:
```
{
"action": "Python REPL",
"action_input": "print(\'Hello world!\')"
}
```"""
final_answer = """Final Answer: Goodbye world!"""
print(parser.parse(valid_action)) # outputs an AgentAction
print(parser.parse(final_answer)) # outputs an AgentFinish
print(parser.parse(valid_action + final_answer)) # outputs an AgentFinish, should probably raise an Exception
````
### Expected behavior
An exception should likely be raised if an LLM returns a response that both includes a final answer, and a parse-able action, rather than skipping the action and returning the final answer, since it probably hallucinated an output/observation from the action. | https://github.com/langchain-ai/langchain/issues/5601 | https://github.com/langchain-ai/langchain/pull/5609 | c112d7334d6cac3296b877250d3f575fbfd46da2 | 26ec845921425d99f222b6d21bd58eda36b2f49b | "2023-06-02T08:01:50Z" | python | "2023-06-04T21:40:49Z" | tests/unit_tests/agents/test_mrkl.py | """Test handling when no action found."""
llm_output = (
"Thought: I need to search for NBA\n" "Thought: Search\n" "Action Input: NBA"
)
with pytest.raises(OutputParserException) as e_info:
get_action_and_input(llm_output)
assert e_info.value.observation is not None
def test_from_chains() -> None:
"""Test initializing from chains."""
chain_configs = [
Tool(name="foo", func=lambda x: "foo", description="foobar1"),
Tool(name="bar", func=lambda x: "bar", description="foobar2"),
]
agent = ZeroShotAgent.from_llm_and_tools(FakeLLM(), chain_configs)
expected_tools_prompt = "foo: foobar1\nbar: foobar2"
expected_tool_names = "foo, bar"
expected_template = "\n\n".join(
[
PREFIX,
expected_tools_prompt,
FORMAT_INSTRUCTIONS.format(tool_names=expected_tool_names),
SUFFIX,
]
)
prompt = agent.llm_chain.prompt
assert isinstance(prompt, PromptTemplate)
assert prompt.template == expected_template |
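A regression test for the hallucination case discussed in the issue rows above could look like the sketch below. It assumes the parser is changed to reject responses that mix a valid action with a final answer — the behavior the reporter requests; against the parser shown in the earlier chunks, which returns an AgentFinish in this situation, this test would fail.

```python
import pytest

from langchain.agents.mrkl.output_parser import MRKLOutputParser
from langchain.schema import OutputParserException


def test_action_with_final_answer_raises() -> None:
    """A response with both an action and a final answer should not parse."""
    llm_output = (
        "Thought: I need to search for NBA\n"
        "Action: Search\n"
        "Action Input: NBA\n"
        "Final Answer: 1994"
    )
    with pytest.raises(OutputParserException):
        MRKLOutputParser().parse(llm_output)
```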
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,720 | AttributeError: 'GPT4All' object has no attribute 'model_type' (langchain 0.0.190) | ### System Info
Hi, this is related to #5651 but (on my machine ;) ) the issue is still there.
## Versions
* Intel Mac with latest OSX
* Python 3.11.2
* langchain 0.0.190, includes fix for #5651
* ggml-mpt-7b-instruct.bin, downloaded on June 5th from https://gpt4all.io/models/ggml-mpt-7b-instruct.bin
### Who can help?
@pakcheera @bwv988 First of all: thanks for the report and the fix :). Did this issue disappear on your machines?
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
## Error message
```shell
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ /Users/christoph/src/sandstorm-labs/development-tools/ai-support-chat/gpt4all/chat.py:30 in โ
โ <module> โ
โ โ
โ 27 โ model_name="all-mpnet-base-v2") โ
โ 28 โ
โ 29 # see https://gpt4all.io/models/ggml-mpt-7b-instruct.bin โ
โ โฑ 30 llm = GPT4All( โ
โ 31 โ model="./ggml-mpt-7b-instruct.bin", โ
โ 32 โ #backend='gptj', โ
โ 33 โ top_p=0.5, โ
โ โ
โ in pydantic.main.BaseModel.__init__:339 โ
โ โ
โ in pydantic.main.validate_model:1102 โ
โ โ
โ /Users/christoph/src/sandstorm-labs/development-tools/ai-support-chat/gpt4all/venv/lib/python3.1 โ
โ 1/site-packages/langchain/llms/gpt4all.py:156 in validate_environment โ
โ โ
โ 153 โ โ if values["n_threads"] is not None: โ
โ 154 โ โ โ # set n_threads โ
โ 155 โ โ โ values["client"].model.set_thread_count(values["n_threads"]) โ
โ โฑ 156 โ โ values["backend"] = values["client"].model_type โ
โ 157 โ โ โ
โ 158 โ โ return values โ
โ 159 โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
```
As you can see, _gpt4all.py:156_ contains the change from the fix of #5651.
## Code
```python
from langchain.llms import GPT4All
# see https://gpt4all.io/models/ggml-mpt-7b-instruct.bin
llm = GPT4All(
model="./ggml-mpt-7b-instruct.bin",
#backend='gptj',
top_p=0.5,
top_k=0,
temp=0.1,
repeat_penalty=0.8,
n_threads=12,
n_batch=16,
n_ctx=2048)
```
FYI I am following [this example in a blog post](https://dev.to/akshayballal/beyond-openai-harnessing-open-source-models-to-create-your-personalized-ai-companion-1npb).
### Expected behavior
I expect an instance of _GPT4All_ instead of a stacktrace. | https://github.com/langchain-ai/langchain/issues/5720 | https://github.com/langchain-ai/langchain/pull/5743 | d0d89d39efb5f292f72e70973f3b70c4ca095047 | 74f8e603d942ca22ed07bf0ea23a57ed67b36b2c | "2023-06-05T09:44:08Z" | python | "2023-06-05T19:45:29Z" | langchain/llms/gpt4all.py | """Wrapper for the GPT4All model."""
from functools import partial
from typing import Any, Dict, List, Mapping, Optional, Set
from pydantic import Extra, Field, root_validator
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
from langchain.llms.utils import enforce_stop_tokens
class GPT4All(LLM): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,720 | AttributeError: 'GPT4All' object has no attribute 'model_type' (langchain 0.0.190) | ### System Info
Hi, this is related to #5651 but (on my machine ;) ) the issue is still there.
## Versions
* Intel Mac with latest OSX
* Python 3.11.2
* langchain 0.0.190, includes fix for #5651
* ggml-mpt-7b-instruct.bin, downloaded on June 5th from https://gpt4all.io/models/ggml-mpt-7b-instruct.bin
### Who can help?
@pakcheera @bwv988 First of all: thanks for the report and the fix :). Did this issue disappear on your machines?
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
## Error message
```shell
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ /Users/christoph/src/sandstorm-labs/development-tools/ai-support-chat/gpt4all/chat.py:30 in โ
โ <module> โ
โ โ
โ 27 โ model_name="all-mpnet-base-v2") โ
โ 28 โ
โ 29 # see https://gpt4all.io/models/ggml-mpt-7b-instruct.bin โ
โ โฑ 30 llm = GPT4All( โ
โ 31 โ model="./ggml-mpt-7b-instruct.bin", โ
โ 32 โ #backend='gptj', โ
โ 33 โ top_p=0.5, โ
โ โ
โ in pydantic.main.BaseModel.__init__:339 โ
โ โ
โ in pydantic.main.validate_model:1102 โ
โ โ
โ /Users/christoph/src/sandstorm-labs/development-tools/ai-support-chat/gpt4all/venv/lib/python3.1 โ
โ 1/site-packages/langchain/llms/gpt4all.py:156 in validate_environment โ
โ โ
โ 153 โ โ if values["n_threads"] is not None: โ
โ 154 โ โ โ # set n_threads โ
โ 155 โ โ โ values["client"].model.set_thread_count(values["n_threads"]) โ
โ โฑ 156 โ โ values["backend"] = values["client"].model_type โ
โ 157 โ โ โ
โ 158 โ โ return values โ
โ 159 โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
```
As you can see, _gpt4all.py:156_ contains the change from the fix of #5651.
## Code
```python
from langchain.llms import GPT4All
# see https://gpt4all.io/models/ggml-mpt-7b-instruct.bin
llm = GPT4All(
model="./ggml-mpt-7b-instruct.bin",
#backend='gptj',
top_p=0.5,
top_k=0,
temp=0.1,
repeat_penalty=0.8,
n_threads=12,
n_batch=16,
n_ctx=2048)
```
FYI I am following [this example in a blog post](https://dev.to/akshayballal/beyond-openai-harnessing-open-source-models-to-create-your-personalized-ai-companion-1npb).
### Expected behavior
I expect an instance of _GPT4All_ instead of a stacktrace. | https://github.com/langchain-ai/langchain/issues/5720 | https://github.com/langchain-ai/langchain/pull/5743 | d0d89d39efb5f292f72e70973f3b70c4ca095047 | 74f8e603d942ca22ed07bf0ea23a57ed67b36b2c | "2023-06-05T09:44:08Z" | python | "2023-06-05T19:45:29Z" | langchain/llms/gpt4all.py | r"""Wrapper around GPT4All language models.
To use, you should have the ``gpt4all`` python package installed, the
pre-trained model file, and the model's config information.
Example:
.. code-block:: python
from langchain.llms import GPT4All
model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)
# Simplest invocation
response = model("Once upon a time, ")
"""
model: str
"""Path to the pre-trained GPT4All model file."""
backend: Optional[str] = Field(None, alias="backend")
n_ctx: int = Field(512, alias="n_ctx")
"""Token context window."""
n_parts: int = Field(-1, alias="n_parts")
"""Number of parts to split the model into.
If -1, the number of parts is automatically determined."""
seed: int = Field(0, alias="seed")
"""Seed. If -1, a random seed is used."""
f16_kv: bool = Field(False, alias="f16_kv")
"""Use half-precision for key/value cache."""
logits_all: bool = Field(False, alias="logits_all")
"""Return logits for all tokens, not just the last token."""
vocab_only: bool = Field(False, alias="vocab_only")
"""Only load the vocabulary, no weights."""
use_mlock: bool = Field(False, alias="use_mlock")
"""Force system to keep model in RAM."""
embedding: bool = Field(False, alias="embedding") |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,720 | AttributeError: 'GPT4All' object has no attribute 'model_type' (langchain 0.0.190) | ### System Info
Hi, this is related to #5651 but (on my machine ;) ) the issue is still there.
## Versions
* Intel Mac with latest OSX
* Python 3.11.2
* langchain 0.0.190, includes fix for #5651
* ggml-mpt-7b-instruct.bin, downloaded on June 5th from https://gpt4all.io/models/ggml-mpt-7b-instruct.bin
### Who can help?
@pakcheera @bwv988 First of all: thanks for the report and the fix :). Did this issue disappear on your machines?
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
## Error message
```shell
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ /Users/christoph/src/sandstorm-labs/development-tools/ai-support-chat/gpt4all/chat.py:30 in โ
โ <module> โ
โ โ
โ 27 โ model_name="all-mpnet-base-v2") โ
โ 28 โ
โ 29 # see https://gpt4all.io/models/ggml-mpt-7b-instruct.bin โ
โ โฑ 30 llm = GPT4All( โ
โ 31 โ model="./ggml-mpt-7b-instruct.bin", โ
โ 32 โ #backend='gptj', โ
โ 33 โ top_p=0.5, โ
โ โ
โ in pydantic.main.BaseModel.__init__:339 โ
โ โ
โ in pydantic.main.validate_model:1102 โ
โ โ
โ /Users/christoph/src/sandstorm-labs/development-tools/ai-support-chat/gpt4all/venv/lib/python3.1 โ
โ 1/site-packages/langchain/llms/gpt4all.py:156 in validate_environment โ
โ โ
โ 153 โ โ if values["n_threads"] is not None: โ
โ 154 โ โ โ # set n_threads โ
โ 155 โ โ โ values["client"].model.set_thread_count(values["n_threads"]) โ
โ โฑ 156 โ โ values["backend"] = values["client"].model_type โ
โ 157 โ โ โ
โ 158 โ โ return values โ
โ 159 โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
```
As you can see, _gpt4all.py:156_ contains the change from the fix of #5651.
## Code
```python
from langchain.llms import GPT4All
# see https://gpt4all.io/models/ggml-mpt-7b-instruct.bin
llm = GPT4All(
model="./ggml-mpt-7b-instruct.bin",
#backend='gptj',
top_p=0.5,
top_k=0,
temp=0.1,
repeat_penalty=0.8,
n_threads=12,
n_batch=16,
n_ctx=2048)
```
FYI I am following [this example in a blog post](https://dev.to/akshayballal/beyond-openai-harnessing-open-source-models-to-create-your-personalized-ai-companion-1npb).
### Expected behavior
I expect an instance of _GPT4All_ instead of a stacktrace. | https://github.com/langchain-ai/langchain/issues/5720 | https://github.com/langchain-ai/langchain/pull/5743 | d0d89d39efb5f292f72e70973f3b70c4ca095047 | 74f8e603d942ca22ed07bf0ea23a57ed67b36b2c | "2023-06-05T09:44:08Z" | python | "2023-06-05T19:45:29Z" | langchain/llms/gpt4all.py | """Use embedding mode only."""
n_threads: Optional[int] = Field(4, alias="n_threads")
"""Number of threads to use."""
n_predict: Optional[int] = 256
"""The maximum number of tokens to generate."""
temp: Optional[float] = 0.8
"""The temperature to use for sampling."""
top_p: Optional[float] = 0.95
"""The top-p value to use for sampling."""
top_k: Optional[int] = 40
"""The top-k value to use for sampling."""
echo: Optional[bool] = False
"""Whether to echo the prompt."""
stop: Optional[List[str]] = []
"""A list of strings to stop generation when encountered."""
repeat_last_n: Optional[int] = 64
"Last n tokens to penalize"
repeat_penalty: Optional[float] = 1.3
"""The penalty to apply to repeated tokens."""
n_batch: int = Field(1, alias="n_batch")
"""Batch size for prompt processing."""
streaming: bool = False
"""Whether to stream the results or not."""
context_erase: float = 0.5
"""Leave (n_ctx * context_erase) tokens
starting from beginning if the context has run out."""
allow_download: bool = False
"""If model does not exist in ~/.cache/gpt4all/, download it."""
client: Any = None
class Config: |
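The chunk above is cut off inside the class, but the failing line from the traceback in the issue body — `values["backend"] = values["client"].model_type` — sits in this file's `validate_environment`. A defensive variant could probe for the attribute instead of assuming it exists; this is an illustrative workaround under the assumption that the attribute name varies across `gpt4all` releases, not the upstream fix.

```python
from typing import Any, Optional


def _detect_backend(client: Any) -> Optional[str]:
    """Best-effort lookup of the loaded model's backend identifier."""
    # Probe both the wrapper object and its inner model, if present.
    for obj in (client, getattr(client, "model", None)):
        if obj is None:
            continue
        for attr in ("model_type", "backend"):  # candidate attribute names
            value = getattr(obj, attr, None)
            if value is not None:
                return value
    return None  # leave the backend unset instead of raising AttributeError
```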