langchain.document_loaders.excel.UnstructuredExcelLoader¶
class langchain.document_loaders.excel.UnstructuredExcelLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load Microsoft Excel files using Unstructured.
Like other
Unstructured loaders, UnstructuredExcelLoader can be used in both
“single” and “elements” mode. If you use the loader in “elements”
mode, each sheet in the Excel file will be an Unstructured Table
element, and an HTML representation of the table will be available under the
“text_as_html” key in the document metadata.
Examples
from langchain.document_loaders.excel import UnstructuredExcelLoader
loader = UnstructuredExcelLoader("stanley-cups.xlsx", mode="elements")
docs = loader.load()
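For illustration, the HTML rendering mentioned above can be read back from the loaded documents (a minimal sketch, assuming the load above ran in “elements” mode and returned at least one element):
# Each sheet is returned as a Table element; the HTML rendering, when
# produced by unstructured, sits under the "text_as_html" metadata key.
table_html = docs[0].metadata.get("text_as_html")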
Parameters
file_path – The path to the Microsoft Excel file.
mode – The mode to use when partitioning the file. See unstructured docs
for more info. Optional. Defaults to “single”.
**unstructured_kwargs – Keyword arguments to pass to unstructured.
Methods
__init__(file_path[, mode])
param file_path
The path to the Microsoft Excel file.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Parameters
file_path – The path to the Microsoft Excel file.
mode – The mode to use when partitioning the file. See unstructured docs
for more info. Optional. Defaults to “single”.
**unstructured_kwargs – Keyword arguments to pass to unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredExcelLoader¶
Microsoft Excel
langchain.document_loaders.generic.GenericLoader¶
class langchain.document_loaders.generic.GenericLoader(blob_loader: BlobLoader, blob_parser: BaseBlobParser)[source]¶
Generic Document Loader.
A generic document loader that allows combining an arbitrary blob loader with
a blob parser.
Examples
from langchain.document_loaders import GenericLoader
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
loader = GenericLoader.from_filesystem(
path="path/to/directory",
glob="**/[!.]*",
suffixes=[".pdf"],
show_progress=True,
)
docs = loader.lazy_load()
next(docs)
Example instantiations to change which files are loaded:
.. code-block:: python
# Recursively load all text files in a directory.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="**/*.txt")
# Recursively load all non-hidden files in a directory.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="**/[!.]*")
# Load all files in a directory without recursion.
loader = GenericLoader.from_filesystem("/path/to/dir", glob="*")
Example instantiations to change which parser is used:
.. code-block:: python
from langchain.document_loaders.parsers.pdf import PyPDFParser
# Recursively load all text files in a directory.
loader = GenericLoader.from_filesystem(
"/path/to/dir",
glob="**/*.pdf",
parser=PyPDFParser()
)
A generic document loader.
Parameters
blob_loader – A blob loader which knows how to yield blobs
blob_parser – A blob parser which knows how to parse blobs into documents
Methods
__init__(blob_loader, blob_parser)
A generic document loader.
from_filesystem(path, *[, glob, exclude, ...])
Create a generic document loader using a filesystem blob loader.
lazy_load()
Load documents lazily.
load()
Load all documents.
load_and_split([text_splitter])
Load all documents and split them into chunks.
__init__(blob_loader: BlobLoader, blob_parser: BaseBlobParser) → None[source]¶
A generic document loader.
Parameters
blob_loader – A blob loader which knows how to yield blobs
blob_parser – A blob parser which knows how to parse blobs into documents
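As a minimal sketch of direct construction (TextParser and the directory path are illustrative; any BaseBlobParser can be used here):
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers.txt import TextParser

# Pair a blob loader (what to read) with a blob parser (how to turn blobs into Documents).
loader = GenericLoader(
    blob_loader=FileSystemBlobLoader("path/to/dir", glob="**/*.txt"),
    blob_parser=TextParser(),
)
docs = loader.load()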
classmethod from_filesystem(path: Union[str, Path], *, glob: str = '**/[!.]*', exclude: Sequence[str] = (), suffixes: Optional[Sequence[str]] = None, show_progress: bool = False, parser: Union[Literal['default'], BaseBlobParser] = 'default') → GenericLoader[source]¶
Create a generic document loader using a filesystem blob loader.
Parameters
path – The path to the directory to load documents from.
glob – The glob pattern to use to find documents.
suffixes – The suffixes to use to filter documents. If None, all files
matching the glob will be loaded.
exclude – A list of patterns to exclude from the loader.
show_progress – Whether to show a progress bar or not (requires tqdm).
Proxies to the file system loader.
parser – A blob parser which knows how to parse blobs into documents
Returns
A generic document loader.
lazy_load() → Iterator[Document][source]¶
Load documents lazily. Use this when working at a large scale.
load() → List[Document][source]¶
Load all documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document][source]¶
Load all documents and split them into chunks.
Examples using GenericLoader¶
Grobid
Loading documents from a YouTube url
Source Code
Set env var OPENAI_API_KEY or load from a .env file
langchain.document_loaders.open_city_data.OpenCityDataLoader¶
class langchain.document_loaders.open_city_data.OpenCityDataLoader(city_id: str, dataset_id: str, limit: int)[source]¶
Load from Open City.
Initialize with dataset_id.
Example: https://dev.socrata.com/foundry/data.sfgov.org/vw6y-z8j6
e.g., city_id = data.sfgov.org
e.g., dataset_id = vw6y-z8j6
Parameters
city_id – The Open City city identifier.
dataset_id – The Open City dataset identifier.
limit – The maximum number of documents to load.
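A short sketch using the San Francisco dataset referenced above (the limit is arbitrary):
from langchain.document_loaders.open_city_data import OpenCityDataLoader

loader = OpenCityDataLoader(
    city_id="data.sfgov.org",   # Socrata domain of the city
    dataset_id="vw6y-z8j6",     # dataset identifier from the URL above
    limit=100,                  # cap on the number of records loaded
)
docs = loader.load()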
Methods
__init__(city_id, dataset_id, limit)
Initialize with dataset_id.
lazy_load()
Lazy load records.
load()
Load records.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(city_id: str, dataset_id: str, limit: int)[source]¶
Initialize with dataset_id.
Example: https://dev.socrata.com/foundry/data.sfgov.org/vw6y-z8j6
e.g., city_id = data.sfgov.org
e.g., dataset_id = vw6y-z8j6
Parameters
city_id – The Open City city identifier.
dataset_id – The Open City dataset identifier.
limit – The maximum number of documents to load.
lazy_load() → Iterator[Document][source]¶
Lazy load records.
load() → List[Document][source]¶
Load records.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using OpenCityDataLoader¶
Geopandas
Open City Data
langchain.document_loaders.odt.UnstructuredODTLoader¶
class langchain.document_loaders.odt.UnstructuredODTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load OpenOffice ODT files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredODTLoader
loader = UnstructuredODTLoader("example.odt", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-odt
Parameters
file_path – The path to the file to load.
mode – The mode to use when loading the file. Can be one of “single”,
“multi”, or “all”. Default is “single”.
**unstructured_kwargs – Any kwargs to pass to the unstructured.
Methods
__init__(file_path[, mode])
param file_path
The path to the file to load.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Parameters
file_path – The path to the file to load.
mode – The mode to use when loading the file. Can be one of “single”,
“multi”, or “all”. Default is “single”.
**unstructured_kwargs – Any kwargs to pass to the unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredODTLoader¶
Open Document Format (ODT)
langchain.document_loaders.email.UnstructuredEmailLoader¶
class langchain.document_loaders.email.UnstructuredEmailLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load email files using Unstructured.
Works with both
.eml and .msg files. You can process attachments in addition to the
e-mail message itself by passing process_attachments=True into the
constructor for the loader. By default, attachments will be processed
with the unstructured partition function. If you already know the document
types of the attachments, you can specify another partitioning function
with the attachment_partitioner kwarg.
Example
from langchain.document_loaders import UnstructuredEmailLoader
loader = UnstructuredEmailLoader("example_data/fake-email.eml", mode="elements")
loader.load()
Example
from langchain.document_loaders import UnstructuredEmailLoader
loader = UnstructuredEmailLoader("example_data/fake-email-attachment.eml",
mode="elements",
process_attachments=True,
)
loader.load()
Initialize with file path.
Methods
__init__(file_path[, mode])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredEmailLoader¶
Email
langchain.document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader¶
class langchain.document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader(urls: List[str], save_dir: str)[source]¶
Load YouTube urls as audio file(s).
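A hedged sketch of the typical pairing with GenericLoader and an audio parser (the URL and save directory are placeholders; OpenAIWhisperParser is one possible parser and requires an OpenAI API key):
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers.audio import OpenAIWhisperParser

urls = ["https://youtu.be/<video-id>"]   # placeholder video URL
save_dir = "~/Downloads/YouTube"         # where the downloaded audio files are written
# Download the audio for each URL, then transcribe each file into Documents.
loader = GenericLoader(YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParser())
docs = loader.load()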
Methods
__init__(urls, save_dir)
yield_blobs()
Yield audio blobs for each url.
__init__(urls: List[str], save_dir: str)[source]¶
yield_blobs() → Iterable[Blob][source]¶
Yield audio blobs for each url.
Examples using YoutubeAudioLoader¶
Loading documents from a YouTube url
langchain.document_loaders.parsers.msword.MsWordParser¶
class langchain.document_loaders.parsers.msword.MsWordParser[source]¶
Parse the Microsoft Word documents from a blob.
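A minimal usage sketch (the .docx path is a placeholder):
from langchain.document_loaders.blob_loaders import Blob

# Wrap a local Word file in a Blob and parse it lazily into Documents.
blob = Blob.from_path("example.docx")
docs = list(MsWordParser().lazy_parse(blob))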
Methods
__init__()
lazy_parse(blob)
Parse a Microsoft Word document into the Document iterator.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__()¶
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Parse a Microsoft Word document into the Document iterator.
Parameters
blob – The blob to parse.
Returns: An iterator of Documents.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
langchain.document_loaders.parsers.grobid.ServerUnavailableException¶
class langchain.document_loaders.parsers.grobid.ServerUnavailableException[source]¶
Exception raised when the Grobid server is unavailable.
langchain.document_loaders.unstructured.UnstructuredAPIFileLoader¶
class langchain.document_loaders.unstructured.UnstructuredAPIFileLoader(file_path: Union[str, List[str]] = '', mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Load files using Unstructured API.
By default, the loader makes a call to the hosted Unstructured API.
If you are running the unstructured API locally, you can change the
API URL by passing in the url parameter when you initialize the loader.
The hosted Unstructured API requires an API key. See
https://www.unstructured.io/api-key/ if you need to generate a key.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredAPIFileLoader
loader = UnstructuredAPIFileLoader("example.pdf", mode="elements", strategy="fast", api_key="MY_API_KEY")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition
https://www.unstructured.io/api-key/
https://github.com/Unstructured-IO/unstructured-api
Initialize with file path.
Methods
__init__([file_path, mode, url, api_key])
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: Union[str, List[str]] = '', mode: str = 'single', url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredAPIFileLoader¶
Unstructured File
langchain.document_loaders.confluence.ContentFormat¶
class langchain.document_loaders.confluence.ContentFormat(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Enumerator of the content formats of Confluence page.
EDITOR = 'body.editor'¶
EXPORT_VIEW = 'body.export_view'¶
ANONYMOUS_EXPORT_VIEW = 'body.anonymous_export_view'¶
STORAGE = 'body.storage'¶
VIEW = 'body.view'¶
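A hedged sketch of how the enum is commonly passed to ConfluenceLoader.load (URL, token, and space key are placeholders; check the ConfluenceLoader docs for the exact authentication options):
from langchain.document_loaders import ConfluenceLoader
from langchain.document_loaders.confluence import ContentFormat

loader = ConfluenceLoader(url="https://yoursite.atlassian.com/wiki", token="<api-token>")
# Request the storage representation of each page body.
docs = loader.load(space_key="SPACE", content_format=ContentFormat.STORAGE, limit=50)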
langchain.document_loaders.blob_loaders.schema.BlobLoader¶
class langchain.document_loaders.blob_loaders.schema.BlobLoader[source]¶
Abstract interface for blob loader implementations.
Implementations should be able to load raw content from a storage system according
to some criteria and return the raw content lazily as a stream of blobs.
Methods
__init__()
yield_blobs()
A lazy loader for raw data represented by LangChain's Blob object.
__init__()¶
abstract yield_blobs() → Iterable[Blob][source]¶
A lazy loader for raw data represented by LangChain’s Blob object.
Returns
A generator over blobs
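A minimal sketch of a concrete implementation (a hypothetical loader that walks a local directory; the class name and path are illustrative):
from pathlib import Path
from typing import Iterable

from langchain.document_loaders.blob_loaders import Blob, BlobLoader

class TxtFileBlobLoader(BlobLoader):
    """Hypothetical loader that yields one blob per .txt file under a root directory."""

    def __init__(self, root: str) -> None:
        self.root = Path(root)

    def yield_blobs(self) -> Iterable[Blob]:
        # Lazily walk the tree and wrap each matching file in a Blob.
        for path in self.root.rglob("*.txt"):
            yield Blob.from_path(str(path))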
langchain.document_loaders.lakefs.LakeFSLoader¶
class langchain.document_loaders.lakefs.LakeFSLoader(lakefs_access_key: str, lakefs_secret_key: str, lakefs_endpoint: str, repo: Optional[str] = None, ref: Optional[str] = 'main', path: Optional[str] = '')[source]¶
Load from lakeFS.
Parameters
lakefs_access_key – [required] lakeFS server’s access key
lakefs_secret_key – [required] lakeFS server’s secret key
lakefs_endpoint – [required] lakeFS server’s endpoint address,
ex: https://example.my-lakefs.com
repo – [optional, default = ‘’] target repository
ref – [optional, default = ‘main’] target ref (branch name,
tag, or commit ID)
path – [optional, default = ‘’] target path
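An illustrative instantiation (keys, endpoint, repository, and path are placeholders):
from langchain.document_loaders.lakefs import LakeFSLoader

loader = LakeFSLoader(
    lakefs_access_key="<access-key>",
    lakefs_secret_key="<secret-key>",
    lakefs_endpoint="https://example.my-lakefs.com",
    repo="my-repo",
    ref="main",
    path="docs/",
)
docs = loader.load()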
Attributes
repo
ref
path
Methods
__init__(lakefs_access_key, ...[, repo, ...])
param lakefs_access_key
[required] lakeFS server's access key
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
set_path(path)
set_ref(ref)
set_repo(repo)
__init__(lakefs_access_key: str, lakefs_secret_key: str, lakefs_endpoint: str, repo: Optional[str] = None, ref: Optional[str] = 'main', path: Optional[str] = '')[source]¶
Parameters
lakefs_access_key – [required] lakeFS server’s access key
lakefs_secret_key – [required] lakeFS server’s secret key
lakefs_endpoint – [required] lakeFS server’s endpoint address,
ex: https://example.my-lakefs.com
repo – [optional, default = ‘’] target repository
ref – [optional, default = ‘main’] target ref (branch name,
tag, or commit ID)
path – [optional, default = ‘’] target path
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
set_path(path: str) → None[source]¶
set_ref(ref: str) → None[source]¶
set_repo(repo: str) → None[source]¶
langchain.document_loaders.org_mode.UnstructuredOrgModeLoader¶
class langchain.document_loaders.org_mode.UnstructuredOrgModeLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load Org-Mode files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredOrgModeLoader
loader = UnstructuredOrgModeLoader("example.org", mode="elements", strategy="fast")
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-org
Parameters
file_path – The path to the file to load.
mode – The mode to load the file from. Default is “single”.
**unstructured_kwargs – Any additional keyword arguments to pass
to the unstructured.
Methods
__init__(file_path[, mode])
param file_path
The path to the file to load.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Parameters
file_path – The path to the file to load.
mode – The mode to load the file from. Default is “single”.
**unstructured_kwargs – Any additional keyword arguments to pass
to the unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredOrgModeLoader¶
Org-mode
langchain.document_loaders.nuclia.NucliaLoader¶
class langchain.document_loaders.nuclia.NucliaLoader(path: str, nuclia_tool: NucliaUnderstandingAPI)[source]¶
Load from any file type using Nuclia Understanding API.
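A hedged sketch, assuming a NucliaUnderstandingAPI tool configured through its usual environment variables (the file path is a placeholder; enable_ml mirrors the LangChain example notebook):
from langchain.document_loaders.nuclia import NucliaLoader
from langchain.tools.nuclia import NucliaUnderstandingAPI

nua = NucliaUnderstandingAPI(enable_ml=False)   # credentials are read from the environment
loader = NucliaLoader("./example.pdf", nua)
docs = loader.load()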
Methods
__init__(path, nuclia_tool)
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str, nuclia_tool: NucliaUnderstandingAPI)[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using NucliaLoader¶
Nuclia Understanding API document loader
langchain.document_loaders.parsers.pdf.DocumentIntelligenceParser¶
class langchain.document_loaders.parsers.pdf.DocumentIntelligenceParser(client: Any, model: str)[source]¶
Loads a PDF with Azure Document Intelligence
(formerly Form Recognizer) and chunks at character level.
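A hedged wiring sketch with the Azure SDK (endpoint, key, and file are placeholders; azure-ai-formrecognizer must be installed):
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

from langchain.document_loaders.blob_loaders import Blob

# Build the Azure client, then hand it to the parser together with a model name.
client = DocumentAnalysisClient(
    endpoint="https://<resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<key>"),
)
parser = DocumentIntelligenceParser(client=client, model="prebuilt-document")
docs = list(parser.lazy_parse(Blob.from_path("example.pdf")))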
Methods
__init__(client, model)
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(client: Any, model: str)[source]¶
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
langchain.document_loaders.trello.TrelloLoader¶
class langchain.document_loaders.trello.TrelloLoader(client: TrelloClient, board_name: str, *, include_card_name: bool = True, include_comments: bool = True, include_checklist: bool = True, card_filter: Literal['closed', 'open', 'all'] = 'all', extra_metadata: Tuple[str, ...] = ('due_date', 'labels', 'list', 'closed'))[source]¶
Load cards from a Trello board.
Initialize Trello loader.
Parameters
client – Trello API client.
board_name – The name of the Trello board.
include_card_name – Whether to include the name of the card in the document.
include_comments – Whether to include the comments on the card in the
document.
include_checklist – Whether to include the checklist on the card in the
document.
card_filter – Filter on card status. Valid values are “closed”, “open”,
“all”.
extra_metadata – List of additional metadata fields to include as document
metadata. Valid values are “due_date”, “labels”, “list”, “closed”.
Methods
__init__(client, board_name, *[, ...])
Initialize Trello loader.
from_credentials(board_name, *[, api_key, token])
Convenience constructor that builds TrelloClient init param for you.
lazy_load()
A lazy loader for Documents.
load()
Loads all cards from the specified Trello board.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(client: TrelloClient, board_name: str, *, include_card_name: bool = True, include_comments: bool = True, include_checklist: bool = True, card_filter: Literal['closed', 'open', 'all'] = 'all', extra_metadata: Tuple[str, ...] = ('due_date', 'labels', 'list', 'closed'))[source]¶
Initialize Trello loader.
Parameters
client – Trello API client.
board_name – The name of the Trello board.
include_card_name – Whether to include the name of the card in the document.
include_comments – Whether to include the comments on the card in the
document.
include_checklist – Whether to include the checklist on the card in the
document.
card_filter – Filter on card status. Valid values are “closed”, “open”,
“all”.
extra_metadata – List of additional metadata fields to include as document
metadata. Valid values are “due_date”, “labels”, “list”, “closed”.
classmethod from_credentials(board_name: str, *, api_key: Optional[str] = None, token: Optional[str] = None, **kwargs: Any) → TrelloLoader[source]¶
Convenience constructor that builds TrelloClient init param for you.
Parameters
board_name – The name of the Trello board.
api_key – Trello API key. Can also be specified as environment variable
TRELLO_API_KEY.
token – Trello token. Can also be specified as environment variable
TRELLO_TOKEN.
include_card_name – Whether to include the name of the card in the document.
include_comments – Whether to include the comments on the card in the
document.
include_checklist – Whether to include the checklist on the card in the
document.
card_filter – Filter on card status. Valid values are “closed”, “open”,
“all”.
extra_metadata – List of additional metadata fields to include as document
metadata. Valid values are “due_date”, “labels”, “list”, “closed”.
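An illustrative call (board name and credentials are placeholders; py-trello must be installed, and the keys can instead come from the environment variables named above):
from langchain.document_loaders import TrelloLoader

loader = TrelloLoader.from_credentials(
    "Personal tasks",            # board name (placeholder)
    api_key="<trello-api-key>",
    token="<trello-token>",
    card_filter="open",          # only load open cards
)
docs = loader.load()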
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Loads all cards from the specified Trello board.
You can filter the cards, metadata and text included by using the optional
parameters.
Returns
A list of documents, one for each card in the board.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TrelloLoader¶
Trello
langchain.document_loaders.baiducloud_bos_directory.BaiduBOSDirectoryLoader¶
class langchain.document_loaders.baiducloud_bos_directory.BaiduBOSDirectoryLoader(conf: Any, bucket: str, prefix: str = '')[source]¶
Load from Baidu BOS directory.
Initialize with BOS config, bucket and prefix.
Parameters
conf – BOS config (BosConfig).
bucket – BOS bucket name.
prefix – Object key prefix.
Methods
__init__(conf, bucket[, prefix])
Initialize with BOS config, bucket and prefix.
lazy_load()
Load documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(conf: Any, bucket: str, prefix: str = '')[source]¶
Initialize with BOS config, bucket and prefix.
Parameters
conf – BOS config (BosConfig).
bucket – BOS bucket name.
prefix – Object key prefix.
lazy_load() → Iterator[Document][source]¶
Load documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
langchain.document_loaders.pdf.PyPDFLoader¶
class langchain.document_loaders.pdf.PyPDFLoader(file_path: str, password: Optional[Union[str, bytes]] = None, headers: Optional[Dict] = None, extract_images: bool = False)[source]¶
Load PDF using pypdf into list of documents.
Loader chunks by page and stores page numbers in metadata.
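A short sketch (the file name is a placeholder; pypdf must be installed):
from langchain.document_loaders import PyPDFLoader

loader = PyPDFLoader("example.pdf")
pages = loader.load()
# Each Document is one page; the page number and source path are kept in metadata.
print(pages[0].metadata["page"], pages[0].metadata["source"])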
Initialize with a file path.
Attributes
source
Methods
__init__(file_path[, password, headers, ...])
Initialize with a file path.
lazy_load()
Lazy load given path as pages.
load()
Load given path as pages.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, password: Optional[Union[str, bytes]] = None, headers: Optional[Dict] = None, extract_images: bool = False) → None[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document][source]¶
Lazy load given path as pages.
load() → List[Document][source]¶
Load given path as pages.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using PyPDFLoader¶
Document Comparison
Google Cloud Storage File
MergeDocLoader
QA using Activeloop’s DeepLake
langchain.document_loaders.lakefs.LakeFSClient¶
class langchain.document_loaders.lakefs.LakeFSClient(lakefs_access_key: str, lakefs_secret_key: str, lakefs_endpoint: str)[source]¶
Methods
__init__(lakefs_access_key, ...)
is_presign_supported()
ls_objects(repo, ref, path, presign)
__init__(lakefs_access_key: str, lakefs_secret_key: str, lakefs_endpoint: str)[source]¶
is_presign_supported() → bool[source]¶
ls_objects(repo: str, ref: str, path: str, presign: Optional[bool]) → List[source]¶
langchain.document_loaders.rocksetdb.default_joiner¶
langchain.document_loaders.rocksetdb.default_joiner(docs: List[Tuple[str, Any]]) → str[source]¶
Default joiner for content columns.
langchain.document_loaders.url_playwright.PlaywrightURLLoader¶
class langchain.document_loaders.url_playwright.PlaywrightURLLoader(urls: List[str], continue_on_failure: bool = True, headless: bool = True, remove_selectors: Optional[List[str]] = None, evaluator: Optional[PlaywrightEvaluator] = None)[source]¶
Load HTML pages with Playwright and parse with Unstructured.
This is useful for loading pages that require javascript to render.
urls¶
List of URLs to load.
Type
List[str]
continue_on_failure¶
If True, continue loading other URLs on failure.
Type
bool
headless¶
If True, the browser will run in headless mode.
Type
bool
Load a list of URLs using Playwright.
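An illustrative usage sketch (playwright and its browser binaries must be installed; the selectors simply strip navigation chrome from the pages):
from langchain.document_loaders import PlaywrightURLLoader

urls = ["https://python.langchain.com/", "https://example.com/"]  # placeholder URLs
loader = PlaywrightURLLoader(urls=urls, remove_selectors=["header", "footer"])
docs = loader.load()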
Methods
__init__(urls[, continue_on_failure, ...])
Load a list of URLs using Playwright.
aload()
Load the specified URLs with Playwright and create Documents asynchronously.
lazy_load()
A lazy loader for Documents.
load()
Load the specified URLs using Playwright and create Document instances.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(urls: List[str], continue_on_failure: bool = True, headless: bool = True, remove_selectors: Optional[List[str]] = None, evaluator: Optional[PlaywrightEvaluator] = None)[source]¶
Load a list of URLs using Playwright.
async aload() → List[Document][source]¶
Load the specified URLs with Playwright and create Documents asynchronously.
Use this function when in a jupyter notebook environment.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load the specified URLs using Playwright and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using PlaywrightURLLoader¶
URL
langchain.document_loaders.sitemap.SitemapLoader¶
class langchain.document_loaders.sitemap.SitemapLoader(web_path: str, filter_urls: Optional[List[str]] = None, parsing_function: Optional[Callable] = None, blocksize: Optional[int] = None, blocknum: int = 0, meta_function: Optional[Callable] = None, is_local: bool = False, continue_on_failure: bool = False, restrict_to_same_domain: bool = True, **kwargs: Any)[source]¶
Load a sitemap and its URLs.
Security Note: This loader can be used to load all URLs specified in a sitemap. If a malicious actor gets access to the sitemap, they could force
the server to load URLs from other domains by modifying the sitemap.
This could lead to server-side request forgery (SSRF) attacks; e.g.,
with the attacker forcing the server to load URLs from internal
service endpoints that are not publicly accessible. While the attacker
may not immediately gain access to this data, this data could leak
into downstream systems (e.g., data loader is used to load data for indexing).
This loader is a crawler and web crawlers should generally NOT be deployed
with network access to any internal servers.
Control access to who can submit crawling requests and what network access
the crawler has.
By default, the loader will only load URLs from the same domain as the sitemap
if the site map is not a local file. This can be disabled by setting
restrict_to_same_domain to False (not recommended).
If the site map is a local file, no such risk mitigation is applied by default.
Use the filter URLs argument to limit which URLs can be loaded.
See https://python.langchain.com/docs/security
Initialize with webpage path and optional filter URLs.
Parameters
web_path – url of the sitemap. can also be a local path
filter_urls – a list of regexes. If specified, only
URLS that match one of the filter URLs will be loaded.
WARNING The filter URLs are interpreted as regular expressions.
Remember to escape special characters if you do not want them to be
interpreted as regular expression syntax. For example, . appears
frequently in URLs and should be escaped if you want to match a literal
. rather than any character.
restrict_to_same_domain takes precedence over filter_urls when
restrict_to_same_domain is True and the sitemap is not a local file.
parsing_function – Function to parse bs4.Soup output
blocksize – number of sitemap locations per block
blocknum – the number of the block that should be loaded - zero indexed.
Default: 0
meta_function – Function to parse bs4.Soup output for metadata
remember when setting this method to also copy metadata[“loc”]
to metadata[“source”] if you are using this field
is_local – whether the sitemap is a local file. Default: False
continue_on_failure – whether to continue loading the sitemap if an error
occurs loading a url, emitting a warning instead of raising an
exception. Setting this to True makes the loader more robust, but also
may result in missing data. Default: False
restrict_to_same_domain – whether to restrict loading to URLs to the same
domain as the sitemap. Attention: This is only applied if the sitemap
is not a local file!
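For illustration, a loader restricted to part of a site (the sitemap URL and regex are placeholders; note the escaped dots, as discussed above):
from langchain.document_loaders.sitemap import SitemapLoader

loader = SitemapLoader(
    web_path="https://api.python.langchain.com/sitemap.xml",
    # Only load pages under the "latest" docs; dots are escaped in the regex.
    filter_urls=[r"https://api\.python\.langchain\.com/en/latest/"],
)
docs = loader.load()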
Attributes
web_path
Methods
__init__(web_path[, filter_urls, ...])
Initialize with webpage path and optional filter URLs.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load sitemap.
load_and_split([text_splitter])
Load Documents and split into chunks.
parse_sitemap(soup)
Parse sitemap xml and load into a list of dicts.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(web_path: str, filter_urls: Optional[List[str]] = None, parsing_function: Optional[Callable] = None, blocksize: Optional[int] = None, blocknum: int = 0, meta_function: Optional[Callable] = None, is_local: bool = False, continue_on_failure: bool = False, restrict_to_same_domain: bool = True, **kwargs: Any)[source]¶
Initialize with webpage path and optional filter URLs.
Parameters
web_path – url of the sitemap. can also be a local path
filter_urls – a list of regexes. If specified, only
URLS that match one of the filter URLs will be loaded.
WARNING The filter URLs are interpreted as regular expressions.
Remember to escape special characters if you do not want them to be
interpreted as regular expression syntax. For example, . appears
frequently in URLs and should be escaped if you want to match a literal
. rather than any character.
restrict_to_same_domain takes precedence over filter_urls when
restrict_to_same_domain is True and the sitemap is not a local file.
parsing_function – Function to parse bs4.Soup output
blocksize – number of sitemap locations per block
blocknum – the number of the block that should be loaded - zero indexed.
Default: 0
meta_function – Function to parse bs4.Soup output for metadata
remember when setting this method to also copy metadata[“loc”]
to metadata[“source”] if you are using this field
is_local – whether the sitemap is a local file. Default: False
continue_on_failure – whether to continue loading the sitemap if an error
occurs loading a url, emitting a warning instead of raising an
exception. Setting this to True makes the loader more robust, but also
may result in missing data. Default: False
restrict_to_same_domain – whether to restrict loading to URLs to the same
domain as the sitemap. Attention: This is only applied if the sitemap
is not a local file!
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load sitemap.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
parse_sitemap(soup: Any) → List[dict][source]¶
Parse sitemap xml and load into a list of dicts.
Parameters
soup – BeautifulSoup object.
Returns
List of dicts.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
Examples using SitemapLoader¶
Sitemap
langchain.document_loaders.arxiv.ArxivLoader¶
class langchain.document_loaders.arxiv.ArxivLoader(query: str, doc_content_chars_max: Optional[int] = None, **kwargs: Any)[source]¶
Load a query result from Arxiv.
The loader converts the original PDF format into text.
Parameters
Supports all arguments of ArxivAPIWrapper.
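A brief sketch (the query may be an arXiv id or a free-text search; load_max_docs is an ArxivAPIWrapper argument, and the arxiv package must be installed):
from langchain.document_loaders import ArxivLoader

loader = ArxivLoader(query="1706.03762", load_max_docs=2)
docs = loader.load()
# Metadata carries fields such as Title, Authors, and Published.
print(docs[0].metadata["Title"])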
Methods
__init__(query[, doc_content_chars_max])
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, doc_content_chars_max: Optional[int] = None, **kwargs: Any)[source]¶
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using ArxivLoader¶
Arxiv
langchain.document_loaders.max_compute.MaxComputeLoader¶
class langchain.document_loaders.max_compute.MaxComputeLoader(query: str, api_wrapper: MaxComputeAPIWrapper, *, page_content_columns: Optional[Sequence[str]] = None, metadata_columns: Optional[Sequence[str]] = None)[source]¶
Load from Alibaba Cloud MaxCompute table.
Initialize Alibaba Cloud MaxCompute document loader.
Parameters
query – SQL query to execute.
api_wrapper – MaxCompute API wrapper.
page_content_columns – The columns to write into the page_content of the
Document. If unspecified, all columns will be written to page_content.
metadata_columns – The columns to write into the metadata of the Document.
If unspecified, all columns not added to page_content will be written.
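A hedged sketch using the convenience constructor described below (endpoint, project, table, and column names are placeholders; forwarding the column arguments through from_params is an assumption):
from langchain.document_loaders import MaxComputeLoader

query = "SELECT id, content FROM my_table LIMIT 100"   # placeholder table
loader = MaxComputeLoader.from_params(
    query,
    endpoint="<maxcompute-endpoint>",
    project="<project-name>",
    # Credentials may also come from MAX_COMPUTE_ACCESS_ID / MAX_COMPUTE_SECRET_ACCESS_KEY.
    page_content_columns=["content"],
    metadata_columns=["id"],
)
docs = loader.load()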
Methods
__init__(query, api_wrapper, *[, ...])
Initialize Alibaba Cloud MaxCompute document loader.
from_params(query, endpoint, project, *[, ...])
Convenience constructor that builds the MaxCompute API wrapper from given parameters.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(query: str, api_wrapper: MaxComputeAPIWrapper, *, page_content_columns: Optional[Sequence[str]] = None, metadata_columns: Optional[Sequence[str]] = None)[source]¶
Initialize Alibaba Cloud MaxCompute document loader.
Parameters
query – SQL query to execute.
api_wrapper – MaxCompute API wrapper.
page_content_columns – The columns to write into the page_content of the
Document. If unspecified, all columns will be written to page_content.
metadata_columns – The columns to write into the metadata of the Document.
If unspecified, all columns not added to page_content will be written.
classmethod from_params(query: str, endpoint: str, project: str, *, access_id: Optional[str] = None, secret_access_key: Optional[str] = None, **kwargs: Any) → MaxComputeLoader[source]¶
Convenience constructor that builds the MaxCompute API wrapper from given parameters.
Parameters
query – SQL query to execute.
endpoint – MaxCompute endpoint.
project – A project is a basic organizational unit of MaxCompute, which is
similar to a database.
access_id – MaxCompute access ID. Should be passed in directly or set as the
environment variable MAX_COMPUTE_ACCESS_ID.
secret_access_key – MaxCompute secret access key. Should be passed in
directly or set as the environment variable
MAX_COMPUTE_SECRET_ACCESS_KEY.
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using MaxComputeLoader¶
Alibaba Cloud MaxCompute
langchain.document_loaders.pdf.PDFPlumberLoader¶
class langchain.document_loaders.pdf.PDFPlumberLoader(file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None, dedupe: bool = False, headers: Optional[Dict] = None, extract_images: bool = False)[source]¶
Load PDF files using pdfplumber.
Initialize with a file path.
Attributes
source
Methods
__init__(file_path[, text_kwargs, dedupe, ...])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None, dedupe: bool = False, headers: Optional[Dict] = None, extract_images: bool = False) → None[source]¶
Initialize with a file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
langchain.document_loaders.unstructured.get_elements_from_api¶
langchain.document_loaders.unstructured.get_elements_from_api(file_path: Optional[Union[str, List[str]]] = None, file: Optional[Union[IO, Sequence[IO]]] = None, api_url: str = 'https://api.unstructured.io/general/v0/general', api_key: str = '', **unstructured_kwargs: Any) → List[source]¶
Retrieve a list of elements from the Unstructured API.
langchain.document_loaders.parsers.audio.YandexSTTParser¶
class langchain.document_loaders.parsers.audio.YandexSTTParser(*, api_key: Optional[str] = None, iam_token: Optional[str] = None, model: str = 'general', language: str = 'auto')[source]¶
Transcribe and parse audio files.
Audio transcription is performed with the Yandex SpeechKit API.
Initialize the parser.
Parameters
api_key – API key for a service account with the ai.speechkit-stt.user role.
iam_token – IAM token for a service account with the ai.speechkit-stt.user role.
model – Recognition model name. Defaults to general.
language – The language in ISO 639-1 format. Defaults to automatic language recognition.
Either api_key or iam_token must be provided, but not both.
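A hedged usage sketch (the key and audio file are placeholders):
from langchain.document_loaders.blob_loaders import Blob

parser = YandexSTTParser(api_key="<service-account-api-key>", language="en")
# Wrap a local audio file in a Blob and transcribe it into Documents.
docs = list(parser.lazy_parse(Blob.from_path("speech.ogg")))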
Methods
__init__(*[, api_key, iam_token, model, ...])
Initialize the parser.
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(*, api_key: Optional[str] = None, iam_token: Optional[str] = None, model: str = 'general', language: str = 'auto')[source]¶
Initialize the parser.
Parameters
api_key – API key for a service account with the ai.speechkit-stt.user role.
iam_token – IAM token for a service account with the ai.speechkit-stt.user role.
model – Recognition model name. Defaults to general.
language – The language in ISO 639-1 format. Defaults to automatic language recognition.
Either api_key or iam_token must be provided, but not both.
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
langchain.document_loaders.roam.RoamLoader¶
class langchain.document_loaders.roam.RoamLoader(path: str)[source]¶
Load Roam files from a directory.
Initialize with a path.
Methods
__init__(path)
Initialize with a path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str)[source]¶
Initialize with a path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using RoamLoader¶
Roam
langchain.document_loaders.gcs_file.GCSFileLoader¶
class langchain.document_loaders.gcs_file.GCSFileLoader(project_name: str, bucket: str, blob: str, loader_func: Optional[Callable[[str], BaseLoader]] = None)[source]¶
Load from GCS file.
Initialize with bucket and key name.
Parameters
project_name – The name of the project to load
bucket – The name of the GCS bucket.
blob – The name of the GCS blob to load.
loader_func – A loader function that instantiates a loader based on a
file_path argument. If nothing is provided, the
UnstructuredFileLoader is used.
Examples
To use an alternative PDF loader:
>> from langchain.document_loaders import PyPDFLoader
>> loader = GCSFileLoader(…, loader_func=PyPDFLoader)
To use UnstructuredFileLoader with additional arguments:
>> loader = GCSFileLoader(…,
>> loader_func=lambda x: UnstructuredFileLoader(x, mode="elements"))
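For completeness, a plain instantiation (project, bucket, and blob names are placeholders; google-cloud-storage must be installed):
>> loader = GCSFileLoader(project_name="my-project", bucket="my-bucket", blob="reports/2023.pdf")
>> docs = loader.load()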
Methods
__init__(project_name, bucket, blob[, ...])
Initialize with bucket and key name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(project_name: str, bucket: str, blob: str, loader_func: Optional[Callable[[str], BaseLoader]] = None)[source]¶
Initialize with bucket and key name.
Parameters
project_name – The name of the project to load
bucket – The name of the GCS bucket.
blob – The name of the GCS blob to load.
loader_func – A loader function that instantiates a loader based on a
file_path argument. If nothing is provided, the
UnstructuredFileLoader is used.
Examples
To use an alternative PDF loader:
>> from langchain.document_loaders import PyPDFLoader
>> loader = GCSFileLoader(…, loader_func=PyPDFLoader)
To use UnstructuredFileLoader with additional arguments:
>> loader = GCSFileLoader(…,
>> loader_func=lambda x: UnstructuredFileLoader(x, mode="elements"))
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using GCSFileLoader¶
Google Cloud Storage
Google Cloud Storage File
langchain.document_loaders.college_confidential.CollegeConfidentialLoader¶
class langchain.document_loaders.college_confidential.CollegeConfidentialLoader(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None)[source]¶
Load College Confidential webpages.
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for beautifulsoup4 get_text
bs_kwargs – kwargs for beautifulsoup4 web page parsing
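A minimal usage sketch (the URL is an illustrative placeholder for any College Confidential page):
from langchain.document_loaders import CollegeConfidentialLoader
loader = CollegeConfidentialLoader("https://www.collegeconfidential.com/colleges/brown-university/")
docs = loader.load()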
Attributes
web_path
Methods
__init__([web_path, header_template, ...])
Initialize loader.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load webpages as Documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.college_confidential.CollegeConfidentialLoader.html |
6272f08ddbfb-1 |
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None) → None¶
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for beautifulsoup4 get_text
bs_kwargs – kwargs for beautifulsoup4 web page parsing
aload() → List[Document]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load webpages as Documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.college_confidential.CollegeConfidentialLoader.html |
6272f08ddbfb-2 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
scrape(parser: Optional[str] = None) → Any¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any]¶
Fetch all urls, then return soups for all results.
Examples using CollegeConfidentialLoader¶
College Confidential | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.college_confidential.CollegeConfidentialLoader.html |
2a3b2161b19d-0 | langchain.document_loaders.web_base.WebBaseLoader¶
class langchain.document_loaders.web_base.WebBaseLoader(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None)[source]¶
Load HTML pages using urllib and parse them with BeautifulSoup.
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for beautifulsoup4 get_text
bs_kwargs – kwargs for beautifulsoup4 web page parsing
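A minimal usage sketch; the URL is a placeholder and requests_per_second is shown only to illustrate the parameter above:
from langchain.document_loaders import WebBaseLoader
loader = WebBaseLoader("https://www.example.com/", requests_per_second=2)
docs = loader.load()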
Attributes
web_path
Methods
__init__([web_path, header_template, ...])
Initialize loader.
aload()
Load text from the urls in web_path async into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Load text from the url(s) in web_path.
load_and_split([text_splitter])
Load Documents and split into chunks.
scrape([parser])
Scrape data from webpage and return it in BeautifulSoup format. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.web_base.WebBaseLoader.html |
2a3b2161b19d-1 |
scrape_all(urls[, parser])
Fetch all urls, then return soups for all results.
__init__(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None) → None[source]¶
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for beautifulsoup4 get_text
bs_kwargs – kwargs for beautifulsoup4 web page parsing
aload() → List[Document][source]¶
Load text from the urls in web_path async into Documents.
async fetch_all(urls: List[str]) → Any[source]¶
Fetch all urls concurrently with rate limiting.
lazy_load() → Iterator[Document][source]¶
Lazy load text from the url(s) in web_path.
load() → List[Document][source]¶
Load text from the url(s) in web_path.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.web_base.WebBaseLoader.html |
2a3b2161b19d-2 | Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
scrape(parser: Optional[str] = None) → Any[source]¶
Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls: List[str], parser: Optional[str] = None) → List[Any][source]¶
Fetch all urls, then return soups for all results.
Examples using WebBaseLoader¶
RePhraseQueryRetriever
Ollama
Vectorstore
Zep
WebBaseLoader
MergeDocLoader
Set env var OPENAI_API_KEY or load from a .env file:
Set env var OPENAI_API_KEY or load from a .env file
Question Answering
Use local LLMs
MultiQueryRetriever
Combine agents and vector stores | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.web_base.WebBaseLoader.html |
f42f40607f3b-0 | langchain.document_loaders.html_bs.BSHTMLLoader¶
class langchain.document_loaders.html_bs.BSHTMLLoader(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '')[source]¶
Load HTML files and parse them with beautiful soup.
Initialise with path, and optionally, file encoding to use, and any kwargs
to pass to the BeautifulSoup object.
Parameters
file_path – The path to the file to load.
open_encoding – The encoding to use when opening the file.
bs_kwargs – Any kwargs to pass to the BeautifulSoup object.
get_text_separator – The separator to use when calling get_text on the soup.
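A minimal usage sketch (the file name is a placeholder; requires the beautifulsoup4 package):
from langchain.document_loaders import BSHTMLLoader
loader = BSHTMLLoader("example.html", open_encoding="utf8", get_text_separator="\n")
docs = loader.load()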
Methods
__init__(file_path[, open_encoding, ...])
Initialise with path, and optionally, file encoding to use, and any kwargs to pass to the BeautifulSoup object.
lazy_load()
A lazy loader for Documents.
load()
Load HTML document into document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, open_encoding: Optional[str] = None, bs_kwargs: Optional[dict] = None, get_text_separator: str = '') → None[source]¶
Initialise with path, and optionally, file encoding to use, and any kwargs
to pass to the BeautifulSoup object.
Parameters
file_path – The path to the file to load.
open_encoding – The encoding to use when opening the file.
bs_kwargs – Any kwargs to pass to the BeautifulSoup object.
get_text_separator – The separator to use when calling get_text on the soup.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load HTML document into document objects. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html_bs.BSHTMLLoader.html |
f42f40607f3b-1 |
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.html_bs.BSHTMLLoader.html |
37aa60b94806-0 | langchain.document_loaders.parsers.registry.get_parser¶
langchain.document_loaders.parsers.registry.get_parser(parser_name: str) → BaseBlobParser[source]¶
Get a parser by parser name. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.registry.get_parser.html |
b0f81326eea4-0 | langchain.document_loaders.etherscan.EtherscanLoader¶
class langchain.document_loaders.etherscan.EtherscanLoader(account_address: str, api_key: str = 'docs-demo', filter: str = 'normal_transaction', page: int = 1, offset: int = 10, start_block: int = 0, end_block: int = 99999999, sort: str = 'desc')[source]¶
Load transactions from Ethereum mainnet.
The loader uses the Etherscan API to interact with Ethereum mainnet.
The ETHERSCAN_API_KEY environment variable must be set to use this loader.
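A minimal usage sketch; the API key and account address below are placeholders:
import os
from langchain.document_loaders import EtherscanLoader
os.environ["ETHERSCAN_API_KEY"] = "<your-etherscan-api-key>"
loader = EtherscanLoader("0x0000000000000000000000000000000000000000", filter="normal_transaction", offset=20)
docs = loader.load()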
Methods
__init__(account_address[, api_key, filter, ...])
getERC1155Tx()
getERC20Tx()
getERC721Tx()
getEthBalance()
getInternalTx()
getNormTx()
lazy_load()
Lazy load Documents from table.
load()
Load transactions from a specific account via Etherscan.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(account_address: str, api_key: str = 'docs-demo', filter: str = 'normal_transaction', page: int = 1, offset: int = 10, start_block: int = 0, end_block: int = 99999999, sort: str = 'desc')[source]¶
getERC1155Tx() → List[Document][source]¶
getERC20Tx() → List[Document][source]¶
getERC721Tx() → List[Document][source]¶
getEthBalance() → List[Document][source]¶
getInternalTx() → List[Document][source]¶
getNormTx() → List[Document][source]¶
lazy_load() → Iterator[Document][source]¶
Lazy load Documents from table. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.etherscan.EtherscanLoader.html |
b0f81326eea4-1 |
load() → List[Document][source]¶
Load transactions from a specific account via Etherscan.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using EtherscanLoader¶
Etherscan Loader | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.etherscan.EtherscanLoader.html |
12f6cf41da9c-0 | langchain.document_loaders.s3_directory.S3DirectoryLoader¶
class langchain.document_loaders.s3_directory.S3DirectoryLoader(bucket: str, prefix: str = '', *, region_name: Optional[str] = None, api_version: Optional[str] = None, use_ssl: Optional[bool] = True, verify: Union[str, bool, None] = None, endpoint_url: Optional[str] = None, aws_access_key_id: Optional[str] = None, aws_secret_access_key: Optional[str] = None, aws_session_token: Optional[str] = None, boto_config: Optional[botocore.client.Config] = None)[source]¶
Load from Amazon AWS S3 directory.
Initialize with bucket and key name.
Parameters
bucket – The name of the S3 bucket.
prefix – The prefix of the S3 key. Defaults to “”.
region_name – The name of the region associated with the client.
A client is associated with a single region.
api_version – The API version to use. By default, botocore will
use the latest API version when creating a client. You only need
to specify this parameter if you want to use a previous API version
of the client.
use_ssl – Whether to use SSL. By default, SSL is used.
Note that not all services support non-ssl connections.
verify – Whether to verify SSL certificates.
By default SSL certificates are verified. You can provide the
following values:
False - do not validate SSL certificates. SSL will still be
used (unless use_ssl is False), but SSL certificates
will not be verified.
path/to/cert/bundle.pem - A filename of the CA cert bundle to
use. You can specify this argument if you want to use a
different CA cert bundle than the one used by botocore. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_directory.S3DirectoryLoader.html |
12f6cf41da9c-1 |
endpoint_url – The complete URL to use for the constructed
client. Normally, botocore will automatically construct the
appropriate URL to use when communicating with a service. You can
specify a complete URL (including the “http/https” scheme) to
override this behavior. If this value is provided, then
use_ssl is ignored.
aws_access_key_id – The access key to use when creating
the client. This is entirely optional, and if not provided,
the credentials configured for the session will automatically
be used. You only need to provide this argument if you want
to override the credentials used for this specific client.
aws_secret_access_key – The secret key to use when creating
the client. Same semantics as aws_access_key_id above.
aws_session_token – The session token to use when creating
the client. Same semantics as aws_access_key_id above.
boto_config (botocore.client.Config) – Advanced boto3 client configuration options. If a value
is specified in the client config, its value will take precedence
over environment variables and configuration values, but not over
a value passed explicitly to the method. If a default config
object is set on the session, the config object used when creating
the client will be the result of calling merge() on the
default config with the config provided to this call.
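A minimal usage sketch (bucket, prefix and region are placeholders; boto3 resolves credentials from the session as described above):
from langchain.document_loaders import S3DirectoryLoader
loader = S3DirectoryLoader("my-bucket", prefix="reports/", region_name="us-east-1")
docs = loader.load()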
Methods
__init__(bucket[, prefix, region_name, ...])
Initialize with bucket and key name.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_directory.S3DirectoryLoader.html |
12f6cf41da9c-2 |
__init__(bucket: str, prefix: str = '', *, region_name: Optional[str] = None, api_version: Optional[str] = None, use_ssl: Optional[bool] = True, verify: Union[str, bool, None] = None, endpoint_url: Optional[str] = None, aws_access_key_id: Optional[str] = None, aws_secret_access_key: Optional[str] = None, aws_session_token: Optional[str] = None, boto_config: Optional[botocore.client.Config] = None)[source]¶
Initialize with bucket and key name.
Parameters
bucket – The name of the S3 bucket.
prefix – The prefix of the S3 key. Defaults to “”.
region_name – The name of the region associated with the client.
A client is associated with a single region.
api_version – The API version to use. By default, botocore will
use the latest API version when creating a client. You only need
to specify this parameter if you want to use a previous API version
of the client.
use_ssl – Whether to use SSL. By default, SSL is used.
Note that not all services support non-ssl connections.
verify – Whether to verify SSL certificates.
By default SSL certificates are verified. You can provide the
following values:
False - do not validate SSL certificates. SSL will still be
used (unless use_ssl is False), but SSL certificates
will not be verified.
path/to/cert/bundle.pem - A filename of the CA cert bundle to
use. You can specify this argument if you want to use a
different CA cert bundle than the one used by botocore.
endpoint_url – The complete URL to use for the constructed
client. Normally, botocore will automatically construct the | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_directory.S3DirectoryLoader.html |
12f6cf41da9c-3 |
appropriate URL to use when communicating with a service. You can
specify a complete URL (including the “http/https” scheme) to
override this behavior. If this value is provided, then
use_ssl is ignored.
aws_access_key_id – The access key to use when creating
the client. This is entirely optional, and if not provided,
the credentials configured for the session will automatically
be used. You only need to provide this argument if you want
to override the credentials used for this specific client.
aws_secret_access_key – The secret key to use when creating
the client. Same semantics as aws_access_key_id above.
aws_session_token – The session token to use when creating
the client. Same semantics as aws_access_key_id above.
boto_config (botocore.client.Config) – Advanced boto3 client configuration options. If a value
is specified in the client config, its value will take precedence
over environment variables and configuration values, but not over
a value passed explicitly to the method. If a default config
object is set on the session, the config object used when creating
the client will be the result of calling merge() on the
default config with the config provided to this call.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using S3DirectoryLoader¶
AWS S3 Directory | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.s3_directory.S3DirectoryLoader.html |
92d3b1a14971-0 | langchain.document_loaders.notebook.NotebookLoader¶
class langchain.document_loaders.notebook.NotebookLoader(path: str, include_outputs: bool = False, max_output_length: int = 10, remove_newline: bool = False, traceback: bool = False)[source]¶
Load Jupyter notebook (.ipynb) files.
Initialize with a path.
Parameters
path – The path to load the notebook from.
include_outputs – Whether to include the outputs of the cell.
Defaults to False.
max_output_length – Maximum length of the output to be displayed.
Defaults to 10.
remove_newline – Whether to remove newlines from the notebook.
Defaults to False.
traceback – Whether to return a traceback of the error.
Defaults to False.
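A minimal usage sketch (the notebook path is a placeholder):
from langchain.document_loaders import NotebookLoader
loader = NotebookLoader("analysis.ipynb", include_outputs=True, max_output_length=20, remove_newline=True)
docs = loader.load()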
Methods
__init__(path[, include_outputs, ...])
Initialize with a path.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(path: str, include_outputs: bool = False, max_output_length: int = 10, remove_newline: bool = False, traceback: bool = False)[source]¶
Initialize with a path.
Parameters
path – The path to load the notebook from.
include_outputs – Whether to include the outputs of the cell.
Defaults to False.
max_output_length – Maximum length of the output to be displayed.
Defaults to 10.
remove_newline – Whether to remove newlines from the notebook.
Defaults to False.
traceback – Whether to return a traceback of the error.
Defaults to False.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.NotebookLoader.html |
92d3b1a14971-1 |
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using NotebookLoader¶
Jupyter Notebook
Notebook | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.notebook.NotebookLoader.html |
6aa78d83afb1-0 | langchain.document_loaders.parsers.language.language_parser.LanguageParser¶
class langchain.document_loaders.parsers.language.language_parser.LanguageParser(language: Optional[Language] = None, parser_threshold: int = 0)[source]¶
Parse using the respective programming language syntax.
Each top-level function and class in the code is loaded into separate documents.
Furthermore, an extra document is generated, containing the remaining top-level code
that excludes the already segmented functions and classes.
This approach can potentially improve the accuracy of QA models over source code.
Currently, the supported languages for code parsing are Python and JavaScript.
The language used for parsing can be configured, along with the minimum number of
lines required to activate the splitting based on syntax.
Examples
from langchain.text_splitter import Language
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import LanguageParser
loader = GenericLoader.from_filesystem(
"./code",
glob="**/*",
suffixes=[".py", ".js"],
parser=LanguageParser()
)
docs = loader.load()
Example instantiations to manually select the language:
.. code-block:: python
from langchain.text_splitter import Language
loader = GenericLoader.from_filesystem(
"./code",
glob="**/*",
suffixes=[".py"],
parser=LanguageParser(language=Language.PYTHON)
)
Example instantiations to set number of lines threshold:
.. code-block:: python
loader = GenericLoader.from_filesystem(
"./code",
glob="**/*",
suffixes=[".py"],
parser=LanguageParser(parser_threshold=200)
)
Language parser that split code using the respective language syntax.
Parameters
language – If None (default), it will try to infer language from source. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.language_parser.LanguageParser.html |
6aa78d83afb1-1 |
parser_threshold – Minimum lines needed to activate parsing (0 by default).
Methods
__init__([language, parser_threshold])
Language parser that split code using the respective language syntax.
lazy_parse(blob)
Lazy parsing interface.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__(language: Optional[Language] = None, parser_threshold: int = 0)[source]¶
Language parser that split code using the respective language syntax.
Parameters
language – If None (default), it will try to infer language from source.
parser_threshold – Minimum lines needed to activate parsing (0 by default).
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob – Blob instance
Returns
Generator of documents
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents
Examples using LanguageParser¶
Source Code
Set env var OPENAI_API_KEY or load from a .env file | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.language.language_parser.LanguageParser.html |
fd0d822e6766-0 | langchain.document_loaders.airbyte.AirbyteCDKLoader¶
class langchain.document_loaders.airbyte.AirbyteCDKLoader(config: Mapping[str, Any], source_class: Any, stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None)[source]¶
Load with an Airbyte source connector implemented using the CDK.
Initializes the loader.
Parameters
config – The config to pass to the source connector.
source_class – The source connector class.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None.
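A hedged sketch: the connector package (source_github) and its config keys are assumptions for illustration; any Airbyte CDK source class with a matching config spec can be used:
from langchain.document_loaders.airbyte import AirbyteCDKLoader
from source_github.source import SourceGithub  # assumed installed Airbyte CDK connector package
config = {
    "credentials": {"personal_access_token": "<token>"},  # placeholder, per the connector's spec
    "repositories": ["langchain-ai/langchain"],  # placeholder
}
loader = AirbyteCDKLoader(config=config, source_class=SourceGithub, stream_name="issues")
docs = loader.load()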
Attributes
last_state
Methods
__init__(config, source_class, stream_name)
Initializes the loader.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(config: Mapping[str, Any], source_class: Any, stream_name: str, record_handler: Optional[Callable[[Any, Optional[str]], Document]] = None, state: Optional[Any] = None) → None[source]¶
Initializes the loader.
Parameters
config – The config to pass to the source connector.
source_class – The source connector class.
stream_name – The name of the stream to load.
record_handler – A function that takes in a record and an optional id and
returns a Document. If None, the record will be used as the document.
Defaults to None.
state – The state to pass to the source connector. Defaults to None. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteCDKLoader.html |
fd0d822e6766-1 |
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AirbyteCDKLoader¶
Airbyte CDK | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airbyte.AirbyteCDKLoader.html |
3cf6b4dcb5f4-0 | langchain.document_loaders.git.GitLoader¶
class langchain.document_loaders.git.GitLoader(repo_path: str, clone_url: Optional[str] = None, branch: Optional[str] = 'main', file_filter: Optional[Callable[[str], bool]] = None)[source]¶
Load Git repository files.
The Repository can be local on disk available at repo_path,
or remote at clone_url that will be cloned to repo_path.
Currently, supports only text files.
Each document represents one file in the repository. The path points to
the local Git repository, and the branch specifies the branch to load
files from. By default, it loads from the main branch.
Parameters
repo_path – The path to the Git repository.
clone_url – Optional. The URL to clone the repository from.
branch – Optional. The branch to load files from. Defaults to main.
file_filter – Optional. A function that takes a file path and returns
a boolean indicating whether to load the file. Defaults to None.
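A minimal usage sketch; the local path, clone URL, branch name and filter are placeholders:
from langchain.document_loaders import GitLoader
loader = GitLoader(
    repo_path="./example_data/test_repo",
    clone_url="https://github.com/langchain-ai/langchain",
    branch="master",
    file_filter=lambda file_path: file_path.endswith(".py"),
)
docs = loader.load()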
Methods
__init__(repo_path[, clone_url, branch, ...])
param repo_path
The path to the Git repository.
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(repo_path: str, clone_url: Optional[str] = None, branch: Optional[str] = 'main', file_filter: Optional[Callable[[str], bool]] = None)[source]¶
Parameters
repo_path – The path to the Git repository.
clone_url – Optional. The URL to clone the repository from.
branch – Optional. The branch to load files from. Defaults to main.
file_filter – Optional. A function that takes a file path and returns | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.git.GitLoader.html |
3cf6b4dcb5f4-1 | a boolean indicating whether to load the file. Defaults to None.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using GitLoader¶
Git | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.git.GitLoader.html |
d4bcf3124f90-0 | langchain.document_loaders.parsers.txt.TextParser¶
class langchain.document_loaders.parsers.txt.TextParser[source]¶
Parser for text blobs.
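A minimal usage sketch (the file path is a placeholder):
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.txt import TextParser
blob = Blob.from_path("notes.txt")
docs = TextParser().parse(blob)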
Methods
__init__()
lazy_parse(blob)
Lazily parse the blob.
parse(blob)
Eagerly parse the blob into a document or documents.
__init__()¶
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Lazily parse the blob.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environment.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not over-ride this parse method.
Parameters
blob – Blob instance
Returns
List of documents | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.txt.TextParser.html |
54b0a6533dea-0 | langchain.document_loaders.rst.UnstructuredRSTLoader¶
class langchain.document_loaders.rst.UnstructuredRSTLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load RST files using Unstructured.
You can run the loader in one of two modes: “single” and “elements”.
If you use “single” mode, the document will be returned as a single
langchain Document object. If you use “elements” mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
from langchain.document_loaders import UnstructuredRSTLoader
loader = UnstructuredRSTLoader(“example.rst”, mode=”elements”, strategy=”fast”,
)
docs = loader.load()
References
https://unstructured-io.github.io/unstructured/bricks.html#partition-rst
Initialize with a file path.
Parameters
file_path – The path to the file to load.
mode – The mode to use for partitioning. See unstructured for details.
Defaults to “single”.
**unstructured_kwargs – Additional keyword arguments to pass
to unstructured.
Methods
__init__(file_path[, mode])
Initialize with a file path.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Initialize with a file path.
Parameters
file_path – The path to the file to load.
mode – The mode to use for partitioning. See unstructured for details.
Defaults to “single”. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rst.UnstructuredRSTLoader.html |
54b0a6533dea-1 |
**unstructured_kwargs – Additional keyword arguments to pass
to unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredRSTLoader¶
RST | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.rst.UnstructuredRSTLoader.html |
b2d5af655377-0 | langchain.document_loaders.blockchain.BlockchainType¶
class langchain.document_loaders.blockchain.BlockchainType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Enumerator of the supported blockchains.
ETH_MAINNET = 'eth-mainnet'¶
ETH_GOERLI = 'eth-goerli'¶
POLYGON_MAINNET = 'polygon-mainnet'¶
POLYGON_MUMBAI = 'polygon-mumbai'¶
Examples using BlockchainType¶
Blockchain | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.blockchain.BlockchainType.html |
dfa7aac51010-0 | langchain.document_loaders.csv_loader.UnstructuredCSVLoader¶
class langchain.document_loaders.csv_loader.UnstructuredCSVLoader(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Load CSV files using Unstructured.
Like other
Unstructured loaders, UnstructuredCSVLoader can be used in both
“single” and “elements” mode. If you use the loader in “elements”
mode, the CSV file will be a single Unstructured Table element.
If you use the loader in “elements” mode, an HTML representation
of the table will be available in the “text_as_html” key in the
document metadata.
Examples
from langchain.document_loaders.csv_loader import UnstructuredCSVLoader
loader = UnstructuredCSVLoader(“stanley-cups.csv”, mode=”elements”)
docs = loader.load()
Parameters
file_path – The path to the CSV file.
mode – The mode to use when loading the CSV file.
Optional. Defaults to “single”.
**unstructured_kwargs – Keyword arguments to pass to unstructured.
Methods
__init__(file_path[, mode])
param file_path
The path to the CSV file.
lazy_load()
A lazy loader for Documents.
load()
Load file.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str, mode: str = 'single', **unstructured_kwargs: Any)[source]¶
Parameters
file_path – The path to the CSV file.
mode – The mode to use when loading the CSV file.
Optional. Defaults to “single”.
**unstructured_kwargs – Keyword arguments to pass to unstructured.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document]¶
Load file. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.UnstructuredCSVLoader.html |
dfa7aac51010-1 |
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using UnstructuredCSVLoader¶
CSV | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.csv_loader.UnstructuredCSVLoader.html |
db26d478359b-0 | langchain.document_loaders.concurrent.ConcurrentLoader¶
class langchain.document_loaders.concurrent.ConcurrentLoader(blob_loader: BlobLoader, blob_parser: BaseBlobParser, num_workers: int = 4)[source]¶
Load and parse Documents concurrently.
A generic document loader.
Parameters
blob_loader – A blob loader which knows how to yield blobs
blob_parser – A blob parser which knows how to parse blobs into documents
Methods
__init__(blob_loader, blob_parser[, num_workers])
A generic document loader.
from_filesystem(path, *[, glob, exclude, ...])
Create a concurrent generic document loader using a filesystem blob loader.
lazy_load()
Load documents lazily with concurrent parsing.
load()
Load all documents.
load_and_split([text_splitter])
Load all documents and split them into sentences.
__init__(blob_loader: BlobLoader, blob_parser: BaseBlobParser, num_workers: int = 4) → None[source]¶
A generic document loader.
Parameters
blob_loader – A blob loader which knows how to yield blobs
blob_parser – A blob parser which knows how to parse blobs into documents
classmethod from_filesystem(path: Union[str, Path], *, glob: str = '**/[!.]*', exclude: Sequence[str] = (), suffixes: Optional[Sequence[str]] = None, show_progress: bool = False, parser: Union[Literal['default'], BaseBlobParser] = 'default', num_workers: int = 4) → ConcurrentLoader[source]¶
Create a concurrent generic document loader using a
filesystem blob loader.
Parameters
path – The path to the directory to load documents from.
glob – The glob pattern to use to find documents.
suffixes – The suffixes to use to filter documents. If None, all files
matching the glob will be loaded. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.concurrent.ConcurrentLoader.html |
db26d478359b-1 |
exclude – A list of patterns to exclude from the loader.
show_progress – Whether to show a progress bar or not (requires tqdm).
Proxies to the file system loader.
parser – A blob parser which knows how to parse blobs into documents
num_workers – Max number of concurrent workers to use.
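A minimal usage sketch (the directory and glob pattern are placeholders):
from langchain.document_loaders.concurrent import ConcurrentLoader
loader = ConcurrentLoader.from_filesystem("example_data/", glob="**/*.txt", num_workers=4)
docs = loader.load()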
lazy_load() → Iterator[Document][source]¶
Load documents lazily with concurrent parsing.
load() → List[Document]¶
Load all documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load all documents and split them into sentences.
Examples using ConcurrentLoader¶
Concurrent Loader | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.concurrent.ConcurrentLoader.html |
4825a849e695-0 | langchain.document_loaders.docugami.DocugamiLoader¶
class langchain.document_loaders.docugami.DocugamiLoader[source]¶
Bases: BaseLoader, BaseModel
Load from Docugami.
To use, you should have the lxml python package installed.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param access_token: Optional[str] = None¶
The Docugami API access token to use.
param api: str = 'https://api.docugami.com/v1preview1'¶
The Docugami API endpoint to use.
param docset_id: Optional[str] = None¶
The Docugami API docset ID to use.
param document_ids: Optional[Sequence[str]] = None¶
The Docugami API document IDs to use.
param file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = None¶
The local file paths to use.
param min_chunk_size: int = 32¶
The minimum chunk size to use when parsing DGML. Defaults to 32.
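A minimal usage sketch; the access token and docset ID are placeholders:
from langchain.document_loaders import DocugamiLoader
loader = DocugamiLoader(access_token="<docugami-api-token>", docset_id="<docset-id>")
docs = loader.load()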
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶
Duplicate a model, optionally choose which fields to include, exclude and change. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html |
4825a849e695-1 |
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
classmethod from_orm(obj: Any) → Model¶
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html |
4825a849e695-2 |
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod parse_obj(obj: Any) → Model¶
classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶
classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
classmethod update_forward_refs(**localns: Any) → None¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
classmethod validate(value: Any) → Model¶
Examples using DocugamiLoader¶
Docugami | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.docugami.DocugamiLoader.html |
69060f855b88-0 | langchain.document_loaders.obs_directory.OBSDirectoryLoader¶
class langchain.document_loaders.obs_directory.OBSDirectoryLoader(bucket: str, endpoint: str, config: Optional[dict] = None, prefix: str = '')[source]¶
Load from Huawei OBS directory.
Initialize the OBSDirectoryLoader with the specified settings.
Parameters
bucket (str) – The name of the OBS bucket to be used.
endpoint (str) – The endpoint URL of your OBS bucket.
config (dict) – The parameters for connecting to OBS, provided as a dictionary. The dictionary could have the following keys:
- “ak” (str, optional): Your OBS access key (required if get_token_from_ecs is False and bucket policy is not public read).
- “sk” (str, optional): Your OBS secret key (required if get_token_from_ecs is False and bucket policy is not public read).
- “token” (str, optional): Your security token (required if using temporary credentials).
- “get_token_from_ecs” (bool, optional): Whether to retrieve the security token from ECS. Defaults to False if not provided. If set to True, ak, sk, and token will be ignored.
prefix (str, optional) – The prefix to be added to the OBS key. Defaults to “”.
Note
Before using this class, make sure you have registered with OBS and have the necessary credentials. The ak, sk, and endpoint values are mandatory unless get_token_from_ecs is True or the bucket policy is public read. token is required when using temporary credentials.
Example
To create a new OBSDirectoryLoader:
```
config = {
“ak”: “your-access-key”,
“sk”: “your-secret-key”
}
directory_loader = OBSDirectoryLoader(“your-bucket-name”, “your-endpoint”, config, “your-prefix”)
```
Methods | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obs_directory.OBSDirectoryLoader.html |
69060f855b88-1 |
__init__(bucket, endpoint[, config, prefix])
Initialize the OBSDirectoryLoader with the specified settings.
lazy_load()
A lazy loader for Documents.
load()
Load documents.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(bucket: str, endpoint: str, config: Optional[dict] = None, prefix: str = '')[source]¶
Initialize the OBSDirectoryLoader with the specified settings.
Parameters
bucket (str) – The name of the OBS bucket to be used.
endpoint (str) – The endpoint URL of your OBS bucket.
config (dict) – The parameters for connecting to OBS, provided as a dictionary. The dictionary could have the following keys:
- “ak” (str, optional): Your OBS access key (required if get_token_from_ecs is False and bucket policy is not public read).
- “sk” (str, optional): Your OBS secret key (required if get_token_from_ecs is False and bucket policy is not public read).
- “token” (str, optional): Your security token (required if using temporary credentials).
- “get_token_from_ecs” (bool, optional): Whether to retrieve the security token from ECS. Defaults to False if not provided. If set to True, ak, sk, and token will be ignored.
prefix (str, optional) – The prefix to be added to the OBS key. Defaults to “”.
Note
Before using this class, make sure you have registered with OBS and have the necessary credentials. The ak, sk, and endpoint values are mandatory unless get_token_from_ecs is True or the bucket policy is public read. token is required when using temporary credentials.
Example
To create a new OBSDirectoryLoader:
```
config = {
“ak”: “your-access-key”, | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obs_directory.OBSDirectoryLoader.html |
69060f855b88-2 |
“sk”: “your-secret-key”
}
directory_loader = OBSDirectoryLoader(“your-bucket-name”, “your-endpoint”, config, “your-prefix”)
```
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load documents.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using OBSDirectoryLoader¶
Huawei OBS Directory | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.obs_directory.OBSDirectoryLoader.html |
a413e9ed31e6-0 | langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader¶
class langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader(spark_session: Optional[SparkSession] = None, df: Optional[Any] = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]¶
Load PySpark DataFrames.
Initialize with a Spark DataFrame object.
Parameters
spark_session – The SparkSession object.
df – The Spark DataFrame object.
page_content_column – The name of the column containing the page content.
Defaults to “text”.
fraction_of_memory – The fraction of memory to use. Defaults to 0.1.
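A minimal usage sketch; the CSV path and the "Team" column are assumptions for illustration:
from pyspark.sql import SparkSession
from langchain.document_loaders import PySparkDataFrameLoader
spark = SparkSession.builder.getOrCreate()
df = spark.read.csv("example_data/teams.csv", header=True)
loader = PySparkDataFrameLoader(spark, df, page_content_column="Team")
docs = loader.load()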
Methods
__init__([spark_session, df, ...])
Initialize with a Spark DataFrame object.
get_num_rows()
Gets the number of "feasible" rows for the DataFrame
lazy_load()
A lazy loader for document content.
load()
Load from the dataframe.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(spark_session: Optional[SparkSession] = None, df: Optional[Any] = None, page_content_column: str = 'text', fraction_of_memory: float = 0.1)[source]¶
Initialize with a Spark DataFrame object.
Parameters
spark_session – The SparkSession object.
df – The Spark DataFrame object.
page_content_column – The name of the column containing the page content.
Defaults to “text”.
fraction_of_memory – The fraction of memory to use. Defaults to 0.1.
get_num_rows() → Tuple[int, int][source]¶
Gets the number of “feasible” rows for the DataFrame
lazy_load() → Iterator[Document][source]¶
A lazy loader for document content.
load() → List[Document][source]¶ | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader.html |
a413e9ed31e6-1 |
Load from the dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using PySparkDataFrameLoader¶
PySpark DataFrame Loader | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.pyspark_dataframe.PySparkDataFrameLoader.html |
118a2e999139-0 | langchain.document_loaders.base.BaseLoader¶
class langchain.document_loaders.base.BaseLoader[source]¶
Interface for Document Loader.
Implementations should implement the lazy-loading method using generators
to avoid loading all Documents into memory at once.
The load method will remain as is for backwards compatibility, but its
implementation should be just list(self.lazy_load()).
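A minimal sketch of a custom loader that follows this interface (the class and its data are illustrative only):
from typing import Iterator, List
from langchain.schema import Document
from langchain.document_loaders.base import BaseLoader

class InMemoryLoader(BaseLoader):
    """Toy loader that wraps a list of strings."""

    def __init__(self, texts: List[str]):
        self.texts = texts

    def lazy_load(self) -> Iterator[Document]:
        # Yield one Document per input string instead of building the full list up front.
        for text in self.texts:
            yield Document(page_content=text)

    def load(self) -> List[Document]:
        return list(self.lazy_load())

docs = InMemoryLoader(["hello", "world"]).load()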
Methods
__init__()
lazy_load()
A lazy loader for Documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__()¶
lazy_load() → Iterator[Document][source]¶
A lazy loader for Documents.
abstract load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document][source]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using BaseLoader¶
Indexing | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base.BaseLoader.html |
d05143b52603-0 | langchain.document_loaders.base_o365.fetch_mime_types¶
langchain.document_loaders.base_o365.fetch_mime_types(file_types: Sequence[_FileType]) → Dict[str, str][source]¶
Fetch the mime types for the specified file types. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.base_o365.fetch_mime_types.html |
58061a06d61f-0 | langchain.document_loaders.airtable.AirtableLoader¶
class langchain.document_loaders.airtable.AirtableLoader(api_token: str, table_id: str, base_id: str)[source]¶
Load the Airtable tables.
Initialize with API token and the IDs for table and base
Attributes
api_token
Airtable API token.
table_id
Airtable table ID.
base_id
Airtable base ID.
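A minimal usage sketch; the token and IDs are placeholders:
from langchain.document_loaders import AirtableLoader
loader = AirtableLoader("<api-token>", "<table-id>", "<base-id>")
docs = loader.load()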
Methods
__init__(api_token, table_id, base_id)
Initialize with API token and the IDs for table and base
lazy_load()
Lazy load Documents from table.
load()
Load Documents from table.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(api_token: str, table_id: str, base_id: str)[source]¶
Initialize with API token and the IDs for table and base
lazy_load() → Iterator[Document][source]¶
Lazy load Documents from table.
load() → List[Document][source]¶
Load Documents from table.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using AirtableLoader¶
Airtable | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.airtable.AirtableLoader.html |
eb4889a27a3e-0 | langchain.document_loaders.dataframe.BaseDataFrameLoader¶
class langchain.document_loaders.dataframe.BaseDataFrameLoader(data_frame: Any, *, page_content_column: str = 'text')[source]¶
Initialize with dataframe object.
Parameters
data_frame – DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
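A minimal sketch using the pandas-based DataFrameLoader subclass (the frame contents are illustrative); columns other than page_content_column become document metadata:
import pandas as pd
from langchain.document_loaders import DataFrameLoader
df = pd.DataFrame({"text": ["first document", "second document"], "author": ["a", "b"]})
loader = DataFrameLoader(df, page_content_column="text")
docs = loader.load()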
Methods
__init__(data_frame, *[, page_content_column])
Initialize with dataframe object.
lazy_load()
Lazy load records from dataframe.
load()
Load full dataframe.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(data_frame: Any, *, page_content_column: str = 'text')[source]¶
Initialize with dataframe object.
Parameters
data_frame – DataFrame object.
page_content_column – Name of the column containing the page content.
Defaults to “text”.
lazy_load() → Iterator[Document][source]¶
Lazy load records from dataframe.
load() → List[Document][source]¶
Load full dataframe.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.dataframe.BaseDataFrameLoader.html |
5086be9adfe7-0 | langchain.document_loaders.tencent_cos_directory.TencentCOSDirectoryLoader¶
class langchain.document_loaders.tencent_cos_directory.TencentCOSDirectoryLoader(conf: Any, bucket: str, prefix: str = '')[source]¶
Load from Tencent Cloud COS directory.
Initialize with COS config, bucket and prefix.
:param conf(CosConfig): COS config.
:param bucket(str): COS bucket.
:param prefix(str): prefix.
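A minimal usage sketch (requires the cos-python-sdk-v5 package; region, keys, bucket and prefix are placeholders):
from qcloud_cos import CosConfig
from langchain.document_loaders import TencentCOSDirectoryLoader
conf = CosConfig(Region="ap-shanghai", SecretId="<secret-id>", SecretKey="<secret-key>")
loader = TencentCOSDirectoryLoader(conf=conf, bucket="examplebucket-1250000000", prefix="docs/")
docs = loader.load()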
Methods
__init__(conf, bucket[, prefix])
Initialize with COS config, bucket and prefix.
lazy_load()
Load documents.
load()
Load data into Document objects.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(conf: Any, bucket: str, prefix: str = '')[source]¶
Initialize with COS config, bucket and prefix.
:param conf(CosConfig): COS config.
:param bucket(str): COS bucket.
:param prefix(str): prefix.
lazy_load() → Iterator[Document][source]¶
Load documents.
load() → List[Document][source]¶
Load data into Document objects.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using TencentCOSDirectoryLoader¶
Tencent COS Directory | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.tencent_cos_directory.TencentCOSDirectoryLoader.html |
801eb8a322a1-0 | langchain.document_loaders.parsers.docai.DocAIParser¶
class langchain.document_loaders.parsers.docai.DocAIParser(*, client: Optional[DocumentProcessorServiceClient] = None, location: Optional[str] = None, gcs_output_path: Optional[str] = None, processor_name: Optional[str] = None)[source]¶
Google Cloud Document AI parser.
For a detailed explanation of Document AI, refer to the product documentation.
https://cloud.google.com/document-ai/docs/overview
Initializes the parser.
Parameters
client – a DocumentProcessorServiceClient to use
location – a Google Cloud location where a Document AI processor is located
gcs_output_path – a path on Google Cloud Storage to store parsing results
processor_name – full resource name of a Document AI processor or processor
version
You should provide either a client or a location (and then a client would be instantiated).
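A hedged sketch; the processor name, bucket paths and region are placeholders, and parsing runs as a Document AI batch operation behind the scenes:
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.docai import DocAIParser
parser = DocAIParser(
    location="us",
    processor_name="projects/<project-id>/locations/us/processors/<processor-id>",
    gcs_output_path="gs://<bucket>/docai-output/",
)
blob = Blob(path="gs://<bucket>/input/report.pdf")
docs = list(parser.lazy_parse(blob))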
Methods
__init__(*[, client, location, ...])
Initializes the parser.
batch_parse(blobs[, gcs_output_path, ...])
Parses a list of blobs lazily.
docai_parse(blobs, *[, gcs_output_path, ...])
Runs Google Document AI PDF Batch Processing on a list of blobs.
get_results(operations)
is_running(operations)
lazy_parse(blob)
Parses a blob lazily.
online_process(blob[, ...])
Parses a blob lazily using online processing.
operations_from_names(operation_names)
Initializes Long-Running Operations from their names.
parse(blob)
Eagerly parse the blob into a document or documents.
parse_from_results(results) | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.docai.DocAIParser.html |
801eb8a322a1-1 |
__init__(*, client: Optional[DocumentProcessorServiceClient] = None, location: Optional[str] = None, gcs_output_path: Optional[str] = None, processor_name: Optional[str] = None)[source]¶
Initializes the parser.
Parameters
client – a DocumentProcessorServiceClient to use
location – a Google Cloud location where a Document AI processor is located
gcs_output_path – a path on Google Cloud Storage to store parsing results
processor_name – full resource name of a Document AI processor or processor
version
You should provide either a client or a location (in which case a client will be instantiated).
batch_parse(blobs: Sequence[Blob], gcs_output_path: Optional[str] = None, timeout_sec: int = 3600, check_in_interval_sec: int = 60) → Iterator[Document][source]¶
Parses a list of blobs lazily.
Parameters
blobs – a list of blobs to parse.
gcs_output_path – a path on Google Cloud Storage to store parsing results.
timeout_sec – a timeout to wait for Document AI to complete, in seconds.
check_in_interval_sec – the interval, in seconds, to wait between checks on
whether the parsing operations have completed.
This is a long-running operation. A recommended pattern is to decouple parsing from creating LangChain Documents:
>>> operations = parser.docai_parse(blobs, gcs_path)
>>> parser.is_running(operations)
You can get the operation names and save them:
>>> names = [op.operation.name for op in operations]
And when all operations are finished, you can use their results:
>>> operations = parser.operations_from_names(operation_names)
>>> results = parser.get_results(operations)
>>> docs = parser.parse_from_results(results) | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.docai.DocAIParser.html |
801eb8a322a1-2 | >>> results = parser.get_results(operations)
>>> docs = parser.parse_from_results(results)
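Putting those snippets together, a sketch of the decoupled workflow; the parser configuration, blob path, bucket, and polling interval are all placeholders:
import time
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.docai import DocAIParser
parser = DocAIParser(
    location="us",
    processor_name="projects/PROJECT_NUMBER/locations/us/processors/PROCESSOR_ID",
    gcs_output_path="gs://BUCKET_NAME/docai-output/",
)
blobs = [Blob(path="gs://BUCKET_NAME/docs/report.pdf")]
# Kick off batch processing and keep only the operation names.
operations = parser.docai_parse(blobs, gcs_output_path="gs://BUCKET_NAME/docai-output/")
operation_names = [op.operation.name for op in operations]
# Later (possibly in another process), rehydrate the operations and wait for completion.
operations = parser.operations_from_names(operation_names)
while parser.is_running(operations):
    time.sleep(60)
results = parser.get_results(operations)
docs = list(parser.parse_from_results(results))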
docai_parse(blobs: Sequence[Blob], *, gcs_output_path: Optional[str] = None, processor_name: Optional[str] = None, batch_size: int = 1000, enable_native_pdf_parsing: bool = True, field_mask: Optional[str] = None) → List[Operation][source]¶
Runs Google Document AI PDF Batch Processing on a list of blobs.
Parameters
blobs – a list of blobs to be parsed
gcs_output_path – a path (folder) on GCS to store results
processor_name – name of a Document AI processor.
batch_size – number of documents per batch
enable_native_pdf_parsing – a config option for the parser
field_mask – a comma-separated list of which fields to include in the
Document AI response.
suggested: “text,pages.pageNumber,pages.layout”
Document AI has a 1000-file limit per batch, so larger batches need
to be split into multiple requests.
Batch processing is an asynchronous long-running operation,
and results are stored in an output GCS bucket.
get_results(operations: List[Operation]) → List[DocAIParsingResults][source]¶
is_running(operations: List[Operation]) → bool[source]¶
lazy_parse(blob: Blob) → Iterator[Document][source]¶
Parses a blob lazily.
Parameters
blob – a Blob to parse
This is a long-running operation. A recommended approach is to batch documents together and use the batch_parse() method.
online_process(blob: Blob, enable_native_pdf_parsing: bool = True, field_mask: Optional[str] = None, page_range: Optional[List[int]] = None) → Iterator[Document][source]¶
Parses a blob lazily using online processing.
Parameters | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.docai.DocAIParser.html |
801eb8a322a1-3 | Parses a blob lazily using online processing.
Parameters
blob – a blob to parse.
enable_native_pdf_parsing – enable pdf embedded text extraction
field_mask – a comma-separated list of which fields to include in the
Document AI response.
suggested: “text,pages.pageNumber,pages.layout”
page_range – list of page numbers to parse. If None,
the entire document will be parsed.
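For a single short document, a sketch of online processing; the file path and page numbers are illustrative, and the parser is assumed to be configured as in the instantiation sketch above:
from langchain.document_loaders.blob_loaders import Blob
blob = Blob.from_path("invoice.pdf")  # illustrative local PDF
docs = list(parser.online_process(blob, page_range=[1, 2]))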
operations_from_names(operation_names: List[str]) → List[Operation][source]¶
Initializes Long-Running Operations from their names.
parse(blob: Blob) → List[Document]¶
Eagerly parse the blob into a document or documents.
This is a convenience method for interactive development environments.
Production applications should favor the lazy_parse method instead.
Subclasses should generally not override this parse method.
Parameters
blob – Blob instance
Returns
List of documents
parse_from_results(results: List[DocAIParsingResults]) → Iterator[Document][source]¶
Examples using DocAIParser¶
docai.md | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.parsers.docai.DocAIParser.html |
708814a4bd57-0 | langchain.document_loaders.word_document.Docx2txtLoader¶
class langchain.document_loaders.word_document.Docx2txtLoader(file_path: str)[source]¶
Load DOCX file using docx2txt and chunks at character level.
By default the loader expects a local file; if the path is a web URL, the file is
downloaded to a temporary file, loaded from there, and the temporary file is cleaned up afterwards.
Initialize with file path.
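A minimal usage sketch; the file path is a placeholder (a web URL would also be accepted):
from langchain.document_loaders import Docx2txtLoader
loader = Docx2txtLoader("example_data/fake.docx")  # placeholder local path
docs = loader.load()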
Methods
__init__(file_path)
Initialize with file path.
lazy_load()
A lazy loader for Documents.
load()
Load given path as single page.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(file_path: str)[source]¶
Initialize with file path.
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load given path as single page.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
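To control chunking explicitly, a splitter can be passed to load_and_split; a sketch with an assumed file path and chunk size:
from langchain.document_loaders import Docx2txtLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
loader = Docx2txtLoader("example_data/fake.docx")  # placeholder local path
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = loader.load_and_split(text_splitter=splitter)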
Examples using Docx2txtLoader¶
Microsoft Word | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.word_document.Docx2txtLoader.html |
b4b67ce2cdde-0 | langchain.document_loaders.reddit.RedditPostsLoader¶
class langchain.document_loaders.reddit.RedditPostsLoader(client_id: str, client_secret: str, user_agent: str, search_queries: Sequence[str], mode: str, categories: Sequence[str] = ['new'], number_posts: Optional[int] = 10)[source]¶
Load Reddit posts.
Read posts on a subreddit.
First, you need to go to
https://www.reddit.com/prefs/apps/
and create your application
Initialize with client_id, client_secret, user_agent, search_queries, mode, categories, number_posts.
Example: https://www.reddit.com/r/learnpython/
Parameters
client_id – Reddit client id.
client_secret – Reddit client secret.
user_agent – Reddit user agent.
search_queries – The search queries.
mode – The mode.
categories – The categories. Default: [“new”]
number_posts – The number of posts. Default: 10
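A minimal instantiation sketch; the credentials are placeholders, and "subreddit" is assumed here to be a valid mode value (see the Reddit integration page):
from langchain.document_loaders import RedditPostsLoader
loader = RedditPostsLoader(
    client_id="YOUR_CLIENT_ID",          # placeholder Reddit app credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="extractor by u/your_username",
    search_queries=["LangChain"],
    mode="subreddit",                    # assumed mode value
    categories=["new", "hot"],
    number_posts=10,
)
docs = loader.load()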
Methods
__init__(client_id, client_secret, ...[, ...])
Initialize with client_id, client_secret, user_agent, search_queries, mode, categories, number_posts.
lazy_load()
A lazy loader for Documents.
load()
Load reddits.
load_and_split([text_splitter])
Load Documents and split into chunks.
__init__(client_id: str, client_secret: str, user_agent: str, search_queries: Sequence[str], mode: str, categories: Sequence[str] = ['new'], number_posts: Optional[int] = 10)[source]¶
Initialize with client_id, client_secret, user_agent, search_queries, mode, categories, number_posts.
Example: https://www.reddit.com/r/learnpython/
Parameters
client_id – Reddit client id.
client_secret – Reddit client secret.
user_agent – Reddit user agent.
search_queries – The search queries. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.reddit.RedditPostsLoader.html |
b4b67ce2cdde-1 | user_agent – Reddit user agent.
search_queries – The search queries.
mode – The mode.
categories – The categories. Default: [“new”]
number_posts – The number of posts. Default: 10
lazy_load() → Iterator[Document]¶
A lazy loader for Documents.
load() → List[Document][source]¶
Load reddits.
load_and_split(text_splitter: Optional[TextSplitter] = None) → List[Document]¶
Load Documents and split into chunks. Chunks are returned as Documents.
Parameters
text_splitter – TextSplitter instance to use for splitting documents.
Defaults to RecursiveCharacterTextSplitter.
Returns
List of Documents.
Examples using RedditPostsLoader¶
Reddit | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.reddit.RedditPostsLoader.html |
f6d7b5601415-0 | langchain.document_loaders.hn.HNLoader¶
class langchain.document_loaders.hn.HNLoader(web_path: Union[str, Sequence[str]] = '', header_template: Optional[dict] = None, verify_ssl: bool = True, proxies: Optional[dict] = None, continue_on_failure: bool = False, autoset_encoding: bool = True, encoding: Optional[str] = None, web_paths: Sequence[str] = (), requests_per_second: int = 2, default_parser: str = 'html.parser', requests_kwargs: Optional[Dict[str, Any]] = None, raise_for_status: bool = False, bs_get_text_kwargs: Optional[Dict[str, Any]] = None, bs_kwargs: Optional[Dict[str, Any]] = None, session: Any = None)[source]¶
Load Hacker News data.
It loads data from either main page results or the comments page.
Initialize loader.
Parameters
web_paths – Web paths to load from.
requests_per_second – Max number of concurrent requests to make.
default_parser – Default parser to use for BeautifulSoup.
requests_kwargs – kwargs for requests
raise_for_status – Raise an exception if http status code denotes an error.
bs_get_text_kwargs – kwargs for beautifulsoup4 get_text
bs_kwargs – kwargs for beautifulsoup4 web page parsing
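A minimal usage sketch; the item URL is illustrative:
from langchain.document_loaders import HNLoader
loader = HNLoader("https://news.ycombinator.com/item?id=34817881")  # illustrative post URL
docs = loader.load()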
Attributes
web_path
Methods
__init__([web_path, header_template, ...])
Initialize loader.
aload()
Load text from the URLs in web_path asynchronously into Documents.
fetch_all(urls)
Fetch all urls concurrently with rate limiting.
lazy_load()
Lazy load text from the url(s) in web_path.
load()
Get important HN webpage information.
load_and_split([text_splitter])
Load Documents and split into chunks.
load_comments(soup_info)
Load comments from a HN post. | lang/api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.hn.HNLoader.html |